I was responsible for designing, leading, and building the frontend for an AWS service. One of the challenges was obtaining useful feedback from a diverse range of people. During the product definition phase, the majority of the feedback, input, and feature priority came from customers who were planning to dedicate a large budget to using said service. I often felt that stakeholder decisions sacrificed usability for feasibility.
Regardless, it was the responsibility of my service team to seek and obtain feedback, input, and data points that could help inform our decisions. But from what I witnessed, it only went as far as validating our existing concepts and user personas of how people use AWS services. Going beyond that was seen as unnecessary.
The universal thinking within AWS is that people will ultimately use the API/CLI/SDK, so investment in the console is on a case-by-case basis. Some services have dedicated engineers or teams focused on the console, but most don’t.
I’m proud of what I built. I hope that my UI decisions and focus on usability are benefiting the customers of that service that I helped build.
A little-known fact: the AWS console has a feedback tool (look in the footer) that sends feedback straight to the service team. I encourage you to submit your thoughts, ideas, and feedback through that tool. There are people and service teams who value that feedback.
I talked to Jassy after his keynote in 2018: “Your message says AWS is for ‘builders’. Why do you keep saying ‘just click and ...’ instead of ‘just call the API and’?”
In short, to your point: AWS is for builders... who pay. And right now all the growth is in enterprise, where we don’t know how to make API calls from a command line.
We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.
So I think he’s right, that’s the dominant use case by dollar count and head count, and he’s speaking to those deciders.
At the same time, I think it’s terrible when capabilities show up in the console first or only, as the infra-as-code builders can’t code infra and services through the console.
So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that.
And then your bean counters are going to be rightfully confused about why they are spending so much more on infrastructure when “they moved to the cloud” without changing their people and/or processes.
But then again, they probably listened to some old-school net ops folks who watched one ACloudGuru video, passed a multiple-choice AWS certification, and called themselves “consultants” when all they really were was a bunch of “lift and shifters”.
The console should be for exploration/discovery, and if you're actually building production infrastructure by pointing and clicking, well, shame on you.
In fact, AWS's HSM devices intentionally don't have an API, as a "security feature."
I work pretty extensively with enterprise companies that are on AWS, and most make significant use of the APIs and command line. Lots of these companies are ones that I am helping move to AWS, and their teams are frequently excited at being able to utilize the command line and API as much as possible.
>We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.
This also sounds pretty crazy to me. It's not a situation I've ever spoken to anyone in, and quite frankly: If your security and networking teams are unable to figure out how to open access to API endpoints that are all documented, you need new people on those teams. It's also certainly possible to proxy the API and command line calls to these endpoints, as well.
>So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that
I 100% agree with this, though. I want APIs for everything, but a lot of people like the console for discoverability and gaining familiarity - not everyone can grok what something is from reading the API documentation as they can from poking at it with the console, even if they ultimately do end up managing it elsewhere. Build great APIs, build a great console on top of those APIs, and everyone is better off for it.
For all of the major data leaks from S3 buckets, I suspect the existence (and persistence) of these firewall rules across the industry is a principal reason why there haven't been significantly more of these leaks.
Adopt an AWS service through the console. Then discover advanced feature [X] can only be done from the CLI via APIs.
( ͡° ͜ʖ ͡°) - But still 99% of tutorials and documentation refer to the UI.
Assuming everyone, even extremely experienced AWS users like me, will just use the CLI seems like a mistake.
The only time I find the CLI useful is for S3.
1. Each individual service in AWS may be perfectly well designed, but there are now about 5000 services in AWS, which means there's 5000^2 possible interactions between services. Services interact in strange ways, and there's no visibility (and no documentation) into exactly how. You can write 5000 bug-free functions, but that doesn't mean you'll end up with a bug-free program.
2. The craftsmanship that goes into each element of the AWS console is poor. Controls don't work like I expect, and don't work like similar controls elsewhere. Error messages are terrible, or missing, and don't give any clue what is actually going on, or what secret AWS-specific trick I need to use to fix it. I've wasted hours of my life on those spinners because it's not even clear if an action will occur right away, in 30 seconds, or 30 minutes. What is one supposed to do when they click a button, wait a few minutes, go to lunch, and come back to see "at least one of the environment termination workflows failed"?
3. The documentation and support is lousy. I've asked a few questions on AWS's own forums, and never gotten any response at all. The above error message appears in exactly one forum post, and AWS finally got back to them after 2 weeks, and it was all done via PM so I learned nothing from it. I've used the 'Feedback' button, and when I get a reply, it feels like some combination of "it's your fault" and "you should have googled harder".
> designing, leading, and building the frontend for an AWS service
Designing the frontend for an AWS service doesn't help with the biggest problems. It's like designing a city by designing apartments and offices, with no thought given to roads or signs.
> The universal thinking within AWS is that people will ultimately use the API/CLI/SDK.
I can't understand this. If someone can't get the web console to work, they're not going to say "I know, I'll just write everything by hand with the API instead". The web console is essentially your landing page and your trial combined. Do all your "personas" consist of people who build for the web but never use the web? Or who try a service, and when they can't get it to work, they double down on it?
As a personal anecdote, my first interaction with AWS was trying to adjust the size of some Elasticsearch disks. Not knowing better, I tried to do it through the UI, only to find some crazy inconsistencies where the tooltip would say to type any size between 5 and 50 GB while the current value was 100 GB. Even if you clicked "apply" with the current value of 100 you'd get an error message. I tried different browsers and it seemed to be a browser-specific issue.
After that I delved into the terraform that was used to provision all our AWS resources and I haven't looked back since. Apart from the obvious benefits of keeping your infrastructure as code and automation etc, terraform actually helped me understand how all the different services we had worked together and allowed me to get a grasp of our infrastructure layout quicker.
I would seriously discourage anyone from using the console for anything other than searching logs or managing DNS records (terraform is a bit flaky on that regard)
I would like to give Azure or Google a try, but neither seems to make it easy to transfer petabytes.
Both seem to be bugs, and a single user report should be sufficient to identify and fix the issue. I think you make it look more complicated than it is.
Holy mother of god, the search on it is horrific and simply doesn't work. Heartfelt begging through the feedback tool goes unanswered. I have offered money, firstborn children, sacrificial goats, virtually everything. But the search is still broken :( I've had to scroll through a hundred pages of parameters to find things.
- severe unchangeable, undocumented limits before you get throttled. Throttling is so bad that if you have too many parameter store resources in your CloudFormation template it will start causing errors because CF is trying to call the API too quickly - the only way around it is to use DependsOn and chain the creation.
- no way of creating an encrypted value with CF without a custom resource.
We ended up just using DynamoDB for config and a custom CloudFormation resource to create values in it.
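For anyone hitting the same throttling, the DependsOn workaround mentioned above looks roughly like this. A minimal sketch; the resource and parameter names are made up, and the chain forces CloudFormation to create the parameters one at a time instead of bursting the SSM API:

```yaml
Resources:
  ParamA:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /app/feature-a
      Type: String
      Value: "on"
  ParamB:
    Type: AWS::SSM::Parameter
    DependsOn: ParamA        # serializes creation: B waits for A
    Properties:
      Name: /app/feature-b
      Type: String
      Value: "off"
  ParamC:
    Type: AWS::SSM::Parameter
    DependsOn: ParamB        # and C waits for B
    Properties:
      Name: /app/timeout-ms
      Type: String
      Value: "5000"
```

The obvious downside is that stack creation time grows linearly with the number of parameters, which is exactly the kind of workaround that shouldn't be necessary.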
You should never depend on Parameter Store as a reliable key/value store for configuration.
CodePipeline is pretty bad too. There is no way you can create a cross account code pipeline from the console.
It always felt like each product did their own UX, given all the various inconsistencies between different areas. I don't have any examples off-hand, but anyone who's used it would probably agree with me.
For the record, I think the AWS GUI is sufficient, but not very good. If you log in to GCP, see that feedback button in the upper right on each page? Product managers have emailed me back asking for more information, or explaining features, when I've used that feedback button.
Nowadays there's a workflow to ship new consoles and big features that require UI changes but there are still many consoles built on the legacy design system and designing or improving those is pretty hard. The right decision is to migrate those consoles to the design system but that is a painful process.
Any truth to this sense I get?
Security Groups were initially an EC2 only concept. You couldn't write security groups for SQS or S3, and they came about alongside EC2.
Obviously EC2 is no longer the only service that utilizes security groups, but that's an artifact of when it was.
(I'm not saying this is how it should be, just answering the 'why' part of the question ;))
What is odd is that during the Chicago summit one of the presenters explicitly said that most of their customers use the UI instead of API/automation. I don't recall the percentage but it was higher than I imagined.
- Create account. Enter credit card details, but verification SMS never shows up. Ask for help.
- I get called at night (I'm abroad) by an American service employee, we do verification over the phone.
- Try to get the hang of things myself. Lost in a swamp of different UIs. The names of products don't clarify what they do, so you first need to learn to speak AWS, which is akin to using a chain of 5 dictionaries to learn a single language.
- Do the tutorials. Tutorials are poorly written, in that they take you by the hand and make you do stuff with no idea of what you are actually doing (Oh, I just spun up a load balancer? What is that and how does it work?).
- Do more tutorials. Tutorials are badly outdated. Now you have a hold-your-hand tutorial leading you through the swamp, but at every simple step you bump your knee against a UI element or layout that does not exist in the tutorial. It makes you feel like you wasted your time, and that no one at AWS is even aware that tutorials may need updating when one design department gets the urge to justify its spending with a redesign.
- Give up and search for recent books or video courses. Anything older than 3-4 years is outdated (either the UI's have changed, deprecated, or new products have been added).
- Receive an email in the middle of the night: You've hit 80% of your free usage plan. Log in. Click around for 20 minutes, until I find the load balancer is still up (weird, could have sworn I spun that entire tutorial down). Kill it, go back to sleep.
- Next night, new email: You've gone $3.24 over your free budget. Please pay. 30 minutes later: We've detected unusual activity on your account. 1 hour later: Your account has been deactivated. AWS takes fraud and non-payment very seriously.
Now I need a new phone number/name/address to create a new account. I am always anxious that AWS will charge me for something that I don't want, and I can't find the UI that shows all the running tutorial stuff that I really don't want to pay for. I know the UI is unintuitive, inconsistent, and out of sync with both the technical and tutorial writers. And I know that learning AWS consists of learning where tutorials and books are outdated, or stumbling around until you find the correct sequence of steps in a "3 minutes max." tutorial.
AWS has grown fat and lazy. The lack of design and onboarding consistency is typical for a company of that size. Outdated tutorials show a lack of inter-team communication, and seem to indicate that no one at AWS reruns the onboarding tutorials every month so they can know what their customers are complaining about (or why customers, like me, try to shun their mega-presence).
(EDIT: The order of my experiences may be a bit jumbled. Sorry. More constructive feedback: 1) I'd want a safe tutorial environment, with no (perceived) risk of having to pay for dummy services. 2) I want the tutorial writer to have the customer's best interest in mind: "For a smaller site, load balancing may be overkill, and can double your hosting costs for no tangible gains." beats "Hey Mark, we need more awareness and usage on the new load balancer. I need you to write a stand-alone tutorial, and add the load balancer to the sample web page tutorial." 3) Someone responsible for updating the tutorials (even if: "This step is deprecated. Please hold on for a correction") 4) A unified and consistent UI and UX. Scanning, searching, sorting, etc. should work without making me think, I don't want a different UI model for every service. Someone or some team to create the same recipes and boundaries for the different 2-pizza teams, so I don't get a pizza UI with all possible ingredients.)
How was this a good idea? I’m horribly inexperienced with modern web development but I know the rest of the stack pretty well - backend, databases, AWS networking and most of their standard technologies, CI/CD etc. When I was responsible for setting up everything for a green field project, I pulled in someone who was much better than I was for the front end even though I could have muddled my way through. Why would I take the risk?
Meanwhile over in GCloud, almost /any/ operation whatsoever will spam you with an endless series of progress meters, meaningless notification popups, laptop CPU fans on, 3-4 second delays to refresh the page (because most of their pages lack a refresh button), etc., and the experience is uniform regardless of whatever tool you're using.
The uniform design itself was clearly done by a UI design team with little experience of the workflows involved during a typical day. For example, editing some detail of a VM requires 2 clicks and at least one (two?) slow RPCs, with the first click landing on the instance name and any 'show details' button completely absent from the primary actions bar along the top. The right-hand properties bar in GCloud is also AFAIK 100% useless; I've yet to see any subsection that made heavy use of it.
Underengineering beats massive overengineering? Something like that. Either way, the GCloud UI definitely pushes me to AWS for quick tasks when a choice is available, because the GCloud UI is the antithesis of quick.
Do you really prefer Cloudwatch to Stackdriver? How about having a Lambda being triggered both on SNS messages and HTTP requests (setting up a proxy) and having that Lambda deployed with a CD pipeline - compared to doing the same with Cloud Functions?
But I guess it also really boils down to which products you make the most use of, how, and at what scale. Clearly we have different preferences.
I guess I am not seeing the bad parts you do because 1) apart from DNS and some IAM, most infra changes are done from Terraform or the CLI, and 2) I have a pretty high-end workstation.
I'll always prefer the ability to quickly hit refresh over waiting 4 seconds because I made the mistake of ctrl-clicking a link, and now a new tab is 'booting'. But I guess this preference depends on how quickly one expects to be able to get their job done.
Mind you once you get to a certain point using the APIs is better.
I spent 3 hours trying to get a bucket to host a static single page of html and failed completely.
I use Amazon Polly. I wanted to know how many characters I was using each month. I spent 2 hours searching through hundreds of pages and literally couldn't find that information.
I thought of trying to start a little text-to-speech service for dyslexics to make it easy to use Polly, but one of the main things putting me off is having to get my arms mangled in the AWS machine.
The whole thing is so totally maddening. I would love to be able to sit in on their meetings where they talk about usability, what do they say? Do they think everything is fine? Do they know it's totally broken and don't care? Are they unable to hire a UX designer? What is the problem?
From my experience, AWS has up-to-date documentation pages for everything. And when something is hard to understand from their docs, you can find really everything you need by searching on Google. Literally everything. And if you ask on the support forum, you'll be provided with an answer in a relatively short time. Competent answers, most of the time.
So, what's the alternative to the ugly AWS web console? Learn the basic concepts, and maybe use the aws cli.
Speaking about the bucket -> https://medium.com/@P_Lessing/single-page-apps-on-aws-part-1...
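For the static page the parent struggled with, the API path really is short. A hedged boto3 sketch; the bucket name is made up, the region is hardcoded for illustration, and on newer accounts you'd also need to relax the public-access block and attach a public-read bucket policy before anonymous visitors can see the page:

```python
def website_endpoint(bucket, region):
    # Pure helper: the classic S3 website endpoint format for
    # dash-style regions (some newer regions use a dot instead).
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

def host_page(bucket="example-spa-bucket", region="us-east-1"):
    import boto3  # deferred so website_endpoint works without boto3 installed
    s3 = boto3.client("s3", region_name=region)
    # us-east-1 needs no CreateBucketConfiguration; other regions do.
    s3.create_bucket(Bucket=bucket)
    # Turn on static website hosting and upload a single page.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
    )
    s3.put_object(
        Bucket=bucket, Key="index.html",
        Body=b"<h1>hello</h1>", ContentType="text/html",
    )
    return website_endpoint(bucket, region)
```

That said, discovering those three calls (plus the policy/ACL dance) from the console or the docs is exactly the multi-hour hunt described upthread.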
The only complaint I had was when I needed to rapidly get like 15 N-Series GPU instances and it took like two months. At the time they were new so weren't allocating them as quickly as they do now. Amazon was way faster for us to get GPUs running on - but this was over two years ago now so I'm not sure if that's still the case.
At this point I have to wonder if this is intentional. It makes it difficult to escape if all you know is AWS's reality abstractions.
I guess each AWS service gets named its own thing as it is developed, and those names just stick forever. It is maddening. Reading the docs out loud often sounds like a weird technical Dr. Seuss. I've never looked at Azure, but since Microsoft has been the king of making up their own names for things, I expect it to be just as bad.
I wonder how this naming issue comes about. If AWS devs and early adopters are doing this as their first big rodeo, then everything might seem new and they get to invent names - as if the computing were new. But after these devs and early adopters work on 2 or 5 of these kinds of projects in different environments they will see that special naming is a mistake, because it makes it incredibly hard to communicate about the same computing tasks using dozens of different names and acronyms.
I know computing requires continuous learning, but specialized naming tends to obfuscate higher order abstractions. And if you grok the higher order abstraction and want to dev a system, then the naming and minute computing differences make development on any given service harder than it needs to be because it requires learning specialized lingo. As human beings we need to get much better at getting to standard names and conventions faster. It will speed all our development.
When I started learning AWS, it was quite simple mapping what I’ve done for over 20 years on prem from both a development and networking perspective.
As for documentation: I don't think either AWS or Azure has excellent documentation. The Azure documentation lacks depth, and the AWS documentation is just thrown together; things that are part of the same system are documented wildly differently. E.g. some CloudWatch metrics are complete: how you use them, which dimensions are available for which, and you get examples. Other parts of CloudWatch: "Well, we have some metrics and these dimensions, have fun figuring out which go together."
Why? Why should "Greengrass" be used for IoT?
As with any "convenience tech", learning the underlying protocols is essential.
It's maddening and they clearly do not care.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/uploa...
There are parts of AWS that are hard to use, and non-intuitive. But S3 didn't ever seem to be one of them, though perhaps I'm forgetting how hard it was initially.
You might consider a "friendly" system for static hosting if raw-S3 is too hard though, such as netlify. There are a lot of services out there which basically wrap and resell AWS services. (I run my own to do git-based DNS hosting, which is a thin layer upon the top of route53 for example.)
I leave a few legacy things running there (billing works, of course) but these days just put personal stuff on Digital Ocean, which seems to meet basically all my needs without the complexity and cheaper to boot.
I ended up on Netlify, it's 1000x more my speed.
I'm of two minds about this.
On the one hand, the cloud is a meaningfully different abstraction from hosting locally, and figuring out how to do things effectively with it end to end without prior experience is a little bit like going from Windows to Linux, or back.
On the other hand, the use case you describe is one of the most basic, standard and well documented out there.
That was a paragon of design, reliability, and speed compared to the AWS console.
What annoys me the most is the sheer weight of each page. If you have to context switch, it's multiple seconds before the page is usable.
A classic example is Lambda. I click on the function, the page reloads (no problem), and 1-2 seconds later the page is fully rendered. I can _then_ click on the monitoring tab, wait another couple of seconds, and then jump to the latest log in CloudWatch.
CloudWatch can get fucked. Everything about it is half-arsed. Search? All of the speed of Splunk, combined with its reliability, but none of the usefulness.
The standard answer is "you should use CloudFormation". I do; it too is distilled shart. (Anything to do with ECS can cause a 3-hour, uninterruptible hang, followed by another timeout as it tries to roll back.)
It also lazily evaluates the actual CF, which means that parameter validation happens 5 minutes in. Good fucking job there, kids.
What I really want is a Qt app in the style of the VMware fat client (you know, with an inbuilt remote EC2 console, that'd be great...) that talks to AWS. The GUI is designed by one team, and is _tested_ before a release by actual QAs who have the power to block the release.
This is the single largest problem with ECS and the fact that neither the containers team, nor the CloudFormation team have paid any attention to the problem after who knows how many years is incredibly frustrating.
And 3 hours is actually one of the better cases. 10+ hour hangs that can only be cancelled / rolled back by contacting support are joyous occasions.
I compared it to a bartender who immediately recognizes that you’re underage but offers you alcoholic drinks, gives you samples, asks about preferences, counts out your change, and only after all of that stops you from drinking it.
http://blog.tyrannyofthemouse.com/2016/02/some-of-my-geeky-t...
Amazon is very much in that "we were here first so we'll do whatever we want" mentality. They can provide worse service for more money, and people love them. Nobody ever got fired for picking AWS!
Think of an Amazon shopping competitor with a great site UX that actually makes good recommendations (no, Amazon, I do not need 20 more variations on the lightbulbs I just bought). That UX and those recommendations, among other things, would have to cumulatively far exceed the perceived value placed on the immediacy of the AMZ logistics operation that can sometimes deliver the thing you lust after the same day.
I’m certain AMZ knows quite well, just as Google, FB, etc., they have a monopoly of Good Enough in core competencies to both maintain their monopoly and stave off or at least frustrate competitors through their monopolization of our minds. It’s a new type of monopoly, Mental Monopoly, suited for the Information Age abstracted from the physical world of goods.
It’s why we suffer through Amz and AWS, as well as put up with Google and YouTube and FB and endless scrolling through rubbish on Netflix … they have a grip on our lazy minds because they’re all Good Enough and there is no one enforcing competition in a manner that is appropriate for the tech industry.
Same applies to all other cloud providers.
Typically, you solve this problem partially with tools like Terraform, etc. However, of course there is never a one-size-fits-all solution for such things. Vendor lock-in is an issue that many companies try to solve by adopting standard solutions, but that's it. Kubernetes for example is one of these solutions.
Each terraform file uses modules that are quite specific to the individual services provided by a given cloud. These cannot be simply swapped out without rewriting the config.
Also, do you really want to support Amazon's human-rights-abuse parade?
This is not just bad UX, this is the territory of never even bothering to sit down with someone to see how they might use the product. Amazon love to tout their focus on the customer and amazing leadership principles, but they sure produce some mediocre experiences.
I wrote some video training material 3 years ago that goes over setting up an ECS cluster and I decided to use the CLI for just about everything. We interact with a number of resources (S3, load balancers, EC2, ECS, ECR, RDS, Elasticache, etc.) and other than a single flag to login to ECR it all works the same today.
I'm happy I chose not to use the web console. The only time I used the web console was for creating IAM roles and I've had to make a bunch of updates since the UI changes pretty often. It would have been a disaster if I used it for everything.
1. AWS needs a Chief Consistency Officer who can block shipping until you clean up the prototype slop
There are lots of services like Zeit Now and Heroku that supply a complex abstraction to the point where it feels like an entirely different product. What I would want is something that allows me to host Docker images/K8s on one of the big three (I guess others as well) and lets me use configuration as code to the extent possible, but with UI/command line/API helpers that create a uniform abstraction so that I can easily switch.
[1] https://www.crunchbase.com/organization/cloudkick
[2] https://techcrunch.com/2010/12/16/rackspace-buys-server-mana...
If you need abstractions, use Heroku and that's it, you don't have to know how DNS works, or which subnet to choose for your VMs etc.
There are already tools that attempt to do a limited form of this such as nixops, which attempts to devolve the ultimate power over someone's services to the user.
Sometimes it works great (searching for EC2 instances).
Sometimes you need to construct restricted search queries (slightly aided by a slow dropdown auto-complete) that look like `Name: Begins With: /blah/` (ParameterStore).
Sometimes search is client-side, and only searches the page you're currently on (ECR, I think? I can't remember what does this). I think in this case it's sometimes form just following the limited functionality of the API.
I have a _lot_ of scripts that are just ways to extract data quicker than I can in the UI.
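Mine look something like this: a sketch of a boto3 script that dumps every Parameter Store entry under a path, sidestepping the console's one-page, begins-with-only search. The path is a made-up example:

```python
def filter_names(parameters, prefix):
    # Pure helper: keep parameter names starting with a prefix,
    # i.e. the console's "Begins With" search, but over everything.
    return sorted(p["Name"] for p in parameters if p["Name"].startswith(prefix))

def dump_parameters(path="/app/"):
    import boto3  # deferred so filter_names is usable without boto3 installed
    ssm = boto3.client("ssm")
    params = []
    # The paginator hides the NextToken loop the console makes you feel.
    for page in ssm.get_paginator("get_parameters_by_path").paginate(
        Path=path, Recursive=True
    ):
        params.extend(page["Parameters"])
    return params

if __name__ == "__main__":
    for name in filter_names(dump_parameters(), "/app/"):
        print(name)
```

Twenty lines to replace a hundred pages of scrolling, which rather proves the point about the UI.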
I assume the bad search functionality happens because the service teams don’t really use their own service with more than some demo resources.
Since their APIs cover everything, this should be possible. Be the first UXaaS.
A killer feature: a server by server breakdown of Google Cloud expenses. It is impossible to understand what you are paying for on Google Cloud. They lump everything together in an incredibly confusing bill.
Incidentally a UI startup for AWS/Google Cloud is an incredibly bad idea. You're just a sitting duck waiting to be killed, and also you have no full control over the API.
For example, listing API keys for a given project.
By the way, how much would you be willing to pay for such a UI?
1) State management/sync is frequently terrible. E.g. you are looking at a page with some health indicator and a log view. The last entry in the log is some variation of "transitioned from busted to not busted", but the state indicator doesn't update until you refresh.
2) if you have multiple tabs open at a time (pretty common use case) there is a good chance it will suddenly decide that you have to reload the page for some reason, often when you are in the middle of something
3) live updating. Why the hell do I have to sit there hitting refresh on so many of the views to get up to date data? I've often sat there waiting for something to finish, only to realise it's been done for a while but the page has not updated. This seems closely related to (1).
I find the overall design of the console fine, generally the UI is manageable, but the actual implementation is a steaming pile.
I so strongly agree with that observation, and have repeatedly and often submitted feedback through their in-page feedback mechanism about please, I'm begging you, never involuntarily reload my page. That's why @adreamingsoul (https://news.ycombinator.com/item?id=20903229) saying "send us complaints, we read your feedback" is like spitting into the wind for me
I thought your "multiple tabs" was also going to mention that they have _exactly the same browser title_, no matter what subsection you have open. So, if one EC2 tab is looking at volumes, and another at instances, and another at autoscaling groups, well, too bad for you because you're just going to have to click on them all or have a good memory/tab-management scheme
I kind of figure the console doesn't get any engineering love because of what other people in here have said: they want you to use the APIs
I can't work out what it is they are doing that necessitates these reloads.
Not sure why, but for some reason I like clicking around in the web app, so it makes me wish it were a better experience. In contrast, compare this to the Digital Ocean web console. It has a beautiful design that is nice to look at. It's uncomplicated and clutter-free. Overall a very pleasurable web app; I've always been impressed with their UX.
But as people have pointed out, it seems Amazon expects us to use the CLI & APIs, and the web console is not a priority. So maybe I'll start moving in that direction with my AWS services.
User-friendly tools prevent skilled middlemen from monetizing their expertise, which stifles adoption of the tool. So on-sellable tools that are too easy to use don't get on-sold.
Some examples by contradiction: tax returns, AWS Dashboard, many programming languages.
By programming languages, do you mean Rust?
In this way Lisp suffers from having no syntax, although it's a slightly different argument. When you can't have flamewars about a language's syntax, fewer articles are written about it. So instead, people will argue about the encoding of the AST - the parentheses.
Similarly, well-designed languages like Clojure, Haskell and Erlang have fewer questions on StackOverflow and older GitHub issues, so there are fewer flamewars about them (although monads are Haskell's saving grace here).
The NPM crowd are quick to ask, "Is this project abandoned?" when it hasn't had any activity for a year. In Clojure country, we dislike using libraries that haven't been stable for at least five years. As Alan Kay put it, Computer Science is very much a pop culture.
The phenomenon needs a good name, though. Perhaps the Moving Target Paradox, since developers are more likely to run after a moving target.
You still have to do some trickery with the CLI too. Let's say I want to get all logs from failed Batch jobs in the past day. This involves:
* Listing the Jobs (possibly paginated)
* Parsing out the log stream names from JSON (oh, and separate logs for separate attempts)
* Iterating through log streams and querying CloudWatch (each paginated)
* Parsing JSON
I am sure we're all writing half-baked wrappers for our individual use-cases, I am surprised no one's published something generally useful for stuff like this.
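For what it's worth, my own half-baked wrapper for exactly this looks roughly like the boto3 sketch below. The queue name and log group are placeholders, and I've left out the "past day" time filter for brevity, so treat it as an outline of the steps rather than a finished tool:

```python
def log_streams_from_jobs(job_details):
    """Pull log stream names out of a describe_jobs response,
    including the separate streams for separate attempts."""
    streams = []
    for job in job_details:
        for attempt in job.get("attempts", []):
            name = attempt.get("container", {}).get("logStreamName")
            if name:
                streams.append(name)
        # the final/only attempt also lives on the container itself
        name = job.get("container", {}).get("logStreamName")
        if name and name not in streams:
            streams.append(name)
    return streams

def print_failed_job_logs(queue="my-queue", log_group="/aws/batch/job"):
    import boto3  # imported lazily; names above are placeholders
    batch = boto3.client("batch")
    logs = boto3.client("logs")
    # 1. List the jobs (paginated)
    job_ids = []
    for page in batch.get_paginator("list_jobs").paginate(
            jobQueue=queue, jobStatus="FAILED"):
        job_ids += [j["jobId"] for j in page["jobSummaryList"]]
    # 2. Parse out the log stream names (describe_jobs takes <= 100 ids)
    streams = []
    for i in range(0, len(job_ids), 100):
        details = batch.describe_jobs(jobs=job_ids[i:i + 100])["jobs"]
        streams += log_streams_from_jobs(details)
    # 3./4. Iterate through log streams and page through CloudWatch
    for stream in streams:
        for page in logs.get_paginator("filter_log_events").paginate(
                logGroupName=log_group, logStreamNames=[stream]):
            for event in page["events"]:
                print(event["message"])
```

Three paginated API calls and a pile of JSON plumbing, just to read logs.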
Whereas with Kubernetes, that's all a single call with kubectl...
Don't get me wrong, we wouldn't be on AWS if it didn't make sense and they have been pushing development forward a lot. But it's unfortunately fragmented.
The only way to stay sane here is to use Terraform. That way you can stay out of it at least for creation and modification of resources and will have an easier time should you want to migrate.
EDIT: Another great example from Batch: Let's say you have a job that you want to run again, either a retry or changing some parameters.
AWS Console:
* Find the job in question (annoying clicking through client-side pagination, where a refresh puts you back on page 1).
* Click Clone Job
* Perform any changes. (Changing certain fields will reset the command, so make sure you stash that away prior to changing)
* Click Submit
* The job ends up in a FAILED state with an ArgumentError because commands cannot be over a certain length.
Turns out that the UI will split arguments up, sometimes more than doubling the length of a string, and there's nothing you can do about it except resort to CLI or split it up into smaller jobs if you have that option.
CLI:
* Get job details
* Parse JSON and reconstruct job creation command
* Post
It baffles me how container fields and parameters differ between what you can GET and what you can POST; you really need to pick the job apart and reconstruct the create-job request.
I completely understand that it will be like this when services launch. But it's been years now.
Don’t want to bother you with specific examples, but every interaction I had with them was dreadful.
I think this attitude gets reflected in their console design.
What I do find frustrating is how much of the docs are written in a console-first way. In most cases, the straightforward definitions of resources, attributes and the relationships between them are tucked away (or not present at all) in favor of "click this, then click that" style.
I am convinced that the best way to understand a cloud service is to understand its internal data model and semantics, but this is too often hidden behind procedural instructions.
My understanding is that AWS hasn't officially closed it because of US-Gov accessibility guidelines.
Are there any other similar clients?
[0]: https://aws.amazon.com/tools/aws-elasticwolf-client-console/
* Order column by X
* Type search into input
* Column ordering drops
* Can no longer apply ordering while the search input is populated
100% understand that larger companies will not typically, or at least shouldn't, be directly manipulating infra via the web console, but there are thousands of customers that use the web console for small business. It's a valid customer segment to think about!
PS: I logged into Reddit just to add to that thread. Felt this in my soul.
Soon after, I gave up. Too many silly bugs, and no fixes.
Reference: https://github.com/andreineculau/fl-aws
This is probably not a popular way of doing it, but I write python to orchestrate the provisioning steps of a VM with specific roles, routes, etc in a VPC (with public/private subnets in multiple AZs) and then I use other tools for config-management and deploy.
I'm using only a few of AWS's services; it helps me do multi-cloud (another Python script doing the same thing on another cloud), and it helps me keep my local dev environments in parity with production, even on macOS.
I do use S3 and Route 53 globally - they're simple enough to drive with boto. IMO if infra is now code, you should probably write code to manage infra...
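As a taste of what "simple enough with boto" means in practice, here's a minimal sketch of a Route 53 A-record upsert. The record name, IP, and hosted zone id are all placeholders; the helper just builds the ChangeBatch dict that the API wants:

```python
def upsert_record_change(name, ip, ttl=300):
    """Build a Route 53 ChangeBatch that points an A record at `ip`.
    Pass the result as ChangeBatch to change_resource_record_sets."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }

# usage (hosted zone id is a placeholder):
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...",
#       ChangeBatch=upsert_record_change("app.example.com.", "203.0.113.10"))
```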
I really believe there is a business opportunity here. I think you could pick a general use-case for AWS, like serverless, and build an intuitive UI around AWS offerings typically utilized by the serverless stack.
Even though the AWS web interface has its flaws, it's still 10 times better than Azure's web UI.
On the Python side, "boto" works well, too.