This is the part where people get excited about AI. I personally think they're dead wrong on the process, but strongly empathize with that end goal.
Giving people the power to make the interfaces they need is the most enduring solution to this issue. We had attempts like HyperCard, Delphi, and Access forms. We still get Excel forms, Google Forms, etc.
Having tools to incrementally try stuff without having to ask the IT department is IMHO the best way forward, and we could treat those attempts as prototypes for more robust applications built from there.
Now, if we could find a way to aggregate these ad hoc apps in an OSS way...
The usual situation is that the business department hires someone with a modicum of talent or interest in tech, who then uses Access to build an application that automates or helps with some aspect of the department's work. They then leave (in a couple of cases these people were just interns) and the IT department is then called in to fix everything when it inevitably goes wrong. We're faced with a bunch of beginner spaghetti code [0], utterly terrible schema, no documentation, no spec, no structure, and tasked with fixing it urgently. This monster is now business-critical because in the three months it's been running the rest of the department has forgotten how to do the process the old way, and that process is time-critical.
Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that. We have to fix what we can and get it working immediately. And, of course, these fixes cause havoc with the project planning of all our other projects because they're unpredictable, urgent, and high priority. This delays all the other projects and helps to give IT a reputation as taking too long and not delivering on our promised schedules.
So yeah, what appears to be the best solution from a non-IT perspective is a long, long way from the best solution from an IT perspective.
[0] and other messes; in one case the code refused to work unless a field in the application had the author's name in it, for no other reason than vanity, and they'd obfuscated the code that checked for that. Took me a couple of hours to work out wtf they'd done and pull it all out.
Most of these teams only want a straightforward spec, shut themselves off from distractions, and then emerge weeks or months later with something that completely misses the business case. And yet they will find ways to point fingers at the product owner, project manager, or client for the disaster.
I assume those processes weren't applied when deciding to use this application. Why not? Was there a loophole because it was built by an intern?
This reminds me of the "just walk confidently into their office and ask for a job to get one!" advice. It sounded like bullshit to me until I spent time with some parts of a previous company, where the hiring process really wasn't that far off.
That's also the kind of company where contracts and vendor choices get negotiated on golf courses, and where the CEO's buddies might as well be running the place for all the difference it would make.
I feel for you.
Love the assumption "when it inevitably goes wrong." In real life, many of these applications work perfectly for years and assist employees tremendously. The program doesn't fail, but the business changes - new products, locations, marketing, payment types, inventory systems, tons of potential things.
And yes, after the original author is gone, nobody is left to update the program. Of course, a lot of programmers or IT folks probably could update it, but ew, why learn and write Access when we can create a new React app with a microservices-based backend, Postgres in the cloud, and a Kubernetes cluster spun up to run it.
And then you need to implement that, which is never an easy task, and maintain eternal vigilance to both adhere to the vision and fit future changes into it (or vice versa).
All of that is already hard to do when you're trying to build something. It's only harder in a highly collaborative, voluntary project where it's difficult or maybe even impossible to take that sort of ownership.
Nor can the design world, for that matter. They think that making slightly darker gray text on gray background using a tiny font and leaving loads of empty space is peak design. Meanwhile my father cannot use most websites because of this.
That's part of the problem: they'll defend their barely legible choices by lawyering "but this meets the minimal recommended guideline of 2.7.9"
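For anyone curious what "meets the guideline" actually means in numbers: WCAG contrast is just a ratio of relative luminances, and level AA asks for at least 4.5:1 on normal-size text. A rough sketch of the standard formula (the example colors are mine, not from any particular site):

    // WCAG 2.x relative luminance: linearize each sRGB channel, then weight.
    function relativeLuminance(r: number, g: number, b: number): number {
      const lin = (c: number) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      };
      return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
    }

    // Contrast ratio between two colors; AA wants >= 4.5 for body text.
    function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
      const l1 = relativeLuminance(...fg);
      const l2 = relativeLuminance(...bg);
      return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }

    // Gray #999 text on a light gray #eee background comes out around 2.5:1,
    // well below the 4.5:1 minimum, however nice it looks in a mockup.
    console.log(contrastRatio([153, 153, 153], [238, 238, 238]).toFixed(2));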
It's like dark patterns are the ONLY pattern these days... where TF did we go wrong?
Win95 was peak UI design.
I don’t understand modern trends.
Then the world threw away the menus, adopted an idiotic “ribbon” that uses more screen real estate. Unsatisfied, we dumbed down desktop apps to look like mobile apps, even though input technology remains different.
Websites also decided to avoid blue underlined text for links and be as nonstandard as possible.
Frankly, developers did UI better before UI designers went off the deep end.
A few days ago I had trouble charging an electric rental car. When plugging it in, it kept saying "charging scheduled" on the dash, but I couldn't find out how to disable that and make it charge right away. The manual seemed to indicate it could only be done with an app (ugh, disgusting). Went back to the rental company, they made it charge and showed me a video of the screen where to do that. I asked "but how on earth do you get to that screen?". Turned out you could fucking swipe the tablet display to get to a different screen! There was absolutely no indication that this was possible, and the screen even implied that it was modal because there were icons at the bottom which changed the display of the screen.
So you had: zero affordances, modal design on a specific tab, and the different modes showed different tabs at the top, further leading me to believe that this was all there was.
Is this an inherently bad thing if the software architecture is closely aligned with the problem it solves?
Maybe it's the architecture that was bad. Of course there are implementation details the user shouldn't care about and it's only sane to hide those. I'm curious how/why a user workflow would not be obviously composed of architectural features to even a casual user. Is it that the user interface was too granular or something else?
I find that just naming things according to the behavior a layperson would expect can make all the difference. I say all this because it's equally confusing when the developer hides way too much. Those developers seem to lack experience outside their own domain and overcomplicate what could have just been named better.
I think it's because they are not using the product they are designing. A lot of problems you typically see in modern UIs would have been fixed before release if the people writing it were forced to use it daily for their job.
For example, dropdown menus with 95 elements and no search/filter function that are too small and only allow you to see 3 lines at a time.
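A search box over the options is usually a few lines of logic; a minimal sketch (the Option shape and the data below are made up for illustration, not from any real product):

    // Filter dropdown options as the user types, instead of making them
    // scroll a 3-line window through 95 entries.
    interface Option {
      value: string;
      label: string;
    }

    function filterOptions(options: Option[], query: string): Option[] {
      const q = query.trim().toLowerCase();
      if (q === "") return options; // empty query shows everything
      return options.filter(o => o.label.toLowerCase().includes(q));
    }

    const paymentTypes: Option[] = [
      { value: "ach", label: "ACH transfer" },
      { value: "net30", label: "Net 30" },
      { value: "net60", label: "Net 60" },
      // ...and 92 more
    ];

    // Typing "net" narrows the list to the two relevant entries.
    console.log(filterOptions(paymentTypes, "net").map(o => o.label));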
Are there any good resources for developing good UX for necessarily complex use cases?
The best method I have found is to use the interface and fix the parts that annoy me. After decades of games and internet I think we all know what good interfaces feel like. Smooth and seamless to get a particular job done. If it doesn't feel good to use it is going to cause problems with users.
That said, I see the software they use on the sales side. People will learn complexity if they have to.
The toughest hurdle to overcome as a developer is not thinking about the gui as a thin client for the application, because to the user, the gui is the application. Developers intuitively keep state in their head and know what to look for in a complex field of information, and often get frustrated when not everything is visible all at once. Regular users are quite different. Think about what problems people use your software to solve, think about the process they’d use to solve them, and break it down into a few primary phases or steps, and then consider everything they’d want to know or be able to do in each of those steps. Then, figure out how you’re going to give focus to those things… this could be as drastic as each step having its own screen, or as subtle as putting the cursor in a different field.
Visually grouping things, by itself, is a whole thing. Important things to consider that are conceptually simple but difficult to really master are informational hierarchy and how to convey that through visual hierarchy, gestalt, implied lines, type hierarchy, thematic grouping (all buttons that initiate a certain type of action, for example, might have rounded corners.)
You want to communicate the state of whatever process, what’s required to move forward and how the user can make that happen, and avoid unintentionally communicating things that are unhelpful. For example, putting a bunch of buttons on the same vertical axis might look nice, but it could imply a relationship that doesn’t exist. That sort of thing.
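One way to make that concrete, as a hedged sketch rather than any particular framework's API: model each phase explicitly, with what the user needs to know and what lets them move forward, and give focus to exactly one phase at a time (the Step shape and checkout example here are invented for illustration):

    // Each step says what the user is doing, what state to communicate,
    // and what is required before the flow can advance.
    interface Step {
      id: string;
      title: string;   // what the user is doing in this phase
      prompt: string;  // what's required to move forward
      canAdvance: (data: Record<string, unknown>) => boolean;
    }

    const steps: Step[] = [
      { id: "cart",    title: "Review items", prompt: "Confirm your quantities",
        canAdvance: d => Array.isArray(d.items) && (d.items as unknown[]).length > 0 },
      { id: "address", title: "Shipping",     prompt: "Tell us where it should go",
        canAdvance: d => typeof d.address === "string" && d.address.length > 0 },
      { id: "payment", title: "Payment",      prompt: "Choose how you'll pay",
        canAdvance: d => typeof d.paymentMethod === "string" },
    ];

    // The GUI is the application: show only the current step and make the
    // path forward explicit, instead of assuming the user keeps state in their head.
    function nextStep(current: number, data: Record<string, unknown>): number {
      return steps[current].canAdvance(data)
        ? Math.min(current + 1, steps.length - 1)
        : current;
    }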
A book that helps get you into the designing mindset even if it isn’t directly related to interface design is Don Norman’s The Design of Everyday Things. People criticize it like it’s an academic tome, but don’t take it so seriously. It shows a way of thinking critically about things from the user’s perspective, and that’s the most important part of design.
For teasing apart complex workflows I'd suggest Holtzblatt and Beyer's Contextual Design. I taught a user-centered research and design class many years ago and used it as our textbook; hopefully it still holds up.
For organizing complex applications I like to start with affinity diagrams, card sorts, and collaborative whiteboard sessions. And of course once you have a working prototype, spend as much time as possible quietly watching people interact with your software.
Not at all. Talented human artists still impress me as doing the same level of deep "wizardry" that programmers are stereotyped with.
I also remember the hostility of the informal IT chat groups at my university. Newbs were insulted for not knowing basic stuff instead of being helped. A truly confident person doesn't feel the need to do that. (And it was amazing having a couple of those truly confident people writing very helpful responses in the middle of all the insulting garbage.)
I don't think that's entirely true; what I usually see is people who think AI art is just as good as many artists' work.
You can be impressed by something and still think a machine can do it just as well. People who can do complex mental arithmetic are impressive, even if that skill has been made mostly obsolete by calculators.
Other engineering disciplines are simpler because complexity is confined to three dimensions, while in software complexity can be everywhere.
Crazy to believe that
Cost, safety, interaction between subsystems (developed by different engineering disciplines), tolerances, supply chain, manufacturing, reliability, the laws of physics, possibly chemistry and environmental interactions, regulatory, investor forgiveness, etc.
Traditional engineering also doesn't have the option of throwing arbitrary levels of complexity at a problem, which means working within tight constraints.
I'm not an engineer myself, but a scientist working for a company that makes measurement equipment. It wouldn't be fair for me to say that any engineering discipline is more challenging, since I'm in none of them. I've observed engineering projects for roughly 3 decades.