* Legible Mathematics, an essay about the UI design of understandable arithmetic: http://glench.com/LegibleMathematics/
* FuzzySet: interactive documentation of a JS library, which has helped fix real bugs: http://glench.github.io/fuzzyset.js/ui/
* Flowsheets V2: a prototype programming environment where you see real data as you program instead of imagining it in your head: https://www.youtube.com/watch?v=y1Ca5czOY7Q
* REPLugger: a live REPL + debugger designed for getting immediate feedback when working in large programs: https://www.youtube.com/watch?v=F8p5bj01UWk
* Marilyn Maloney: an interactive explanation of a program designed so that even children could easily understand how it works: http://glench.com/MarilynMaloney/
[1] The weird symbols might seem like an initial barrier to this, but they're only hard to read if you're unfamiliar with APL; they're actually very easy to learn and remember, as there's not that many of them and they all do very basic operations. However, APL loves operator overloading and likes to give operators different functions based on whether they're used in prefix ("monadic") or infix ("dyadic") forms, and there are also higher-order operators that consume the operators immediately adjacent to them and then operate on the expressions after that; all of this makes the nominally right-to-left parsing require a fair bit of mental effort instead of being able to rely on immediate visual recognition.
What are your thoughts on Mathematica/Wolfram Language? Some of these ideas are present in it (like mathematical typesetting, interactive documentation and live code/data updates).
This stuck out to me; there seems to be a trend in UX/UI where any move away from the "simplest path" is seen as a huge negative. Could it be the case that we use these tools (especially UI patterns like vi) because, after the learning curve, they give a huge amount of value? It seems like we are assuming that we should make a developer tool with the same level of "immediate familiarity" that we try to build into a website where customers will bounce easily, for an audience who is willing to spend time learning a tool if it provides value to them.
> But look at this guitar player with blisters. A harpist has blisters, a bass player with blisters. There's this barrier to overcome for every musician. Imagine if you downloaded something from GitHub and it gave you blisters. Right? The horrors!
That whole talk is filled with some interesting takes on designing and building software (with the usual skew that paints Clojure in a good light, so take it with a grain of salt if necessary).
[1] https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
I'll have to remember that next time someone mentions that C should be a dead language.
(I actually think C is a fine language, but should be deftly handled)
You mean Haskell?
There are two issues here:
a) Designers trying to simplify everything beyond usefulness is a good instinct gone haywire. Simplification helps, but without an understanding of accidental complexity versus essential complexity, one is bound to end up painted into a corner with no flexibility left in the app. Few designers understand this, and those who do got it the long way round — by working on products that have a lot of essential complexity, like AdWords, and by repeatedly fighting those battles.
b) An engineer's operating environment (OS, IDE, shell, terminal) is a reflection of the inside of his or her mind writ large. Just as every Jedi has to build their own lightsaber, every engineer has to go through the pain of building out their weapon, because one's workflow is how one thinks, how one looks at the problems at hand. No UI designer can help with that.
[0] (Because it's relevant to the context: ex-Google, ex-Facebook as professional experience)
There is a great little book - Daily Rituals [0] - that goes into many artists' and scientists' daily habits. The habits are very much along the same lines as your thought - they are workflows for how the individual tends to think best.
I'd love to see someone put together a website or book that did that in the context of software engineers' workflows. Does someone know if a resource like that already exists?
[0] https://www.goodreads.com/book/show/15799151-daily-rituals
-edited for grammar.
How did you juggle acquiring skills for both roles? Also, were there times you had to struggle between the two mindsets on a task?
Shout out to the people that don't really bother because they're good enough to adapt to any setup. :)
Many mice are ambidextrous (e.g. the Apple puck). Most are weakly right-handed with a slightly asymmetrical shape. Some are very strongly right-handed (e.g. vertical mice) and can't be used sensibly in the left hand. So left-handed mice also exist.
Some people are naturally left-handed. We (as a civilisation) used to treat this as aberrant but have now recognised it, and that different tools suit different people.
I believe that something similar exists in programming tooling in relation to how people think about programs. There are clearly some people who have a strong, unusual "handedness" and have developed tools to match (e.g. Colorforth). A few people discover these and find them amazingly usable. Most other people find them baffling.
Consider the three propositions:
a) Jimi Hendrix played guitar in the wrong way with the strings in the wrong positions
b) Jimi's configuration was correct and everyone else was wrong, because he produced the objectively best music
c) Jimi was left handed, and had constructed an accommodation which worked for him but should not be expected to work for anyone else
Far too many discussions of programming tools devolve into (a) versus (b), largely because people want there to be an objective ranking of who the best programmer is and what the best tools are, rather than allowing for diversity of (programmer x tool).
Pretty sinister, if you ask me.
And I feel this is a very similar situation with other tools. I edit code with vim, in a terminal. This is simple as dirt. I do it because it is simple as dirt. Visual Studio is incredibly complicated to me, because to do the creating-code part of my job I need to understand the following:
- How code is built.
- How to build the code without using any graphical front end.
- But now, when you bring VS into the mix, I need to also understand Visual Studio. It does not remove complexity; it adds it.
Similar thing with debugging: I need to understand all the ins and outs of debugging, but bring VS into the mix and I also need to understand its stupid UI.
I like simple, my mind is simple. I can learn things, if there are rules and patterns it makes it easier to learn, but the less things I have to learn the happier I am. I don't have an option to not learn some things, like how to do build automation, how to debug code, how computers work, etc. But I do have an option to not learn something entirely useless like VS.
I think the lie being sold is that somehow you can be a programmer without actually knowing how to use a computer. And to know how to use a computer is not the same thing as knowing how to click on things in the UI with a mouse. To know how to use a computer you need to understand how to use it to do automation - and once you need to do this VS is just a nuisance.
Just a rant I guess.
Sure, the pickaxe is simpler, but it will take you a lot longer to break that concrete with a pickaxe than with a jackhammer, because of simple human limitations.
Similarly, refactoring and advanced code navigation support takes a while to learn how to use, but once you do, it empowers you through technology to quickly do things that your mind would take much longer to do by hand.
For example, say you want to extract some code from the middle of a function into a separate function. With vim, you would generally have to manually move the code, write the new function header, then start to painstakingly inspect the code to identify the parameters to pass between the two. You will probably make mistakes and have to wait for the compiler to find them. All in all, assuming it's a bit of hairy code, it might easily take you more than an hour to get it working. In VS, you would type ctrl-r, ctrl-m (Refactor, Extract Method) and it would automatically detect all of these for you, pop up a dialog box, you'd enter the new function name, Tab, param name, Tab, param name, etc., and Enter when you're done. Maybe 5 minutes all in all, assuming you also do some ctrl-r, ctrl-p (Refactor, Parameter) afterwards to extract some larger expressions back into the original function.
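To make the extract-method example concrete, here is a hypothetical sketch in Python (the function and field names are invented for illustration). The tool's job is exactly the mechanical part shown in the "after" half: notice that the extracted block reads `subtotal` and `order`, and thread them through as parameters.

```python
# Before: a hairy block buried in the middle of a larger function.
def process_order(order):
    subtotal = sum(item["price"] * item["qty"] for item in order["items"])
    # --- the block below is what we want to extract ---
    discount = 0.0
    if subtotal > 100:
        discount = subtotal * 0.1
    if order.get("coupon"):
        discount += 5.0
    # --- end of block ---
    return subtotal - discount

# After "extract method": the tool detects that the block uses
# `subtotal` and `order`, so those become the parameters.
def compute_discount(subtotal, order):
    discount = 0.0
    if subtotal > 100:
        discount = subtotal * 0.1
    if order.get("coupon"):
        discount += 5.0
    return discount

def process_order_refactored(order):
    subtotal = sum(item["price"] * item["qty"] for item in order["items"])
    return subtotal - compute_discount(subtotal, order)
```

Trivial here, but in real code the block may touch a dozen locals, and that parameter detection is exactly the tedious, error-prone part the tool automates.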
Similarly, you have things like 'analyze data flow to...' which can find all places in your program where a particular value can be written to, and do that recursively until you get to the original source. Same thing - this can be done by hand with a series of finds and so on, but an advanced tool will just help you do it faster.
But, just like with advanced editing in vim, you need to take the time to learn the tool until you can get the most out of it. Same as the first time you enter vim you're likely to fumble to even be able to exit, you can't expect to be productive in VS if you don't take the time to learn what it can do for you.
Wait, what happened to XFCE? I use it right now.
I'd look into switching to Fedora. I use the XFCE spin and it is stable.
Your tools, then again (chainsaw, microscope, text editor...), have no reason whatsoever to have a UI that is intuitive without training[1], because without training you are going to be either dangerous, useless, or in the best case just really unproductive anyway.
[1] of course, the UI needs to be efficient after training.
For professional tools knowledge encoded in the head supported by appropriately encoded knowledge in the world absolutely is a viable approach, provided there's appropriate feedback and conceptual mapping corresponds to the mental model a user has about how that tool works, i.e. actions and reactions should be consistent.
With modal design patterns such as the ones used by vi, for example, this can become a problem.
And this isn't just me saying that because I like vim. It's because of the objective fact that nearly every developer tool that is created will include a vim mode. And if included as an extension it will often be one of the most popular extensions. What that objectively indicates is that there is a large contingent of developers who genuinely find the vim modal editing UX excellent, to the point they seek it in other tools as well (including browsers, mail clients, RSS readers, etc).
What I would say is that vi really doesn't give you a lot of interactive context -- and it's hard to add it on.
It's simply not possible to learn vi by just using vi.
Also, the author emphasizes that just dumbing a product down is not the solution.
A quick summary of my experience:
- Most complex apps are not learnable linearly: Ableton Live, Final Cut Pro, Logic Pro X, Photoshop, After Effects, Blender, all practically require reading the manual.
- A few apps are OK at it: Adobe Lightroom and Sketch, although those apps are also probably less powerful than those in the first category; e.g., Photoshop can do the majority of what both of those apps can do, and more.
I would actually put shells and text editors as some of the easiest complex apps to learn linearly, because you can do so much with them with just cut and pasting text from the internet. Try following along with a Blender tutorial video that's not for an absolute beginner, and you'll get stuck almost immediately, because you won't know the keyboard shortcuts to perform the actions in the video. This happens far less with programming tutorials involving text editors and terminals.
I did. I learnt vi when I was told: go fix this file on that computer; you have ssh; X forwarding won't work because you have 3 hops (embedded devices that won't allow forwarding of any kind); there is only vi on the box.
So I figured out how to use vi. It is not rocket science; there is not that much to learn. vi muscle memory takes time, but not even that long, I think.
As positive examples I would point out the kind of interactive editing mode that you can find in dependently typed programming languages. I believe e.g. Idris has a pretty cool Emacs mode.
> We coders still put up with horrid UX/UI when programming.
which is illustrated with a screenshot from Visual Studio... .NET 2002, I think, judging by the application icon?
Setting aside the relevance of a 20-year old screenshot, what exactly is wrong with that interface and what makes it horrid? I mean it definitely had its quirks but:
- It's spectacularly compact, certainly way better than anything I've seen in the last five years. We could display a UI builder and the associated code on a single 1024x768 screen and work on it semi-comfortably. "Beautiful" UI/UX, as understood today, is so cluttered by whitespace (oh, the irony...) that it's barely usable on a 1920x1080 screen. A similarly compact interface on today's huge screens would be a dramatic productivity improvement that, twenty years ago, we could only dream of.
- You could easily access any function through textual menus -- no hamburger menus, no obscure, monochrome icons. Granted, the toolbar icons were a pain, but the way I remember it, most of us either disabled the toolbar straight away, or just populated it with a couple of items that were of real value and which we knew well.
- The colors have great contrast, the whole thing is readable even on a very poor-quality screenshot that seems to have been actually downsized.
- UI items have enough relief and/or distinction that it's clear what you can interact with and what you can't (maybe the item palette from the diagram editor is an exception, or at least the screenshot makes it look like one, but virtually every program in that era made it look like that so it wasn't so hard to use).
So what's wrong with that thing?
There are definitely cargo cult designers, just like there are cargo cult programmers out there. But I don't think that's true of most designers, just like I don't think it's true of most programmers.
Chasing the latest trend isn't always a decision you make by yourself, and isn't always a decision you can oppose on your own terms.
Virtually every designer who was in the industry ten years ago or so can come up with a good design that's 100% against the latest trends -- contrasting, non-flat, compact, whatever.
But it'll get shot down within minutes in any product design meeting. Sometimes by people who have zero design experience, so they can't judge a design except by how well it conforms to the latest trends. Sometimes by people who lack the "political" capital to argue for an original design with their bosses, too. There are a lot of factors here, and most of the time the people who do the designs have the least amount of influence.
Any software business that is profitable already has code in production, and that code needs to be maintained. So instead of creating a new, better experience, you can make the current experience better, e.g. putting rockets on a horse rather than creating an automobile.
Selling is going to be hard, but I feel like you underestimate the technical difficulty of replacing a large stack of complex tools that have decades of work and experience behind them. And that, in part, makes selling harder: I'm immediately suspicious of anyone who claims they've invented a superior way to work. It's more likely that they've invented a small improvement (and an arguable one at that) for a particular scenario, but developers would still have to rely on their old tools for a lot of stuff. In the worst case, they're trying to sell a tool that doesn't extend but replaces the old tools without providing support for scenarios and workflows that existed with the old tools; a step forward on one front, three steps back on others.
Of course, small improvements to existing workflows can usually be implemented by developers for themselves (and others while at it) once they learn about the idea, and that's how the developer experience has slowly improved over the years.
For example, you can make a new fancy code editor (let's call it sublime) and hype it on features like multiple cursors. And I can have that in emacs at the cost of about 3000 sloc of elisp, and I don't have to give up any of the old things that I've grown to rely on.
I spent some time nerding out over woodworking hand tools a few years back and it pretty well cemented for me something that I’ve suspected for most of my career: people down in the muck have very limited vision. That your output is only better than your input by degrees.
I’m not sure there would be much fine woodworking at all if the best woodworking tools were only as good as the best software tools. There is no Lee Valley of developer tools. You can’t make me stop using JetBrains (individual licenses were their best idea ever), but it still doesn’t rate above a Woodriver, and if I’m honest some of their stuff is Stanley level, and not even the antique stuff. And their stuff is better than just about any other tool I use all day.
I suspect Harbor Freight could make better software than Atlassian, and I don’t even mean that as a metaphor. I think I could take Harbor Freight employees and get better requirements out of them, because they wouldn’t be up to their ears in cognitive dissonance.
I'm a bit hopeful that DX might finally start getting the love it deserves. -- For better or worse, Microsoft seems to understand the potential that lies in building better tools and seducing programmers to join their fold.
Later when I used their stuff and could be (more) objective, I was sort of confused by that previous experience. They're... okay. If that was intimidating maybe that said more about us than about them.
I am probably experiencing a little backlash when I react to their shenanigans because they are not 'all that'. At all.
Trac (no affiliation) is ugly but like your favorite hammer. It is probably the least painful project management suite I ever recall using. And I used it before they decoupled their parsing and rendering logic, so writing plugins that involved anchor tags was a warren of duplicated code.
My last musing-while-brushing-my-teeth was to wonder what would happen if someone drop-kicked the Trac CSS files and wrote a new one from scratch using modern conventions.
We accept the heritage of developer tools - the keyboard-driven interface that displays to a teletype emulator, the edit-compile-debug workflow - and build tools that improve the processes that have built around those legacies. This is why when something comes out of left field like Adele Goldberg and colleagues describing Smalltalk, we find it easy to adopt the approach to code organisation on offer and hard to adopt the image model, browser-based workflow, debugger-driven iteration, and other changes.
Meanwhile, when we go out into other domains, we use a little bit of understanding of that domain, a lot of reasoning by analogy, and an intention to "disrupt" what already exists and "eat the world", and create something that works very well for the spherical user in a vacuum without all of the detailed understanding that comes from having grown up in the system and learnt from people who grew up in it even longer ago.
That 30-year time frame takes us back to 1990, and back then the user experience was limited by the technology of the time.
However, a decade later we had Windows XP.
I would say that 20-year-old Windows XP might in fact be a much better user experience than the modern UX/UI we have to live with today.
The much less powerful CPUs of that time felt much more responsive than the modern-day CPU/OS combinations we have today.
This is why there are occasional spasms of "back to basics" or plaintive remembering of the BBC Micro. You power it on, it beeps, and within a second you're in the interactive development environment. Typing code runs it directly. Typing code with a line number adds it to the program. No configuration, containers, downloads, updates, dependencies or uninformed choices to make.
> Why do we treat this as a moral failing instead of a usability issue?
Yes. This applies in so many places. Learn from "Poka-yoke". The system should make it easier to do safe things and harder to do unsafe things.
> Tests are a usability dead end
Depends what you mean by "tests". A strong type system does away with certain categories of test (and conversely a lot of the heavy unit testing usage comes from communities with weakly typechecked languages). But both types and tests are capturing a human-level requirement of "if X then Y", a constraining of the problem space.
This is why many successful code archaeology maintenance projects start by building a test suite to capture the current functionality of the program. An executable requirements document.
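A characterization test of the kind described above might look like this sketch, assuming a legacy function (`legacy_format` here is a made-up stand-in) whose exact current behavior, quirks included, we want to pin down before touching it:

```python
# Stand-in for real legacy code whose behavior we want to capture
# as-is, bugs and quirks included.
def legacy_format(name, balance):
    # Quirk: negative balances are rendered with parentheses.
    if balance < 0:
        return "%s: (%.2f)" % (name, -balance)
    return "%s: %.2f" % (name, balance)

# Characterization tests: assert what the code DOES today,
# not what a spec says it should do. Now any refactoring that
# changes observable behavior fails loudly.
def test_characterization():
    assert legacy_format("alice", 10) == "alice: 10.00"
    assert legacy_format("bob", -3.5) == "bob: (3.50)"

test_characterization()
```

The point is that the assertions are written by running the code and recording its output, which is exactly what makes them an executable requirements document.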
I'm not quite sure what the author is arguing for here in particular.
Anything that makes writing tests a bit easier, e.g. suggestions for additional test cases, would be cool, but ultimately tests are about writing down your assumptions/expectations about the code.
No, they are not formal proofs and sometimes they are not perfect but they still provide a lot of value. So far I haven't found a good reason not to write tests (since I outgrew my newcomer attitude) and yeah integration tests are usually what I focus on most. For any case where testing whole systems today is hard, there are some fundamental challenges (e.g. end to end web UI test). I don't quite see how tooling will get rid of the need for tests.
I write device control software. It’s very difficult to have true automated testing of things like drivers. You can write unit tests for subsystems, like packet parsers, but integration testing generally requires good ol’ “monkey testing.”
“Just write a mock!” Is what I hear all the time.
Mocking a device is a massive project; potentially larger than designing the device, itself. Remember that the mock needs to be of unimpeachable quality, and also needs to do things like simulate adverse signal environments.
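To give a flavor of why such a mock balloons, here is a toy sketch (all names and behaviors hypothetical, not any real device protocol) of a mock device that simulates an adverse signal environment; a faithful mock would need this level of care for every command, register, timing path, and failure mode:

```python
import random

class MockDevice:
    """Toy mock of a device on a noisy link. `error_rate` is the
    fraction of responses that come back corrupted (hypothetical)."""

    def __init__(self, error_rate=0.0, seed=0):
        self.error_rate = error_rate
        self.rng = random.Random(seed)      # deterministic for tests
        self.registers = {0x00: 0xAB}       # fake device ID register

    def send(self, command, register):
        if self.rng.random() < self.error_rate:
            return b"\xff"                  # simulate a garbled frame
        if command == "READ":
            return bytes([self.registers.get(register, 0)])
        raise ValueError("unsupported command: %r" % command)

# The driver code under test has to cope with corrupted frames:
def read_device_id(dev, retries=5):
    for _ in range(retries):
        frame = dev.send("READ", 0x00)
        if frame != b"\xff":
            return frame[0]
    raise IOError("device unreachable")
```

Even this toy already needs a corruption model and determinism plumbing, and it handles exactly one command on one register; scaling that to a whole driver surface is where "larger than the device itself" comes from.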
DX for that kind of thing can be awful.
As far as basic DX goes...
Most developer tools are wrappers for command-line OS tools, and it shows.
They can also be quite buggy, and we accept this bugginess. I use Xcode, which is quite “crashy.” I am constantly fixing issues by deleting the build folder.
Back to testing...
I prefer test harnesses over unit tests. I write about that here: https://medium.com/chrismarshallny/testing-harness-vs-unit-4...
You only need to _mock_ the object, that is, emulate its behavior on a small scale.
Also, I don't think there's as much value in unit tests as in other, more holistic types of testing, i.e. functional/integration tests.
TBH I read your linked article and still don't quite know what a test harness (in that context) is and in what way it's better and differs exactly in the concrete. Guess I'll have to read up on it elsewhere.
Perhaps some sort of formal spec for the hardware communication API would help, and would also help on the ASIC side, but I can't see how such a thing would be built and popularised. It's a very small balkanised world of driver writing.
I really like to explore new DX approaches (just recently, I published an extension for VS Code enabling visual debugging [1]). But I find it hard to make a living out of it, as so many companies take it for granted that everything is free. They would rather hire another developer than pay for licenses that might effectively increase the efficiency of the developers they already have.
[1] https://github.com/hediet/vscode-debug-visualizer/blob/maste...
On impact: I've been running the numbers for a tool I'm working on and the projected savings for the industry look insane! Just by shaving off 10 minutes here or there you can contribute a lot.
Text files and their disconnection from documentation are a root of all our evils. Not only do they diverge with new versions of everything, but there is a constant attention switch (stacks of them!) and unnecessary diving into things that may or may not be important to the development process. There is no way to omit these checks when you learn or return to an idle project.
I have a long-held idea that every config, format, API call, and so on should come with an inseparable documentation UI (plus rationale, examples of use, best-practice links, pre-configuration, diff/merge views, etc.). Yes, texts are simple and easy to read, but we also write. You can make text from a structure in O(1), but you cannot make knowledge from an empty file in O(sensible), for any sensible definition of "sensible".
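One way to sketch the "inseparable documentation" idea is a config format where every field carries its docs, rationale, and an example alongside the value, so tooling can render help directly from it. The structure and field names below are my own invention, just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Setting:
    """A config value bundled with its own documentation."""
    value: object
    doc: str            # what this setting does
    rationale: str = "" # why the default is what it is
    example: str = ""   # a worked example of use

config = {
    "timeout_s": Setting(
        value=30,
        doc="Seconds to wait for the backend before failing.",
        rationale="p99 backend latency is ~20s; 30s leaves headroom.",
        example="Use timeout_s=5 for interactive tools.",
    ),
}

# A tool can now render help straight from the config itself,
# so the docs cannot silently diverge from the values.
def describe(cfg):
    return "\n".join(f"{k} = {s.value}  # {s.doc}" for k, s in cfg.items())
```

An editor or diff view could then show the rationale next to any change to the value, which is the part a bare text file can never give you.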
Not arguing on salaries though. Even pretenders who have no clue can take a great cut off this nonsense.
The interface shows you all the components you can use, and what data they require.
I then move on to teaching Python, and watch kids get frustrated.
Languages like DRAKON should be the future of our profession - not typing 80 char lines into a terminal.
I don’t think it would survive a pass from most “UX” folks in such a nice state. It 1000% wouldn’t survive a designer or hybrid designer/UX person (it wouldn’t look pretty in screenshots on their portfolio).
The main problems in software tools are lack of consistent behavior, lies, and tons of ways to use a bunch of tools that all do basically the same thing (and you’ll probably have to know more than one). The hardest part’s not using them, exactly, it’s knowing all the different, stupid reasons they break. It’s a general quality issue more than a broader UX thing, I think. That extends to libraries. And I don’t also mean tools and libs from big names—I mostly mean them.
Let's take an example: specifying DNS servers in Windows.

Windows 95:
1. Right-click "Network" on the desktop -> Properties.
2. Double-click the TCP/IP protocol for the NIC.
3. Type the new DNS server.
4. Press OK.

Windows 10:
1. Press Start.
2. Search for Control Panel.
3. Open Network and Internet settings.
4. Select Change Adapter Settings.
5. Right-click the NIC and select Properties.
6. Select TCP/IP, then Properties.
7. Type the new server.
8. Press OK.
Sure, some may think that Windows 10 looks better than Windows 3.11 or 95, but I don't and I can't believe everyone does.
1. Usability tests where developers literally sit down and watch someone install and use your library from scratch (I find a lot of developers do not like to do this). Things like, seeing where they have to look up documentation (and how they do it), what bugs they hit, and how often they make common mistakes. I think a lot of this could be logged e.g. A developer signs up, you have their email + API key, you can connect the dots between what doc pages they view, how often, and what errors they commonly run into.
2. Doing whatever it takes to minimize the time to aha moment. This is absolutely critical for any product design effort, but not many companies measure this if any at all when it comes to DX. I think Twilio and maybe Stripe are the only ones that may have had this as a key onboarding KPI.
Ultimately I think the majority of developers who are capable of implementing these things are quite technical and used to the general state of DX, so they don't view bad DX as much of an issue unless it's really terrible.
Lastly, I really wish error messages would just be super informative. For example, something like "undefined method `my_method_name' for nil:NilClass (NoMethodError)" still feels a bit cryptic to someone newer to programming. If you could also tell me the human-readable variable that caused the issue, the one that was nil, and the exact line (the stack trace purely by itself can be confusing), that little touch would go a long way. Compare that error message to something highlighted in a different color that says "The variable you used called "contact" on line 87 was found to be nil; this is likely causing the issue". This way, when you run into the error and are scanning the stack trace, the computer is telling you as quickly as possible what may be wrong. Again, this is for the novice, since the way the original error is written is likely succinct enough for someone more experienced.
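A rough sketch of how a runtime or framework could surface that friendlier message, here in Python rather than Ruby (the comment's example would need the interpreter's cooperation; this just post-processes a caught exception, and recovering the actual variable name is a simplification left out here):

```python
import traceback

def explain_none_error(exc):
    """Turn a bare AttributeError on None into a human-oriented hint,
    pointing at the file and line where it happened."""
    tb = traceback.extract_tb(exc.__traceback__)[-1]
    return (
        f"Something on line {tb.lineno} of {tb.filename} was None "
        f"when you tried to use it ({exc}). "
        f"Offending code: {tb.line!r}"
    )

try:
    contact = None
    contact.name  # the kind of bug the comment above describes
except AttributeError as e:
    message = explain_none_error(e)
```

This only rewraps the information Python already has; the harder (and more valuable) step the comment asks for, naming the exact variable that was nil, needs support from the runtime itself, which is roughly what newer Python and Ruby versions have started adding to their error messages.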
Why doesn't every error message have a link to a specific page with discussions, instructions, etc. Or maybe even a button you can click where the machine tries a best guess at an automatic fix?
I mean, fairly often throughout the years. Particularly in communities for lisps, ruby, perl, python, C. More common 5-10 years back perhaps.
Relevance is a fairly common topic across the board. Discoverability is maybe the least common topic here, and one that's a pretty interesting one for PL design imo.
I haven't seen too many blog posts about these things lately, but they're frequent enough discussions in personal circles and in mailing lists/chats that this question seemed odd to me.
The problems the author identifies largely have already been solved. The solutions just haven't become universal, generally for reasons completely unrelated to the actual problem, and more to do with PR.
> When was the last time you heard of a programming language discussed in terms of discoverability, succinctness, relevance, let alone beauty?
The ruby-lang mailing list used to be full of these sorts of discussions. If the author hasn't come across these factors being spoken about, that says more about the social value systems around the dominant ecosystems than about any fundamental complexity. Ruby lost the PR war to JS.
> Coding tools were around before UI/UX was a thing...
...yeah, no. At least, not in a way that makes the author's point. COBOL was an attempt to improve the developer experience. UI research predates JS by decades.
This paragraph approaches the complexities in JS as though they were a natural consequence of when JS was written, as though it wasn't possible to have done any better so we've all got to live with the best that was available then, rather than what we know now. That's just not true. JS even when invented wasn't a good language. Brendan Eich wanted to write a Scheme, and we'd all have been better off if he'd got away with it, and also if he'd had more than 10 days to implement it. The things we complain about in JS were commonly known to be bad at the time, they just didn't end up fixed for reasons entirely unrelated to the technology.
I think it would be arrogant to think of software development as exceptional in this sense, but it's certainly reflected in a lot of software designs. The reality IMO is that if you aren't working off a set of insights and observations like the one listed in the article—regardless of your users' domain—you aren't making enough of an effort to design for your users.
I certainly may be wrong though, all sorts of stuff ends up being Turing-Complete.
I connect experience with hyped concepts that are already forgotten today.
That said, the tendency to decrease choice seems to serve only certain users. Others feel just as restricted as developers, who are also users, so the dichotomy should be questioned.
See here for examples of publications in the area: http://web.eecs.utk.edu/~azh/publications.html
Also relevant, I wrote a blog post for students to get started in human factors in software engineering: http://web.eecs.utk.edu/~azh/blog/guidehciseresearch.html