On a file transfer I'd really like to know how much has been copied, how much remains, and the current bit rate (a 5-second average, maybe). That gives me enough information to judge whether it's worth waiting around for it to finish, whether I should find an alternative, or whether the connection has dropped.
Staring at a spinner with no idea how fast or slow things are going is the worst, and I'm going to give up a lot sooner than if I knew what was happening.
Ordering a sandwich in a restaurant has apparently become less common nowadays; instead we order it online for delivery. I would find it extremely annoying if the delivery service kept sending me messages about its status. If the delivery is going to arrive in the time frame I asked for, I don't want any extra information from them. While I wait for the sandwich, I can order other things too. The only message I want about anything I ordered is that the delivery is not going to make it. If I received such a delivery error, I could submit another order with a different delivery service.
Now you can go down a level and consider how the restaurant operates in this case. The restaurant receives orders from a queue and makes them in batches. Whenever the delivery person comes, the restaurant hands out the packaged orders, each with its destination address. The restaurant does not need any extra information from you or the delivery person unless something unexpected happens. There is no need for extra communication unless there is an interruption.
If you are the restaurant owner, of course you need to manage ordering ingredients for the restaurant, and ordering sandwiches for yourself. Whatever you need to order, your system can be transparent about it. Once you've decided what to order, you can go back to paragraph two above and start from there.
Loved reading this; it reminds me of waiting ages while installing from floppy disks.
- Participants preferred whatever they saw first
- Otherwise, accelerating or rapidly accelerating progress bars were generally preferred
If you're designing progress bars with an unknown time to completion, my recommendation is to use an accelerating function for the first 95%, spread over the predicted median time to completion. Then use a linear timing function for the next 4%, spread over 3x the median time to completion. Automatically fill the bar when progress completes, no matter what.
e.g. if an action takes a median time of 4s to complete;
f(0 <= t < 4) = 0.95 * ((t / 4) ^ 2)
f(4 <= t <= 16) = 0.95 + (0.04 * ((t - 4) / 12))
f(t > 16) = 0.99
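A minimal Python sketch of that schedule (function name and the `median` parameter are my own; the curve matches the formulas above, which assume a 4s median):

```python
def displayed_progress(t: float, median: float = 4.0) -> float:
    """Map elapsed time t (seconds) to a displayed progress fraction,
    given a predicted median completion time.

    Accelerating curve to 95% over the median time, then a slow linear
    crawl to 99% over 3x the median, then hold at 99% until the task
    actually finishes (at which point the bar is filled regardless).
    """
    if t < median:
        return 0.95 * (t / median) ** 2
    elif t <= 4 * median:
        return 0.95 + 0.04 * ((t - median) / (3 * median))
    else:
        return 0.99
```

In a real UI you would call this on each animation frame with the elapsed time, and snap to 100% the moment the underlying work reports completion.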
You can adjust this however you see fit, but I find it gives a pleasant experience. I may not have been the first to invent or implement the idea; I found this description of the same idea here: https://cerealnumber.livejournal.com/27537.html
and an online implementation and demo here: https://jan-martinek.com/etc/zeno/
Or have zombie spinners eliminated all trust that they indicate ongoing activity?
It seems to me that most progress bars are lies.
One approach is to remember how long a similar task took, and present progress based on the assumption that this run will take about the same time.
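A minimal in-memory sketch of that approach (the class name and default are invented; a real tool would persist the history between runs):

```python
import time

class RememberingProgress:
    """Estimate progress by assuming a task takes about as long
    as it did the last time we ran it."""

    def __init__(self, default_seconds: float = 10.0):
        self.history: dict[str, float] = {}  # task name -> last observed duration
        self.default = default_seconds       # guess used before any history exists

    def start(self, task: str) -> None:
        self._task = task
        self._started = time.monotonic()

    def fraction(self) -> float:
        """Current progress estimate, capped at 99% until completion."""
        expected = self.history.get(self._task, self.default)
        elapsed = time.monotonic() - self._started
        return min(elapsed / expected, 0.99)

    def finish(self) -> None:
        """Record the actual duration for next time's estimate."""
        self.history[self._task] = time.monotonic() - self._started
```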
Though one interesting thing to note is that both of these curves violate one of the findings in this article: users seem to prefer the bar to move slower at the beginning than at the end, so what you likely want is more of an "ease-in" curve.
Should eventually figure out which tasks are IO bound, which are CPU bound, which are fixed time, etc., as well as which device configurations affect that. Could also predict that a service is down, given enough scale...
- Vendor: Foo Corp
- "Downloading files", Internet, 1000 MB
- "Extracting files", Disk, 2000 MB
- "Copying files into place", Disk, 4000 MB
- "Configuring", CPU, 100 seconds
The library would generate a progress bar that you'd update like:
- progress('Extracting files', 100), progress('Extracting files', 200), etc.
It would learn that 100MB of disk IO takes about this much time, and downloading 100MB from the Internet takes so long, and so forth. And the thing is, the estimates wouldn't even have to be particularly good as long as they were reasonably consistent for all of the same developer's projects. If CPU bound process #1 takes 10 seconds on the developer's laptop, and CPU bound process #2 takes 20 seconds, then the library could see how long the first takes on the user's hardware and then double it for the starting estimate of the second process.
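A rough sketch of what such a library could look like (all names here, `TaskSpec`, `SmartProgress`, and the starting rates, are invented for illustration; the real learning step would come from measured timings):

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    name: str       # e.g. "Extracting files"
    resource: str   # "internet", "disk", or "cpu"
    amount: float   # MB for IO tasks, nominal seconds for CPU tasks

class SmartProgress:
    """Weight each task by amount x estimated seconds-per-unit for its
    resource, so the overall bar reflects predicted wall time rather
    than raw byte counts."""

    # Starting guesses (seconds per MB, or per nominal CPU second);
    # a real implementation would refine these from observed runs.
    DEFAULT_RATES = {"internet": 0.1, "disk": 0.02, "cpu": 1.0}

    def __init__(self, tasks: list[TaskSpec]):
        self.tasks = {t.name: t for t in tasks}
        self.done = {t.name: 0.0 for t in tasks}
        self.rates = dict(self.DEFAULT_RATES)

    def progress(self, name: str, amount_done: float) -> float:
        """Record progress on one task; return the overall fraction."""
        self.done[name] = min(amount_done, self.tasks[name].amount)
        total = sum(t.amount * self.rates[t.resource]
                    for t in self.tasks.values())
        done = sum(self.done[n] * self.rates[self.tasks[n].resource]
                   for n in self.done)
        return done / total

    def observe_rate(self, resource: str, seconds_per_unit: float) -> None:
        """Feed back a measured rate from this machine or a prior task,
        e.g. double a CPU estimate after timing the first CPU-bound step."""
        self.rates[resource] = seconds_per_unit
```

The point of the weighting is exactly the consistency argument above: the absolute rates can be wrong, but as long as they are proportionally right across tasks, the bar advances at a believable pace.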
I'd love so much for this to exist.
Edit: because of your comment I finally got around to writing this up at https://honeypot.net/post/smart-progress-bars/
If it wasn't clear, I completely agree that the data should be aggregated across all apps, all tasks.
I thought the goal of a progress bar was to report reality, not manipulate users into thinking that your slow code is faster than it actually is.
Since when have dark patterns infiltrated academic HCI?
As a software developer, things like % complete and estimated finish time are generally far more useful (for debugging things like "which stage is taking too long", for instance), but those specific details are rather further down the list of priorities for most average users.
Studies cannot "back up" a goal. The goal is defined by the designer, developer, or in this case, the researcher.
Users also want to know if the app is lying to them. Sometimes an app crashes but keeps showing a progress bar. Sometimes the app says that something will take less time than it does. The more that developers make progress bars that lie to them, the less that users can depend on progress bars to tell them the truth and make informed decisions about what to do with their lives. These computers are tools for them. You don't know better than your users.
I am having that exact problem in an application I am developing.
Display a progress bar for downloading, and then another one for processing. You can caption them, too.
You have one progress bar showing progress for this floppy, and another showing progress for the entire operation.
It's probably easier to visualize than describe, but I have longer descriptions on the thinking behind it on my blog/GitHub repo. Unfortunately, my demo site succumbed to JS CDN bit rot and I keep forgetting to update the demo to something more recent.
(ETA: Forgot the link: https://github.com/WorldMaker/compradprog Also thought I could point out that the idea mostly jibes with the findings in the paper here.)
Why do you classify that as a dark pattern? It's not harming people in any way, but the opposite.
It's lying. Lying is inherently harmful. Just because a computer is doing it doesn't make it any less unethical. Especially since people trust computers to be precise and correct.
Please explain how lying to your users is good for them.
Participants tended to prefer (i.e., perceive as faster) whichever function they saw first. Of the 990 paired comparisons, the first function was preferred 376 times (38%), the second 262 times (26%), with no preference 352 times (36%).
Also relevant: Tom Scott's recent video[0]