This led me to wondering how software development would have progressed if CPU clock speeds were effectively 20x slower.
Might the overall greater pressure for performance have kept us writing lower-level code with more bugs while shipping fewer features? Or could it actually be that having all this free compute to throw around has comparatively gotten us into trouble, because we've been able to just rapidly prototype and eschew more formal methods and professionalization?
Windows 95 could do a decently responsive desktop UI on an 80386. Coding was a lot less elegant in one way - C code that returns a HWND and all that - but with the number of levels of indirection and abstraction these days, we've made some things easier at the cost of making other things more obfuscated.
It’s written in Zig, not C. But that style of programming is still available to us if we want it. Even in more modern languages.
Honestly I’m really tempted to try to throw together a 90s style fantasy desktop environment and widget library and make some apps for it. There’s something about that era of computing that feels great.
The style of programming does work for general-purpose computing, but their requirements enable a significant percentage of "orders of magnitude faster than Postgres".
SerenityOS might be exactly what you're looking for. Join the community and make some apps, it's great (both the community and the OS/dev experience)!
They're full KDE installs, and openSUSE 10.2 wasn't a lightweight distribution in its day. But now, even running in VMs, UI and network response are noticeably snappy, and sparing of resources by modern standards.
# $os is assumed to have been populated from uname(1), e.g.:
os="$(uname -s)"

if [ "$os" = "Darwin" ]; then
  arch="universal"
  os="macos"
elif [ "$os" = "Linux" ]; then
  os="linux"
else
  echo "Unsupported OS." >&2
  exit 1
fi
This is nothing to emulate. If you're interested in that, you should definitely check out SerenityOS!
I remember switching to VB.NET on a 600 MHz machine, and the IDE was a sluggish piece of garbage.
As a former JavaScript developer of 15 years, I can see that things are changing in this regard with hiring slowing down, but there remains close to no incentive to be good at any of this. Even if you are good at it and defy all expectations by finding a job that values bleeding-edge performance, you won't be paid any more for the rarity of your talent/experience, so why bother?
My own experience from a time at a SpringBoot/iBATIS/Hibernate etc. shop (so server-side not UI side) is that it's all well and good running on "rocket fuel" as the consultants say, until something goes wrong. Then at some point you need the person who understands HTTP and SQL and other antique stuff to diagnose the problem, even if you can fix it at the abstraction layer.
One of the problems we had one day was that a particular script "just wasn't working", and it turned out the file was being sent correctly, but with the content-type set to "text/html" by mistake, because someone's "clever trick" interfered with SpringBoot's "magic" so the content-type detection wasn't running the way you'd want. Easy to fix, but you need to know what's going on in the first place.
Over time I also developed a feel for code smells of the form `return SomeItemDAO.fetchAll().size()` which runs just fine on the test server with a few thousand items, then you deploy it to prod where there's tens of millions and the database is on a different machine. It turns out SELECT COUNT is a thing!
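A minimal sketch of that smell in Python with sqlite3 (hypothetical table and names; the original anecdote was a Java DAO, but the smell is the same in any language): pulling every row across the wire just to count them, versus letting the database return one integer.

```python
import sqlite3

# Toy in-memory table standing in for the production database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(100_000)])

# The code-smell version: materialize every row just to count them.
count_slow = len(conn.execute("SELECT * FROM items").fetchall())

# The fix: let the database count and ship back a single integer.
count_fast = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]

assert count_slow == count_fast == 100_000
```

On a test box with a few thousand local rows both look instant; with tens of millions of rows on a remote database, only the second one stays instant.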
Or are we simply reinventing the wheel each time, where the set of features over the last few decades really hasn't changed that much and just cycles through feature-set phases?
Concretely, "shadow DOM", reactive (web) components and js frameworks etc. etc. are all ways of trying to get a set of rich UI components (like we're used to from desktop applications) into an environment that was originally built for static text pages, and has been expanded by patchwork ever since. DOM updates are slow and cause flickering because the original model was you submit a form and the entire page reloads; updating part of the page in response to an asynchronous request or a local event wasn't in the original design space, and every solution we've tacked on so far sucks in a slightly different way. Not helped by the complexity of being able to dynamically change the layout and size of every single element - CSS is incredibly powerful, but setting a button to have a 1px border only on hover for example has a tendency to make other page elements "jiggle" in ways you don't want.
The problem space is much larger now than it used to be. 20 years ago, you didn't care about things like responsive design, accessibility, or fractional scaling; even internationalization was basic or non-existent. There's a long tail of features (often extremely complex, like accessibility) which are not immediately obvious to an English speaker with normal vision staring at a standard-sized display.
This is how it has been since I've been in the industry (2005). It's part of why I got out of web development; it felt like a whole lot of relearning how to do the same thing every few years. At first, there were incremental feature gains, but after a while, it felt like newer frameworks or approaches were a functional step back (i.e., NoSQL; the fact that it eventually came to stand for "not only SQL" tells all).
Edit: Thanks to the other comments, I can see it's a crude re-stating of Wirth's Law [0]
I think the GPU would do a lot more work in most applications than it does today. If a process needs to be super fast, when applicable, I write a compute shader. I've written ridiculous compute shaders that do ridiculous things. They are stupidly fast. One time I reduced something from a 15 minute execution time to running hundreds of times per second. And I didn't even do that good of a job with the shader code.
(The file alignment still defaults to a 512-byte (0x200) sector size, which means the inefficiency is there today even though you may not notice it in isolation, but the "sector"/buffer size has been at least 4096 bytes since 2011. [2])
> The /FILEALIGN option can be used to make disk utilization more efficient, or to make page loads from disk faster. [Assuming it matches the page size = 4096 bytes.] [1]
> All hard drive manufacturers committed to shipping new hard drive platforms for desktop and notebook products with the Advanced Format sector formatting [4096-byte or greater] by January 2011. [2]
[1] https://learn.microsoft.com/en-us/cpp/build/reference/fileal...
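To make the tradeoff the /FILEALIGN docs describe concrete, here's a back-of-the-envelope sketch (hypothetical section sizes, not measurements of any real binary): rounding each section up to the alignment boundary costs more padding at 4096 bytes than at 512, which is the disk-utilization side of the trade against faster page loads.

```python
def aligned_size(size: int, alignment: int) -> int:
    """Round size up to the next multiple of alignment."""
    return -(-size // alignment) * alignment

# Hypothetical raw section sizes in bytes.
sections = [1_300, 45_000, 7_900, 512]

for alignment in (512, 4096):
    total = sum(aligned_size(s, alignment) for s in sections)
    padding = total - sum(sections)
    print(f"alignment={alignment}: file size {total} bytes ({padding} bytes padding)")
```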
That's too good a story not to have just a little more detail. Are you willing to share more?
I read the paper that described the algorithm and implemented code on the CPU, thinking, quite stupidly, that it would be fast enough. Not fast, but fast enough. Nope. Performance was utterly horrible on my tiny 128x128 pixel test case. The hoped-for use cases, data sets of 4096x4096 or 10000x10000 were hopeless.
Performance was bad for a few key reasons: the original data was floating point, and it went through several complicated transformations before being quantized to RGBA. The transforms meant that the loops were like two lines total, with an ~800 line inner loop, plus quantization of course (which could not be done until you had the final results). In GLSL there are functions to do all the transformations, and most of them are hyper-optimized, or even have dedicated silicon in many cases. FMA, for example.
So I wrote some infra to make it possible to use a compute shader to do it. And I use the term 'infra' quite loosely. I configured our application to link to OpenGL and then added support for compute shaders. After a few days of pure hell, I was able to upload a texture, modify the memory with a compute shader, and then download the result. The whole notion of configuring workgroups and local groups was like having my pants set on fire. Especially for someone who had never worked on a GPU before. But OpenGL, it's just a simple C API, right? What could go wrong? There's all these helpful enumerations so the functions will be easy to call. And pixel formats, I know what those are. Color formats? Oh this won't be hard.
But once everything was working, it only took a few more days to make the compute shader work. The hardest part was reconfiguring my brain to stop thinking about the algorithm in terms of traversing the image in a double nested for loop - which is what you would do on the CPU. Actually, the first time I wrote it, that's what I did, in the shader. Yes, I actually did that. And it wasn't fast at all. Oh man, it felt like I was fucked.
But in the end, it could process the 4096x4096 use case at 75 FPS, and even better, when I learned about array textures, I found that it could do even more work in parallel. That's how I got it from 15 minutes to hundreds of frames per second.
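That mental shift (visiting each pixel in a loop versus expressing the transform over the whole image at once) can be sketched on the CPU with NumPy as a rough stand-in for per-texel dispatch; the per-pixel transform here is a made-up FMA, not the actual algorithm from the story.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((4096, 4096), dtype=np.float32)  # stand-in for the float data

# CPU-style thinking: visit each pixel in a double nested loop.
def transform_loops(a):
    out = np.empty_like(a)
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            out[y, x] = a[y, x] * 0.5 + 0.25  # hypothetical per-pixel FMA
    return out

# GPU-style thinking: express the same transform over the whole array;
# a compute shader dispatches this per-texel across thousands of threads.
def transform_vectorized(a):
    return a * 0.5 + 0.25

small = img[:64, :64]  # loops over the full image would take a while
assert np.allclose(transform_loops(small), transform_vectorized(small))
```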
I am primarily doing game development and HPC; I am decently familiar with C++, but desktop UI has been a pain point for me so far. Most GUI tools I write in C++ are using ImGui, or they are written in C#.
1. What is your goal? Do you need to run on Windows and Linux? Qt isn't bad, although I personally think the UI looks a little weird. It is definitely highly opinionated and parts of it are quite strange IMHO. There are probably lots of jobs writing with Qt, which might be a nice side bonus of learning the framework.
2. Do you need a totally custom UI? If so, I would stay with ImGui. You might find Windows UI development extremely frustrating, especially since you have to owner-draw a lot of stuff to get a really custom UI. That can be an extremely difficult and terrible experience, and I don't recommend it to anyone who isn't already an expert at it.
3. State management? You mean like the state of the UI? Is a button pressed? Could you be more specific?
4. User interaction? This is such a broad area. Could you be more specific? Like filtering mouse and keyboard messages? Windows has several APIs for this.
EDITED TO ADD: In my experience, which is significant, either use a GUI framework and operate within its capabilities, or draw everything yourself. In Windows, your life will become exceedingly difficult if you use a framework when you want to do a lot of custom components, or if you want a lot of custom look/feel. If it were me, I would draw everything myself. People don't need the consistency of the Windows UI anymore, provided you stick with common and well-known metaphors like text boxes and property editors, etc.
Are you living in the same world as the rest of us? Nowadays programs are shipped with plenty of bugs, mostly because patching them afterwards is "cheap". In the old days that wasn't as cheap.
So having lower-powered computers would have made us write programs with fewer features, but also fewer bugs. Formal coding would be up, and instead of moving fast and breaking things, most serious businesses would be writing Coq or Idris tests for their programs.
Bootcamps also wouldn't be a thing, unless they were at least a couple of years long. We'd need people knowing about complexity, big O, defensive programming, and plenty of other things.
And plenty of things we take for granted would be far away. Starting with LLMs and maybe even most forms of autocomplete and automatic tooling.
1. Make it work
2. Get it right
3. Make it fast ... pretty
Weird observation, but from personal experience a good percentage of development stops at 1, with periodic blips to 3 when issues pop up (and of course an eventual rewrite when new people come onboard), as a consequence of not focusing on 2, due to how we build today.
The limitations and features we had then are a minimum starting point.
So I'm thinking around the era of a 100 MHz 486 machine. We'd have at least that (think multi-player Doom and Quake era as a starting point).
We had Windows, preemptive multithreading, networks, the internet, large hard drives - pretty much the bare bones of today.
Of course cpu-intensive things would be constrained. Voice recognition. CGI. But we'd have a lot more cores, and likely more multi-thread approaches to programming in general.
A 20x reduction really isn't that significant in a historical context. Gray beards here have seen CPU performance increase by 200x or more over their computing careers since the late 80s or early 90s. And that is ignoring multicore/SMP gains.
I found this nice figure trying to summarize CPU performance trends over many decades: https://www.researchgate.net/figure/CPU-performance-Historic...
Prognostication depends on other unstated assumptions about the market or fundamental technological limitations. Generally, I'd say that if the single CPU core trend was more flattened, we would have seen more emphasis on parallel methods including SIMD, multicore, and the kinds of GPGPU architecture we're already familiar with.
The kind of programming model that is at the heart of CUDA, OpenCL, etc is exactly what the high-performance numerical computing researchers were using back in the late 80s to early 90s when computers were much slower. They were simply applying it to exotic multi-socket SMP machines and networks of computers, rather than arrays of processors on a single massive chip.
However, IMO simply thinking in terms of actual chips that existed isn't that interesting. What would computing look like if the PIII had been a 12-core CPU at 500 MHz? That's a little closer to 5% of modern chips, and something nobody worked with.
Alternatively, what would the 486 era have looked like with gigabytes of RAM and an SSD?
Old software on older hardware was "responsive" because the libraries it used came with far fewer built-in capabilities (nice UI relayout, nice font rendering, internationalization, UI scaling). Also, less code means less memory, and rotating-disk swap meant huge slowdowns when hit, so being memory hungry was just not an option.
People who remember fast software were mostly people who could afford to renew their computer every year or so at top-bracket prices, and don't realize that the merely inconvenient sluggishness of a 6-7 year old computer today was just impossible to imagine back then.
For the "let's imagine the current day from that past" exercise, I would say we would be mostly in the same place, without AI, with much less abundance of custom software, and more investment in using and building properly designed software stacks. E.g., we would have a proper few UI libraries atop the web/DOM and not the utter mess of today, and many more native apps. Android might not have prevailed as it has; it relied a lot on cheap CPU improvements for its success.
Still, safe languages like Rust would have emerged; roadblocks in compiler performance would have slowed things down a bit, but interest would have emerged even faster and stronger.
They weren't some halcyon days of bug-free software back then, quite the opposite.
It was always slow on contemporary hardware. On affordable PCs Win 3.1 was so slow you could see it redrawing windows and menus. Win 95 was so resource hungry, people wrote songs about it (https://www.youtube.com/watch?v=DOwQKWiRJAA). XP seemed fast only at the end of its very long life, due to Longhorn project failing and delaying its famously shitty successor.
It wasn't just Windows. Classic MacOS for most of its life could not drag windows with their contents in real time. Mac OS X was a slideshow before 10.4, and Macs kept frequently beachballing until they got SSDs.
V8 might just invent like 3 more execution engines though, 1 of which uses an external TPU (open source though!) to run code JITed to HVM (Higher Order Virtual Machine) that everyone is eventually compelled to adopt, one can't be too sure JS will lose. /s
Moore's law says that CPU speeds double every 2 years. 2 years * log2(20) = 8.64 years, so we'd just be 8.64 years late; that's it, literally no reason for anything to be any different apart from that.
95% of comments seem to completely overlook this fact and go into deep explanations about how everything would be different. It's pretty surprising that even a fairly sciencey community like Hacker News still doesn't get exponentials.
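The arithmetic, under that doubling-every-2-years assumption, checks out:

```python
import math

doubling_period_years = 2
slowdown_factor = 20

# Years of progress needed to recover a 20x performance deficit,
# assuming performance doubles every 2 years.
lag_years = doubling_period_years * math.log2(slowdown_factor)
print(f"{lag_years:.2f} years")  # ~8.64
```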
But apparently you also didn't get the question: hardware would stay slow while software continued evolving, and the question is how, given the hardware constraints. It would definitely not be "exactly the same, except 8.64 years later".
Obligatory XKCD: https://xkcd.com/435/
My comment: https://news.ycombinator.com/item?id=39977838
You wouldn't have people wasting CPU cycles on pointless animation. You'd have people thinking about how long it takes to follow a pointer. You'd have people seriously thinking about whether Spectre and Meltdown and subsequent bugs really need to be worked around when it costs you 50% of the meager performance you still have.
I might ask if everything else is 20x times slower too. GPU speeds, memory bandwidth, network bandwidth.
CISC computers, which did more in parallel per instruction, would be common because they existed for concrete reasons: the settling time for things in a discrete logic system was high, so you needed to try to do as much as possible inside that time. (That's a stretch argument; they were what they were, but I do think the DEC 5-operand instruction model in part reflected "god, what can we do while we're here" attitudes.) We'd probably have a lot more Cray-1-like parallelism, where a high-frequency clock drove simple logic to do things in parallel over matrices, so I guess that's GPU cards.
Definitely sounds right that we'd get an earlier, heavier emphasis on parallelism and hardware acceleration. I'm guessing the slower speed of causality also applies to propagation delay and memory latencies, so there wouldn't be new motivation for particular architectural decisions beyond "God please make this fast enough for our real-time control systems or human interaction needs".
If we got deep learning years or decades earlier, that also seems scary for AI existential risk, as we are just barely starting to figure out how the big inscrutable matrices work, and that's with the benefit of more time people have had to sound the alarm bells and attract talent and funding for AI interpretability research.
Compared to a modern CPU it was maybe 5000x slower, the early Vax systems that Unix ran on were maybe 6 times faster.
People certainly wrote smaller programs; we'd just stopped using cards, and carrying more than a box (1,000 cards) around was a chore. You spent more time thinking about bugs (compiling was a lot slower, and jobs went in a queue; you were sharing the machine with others).
But we still got our work done, with more thinking and waiting.
As with all else - just look back to computers about 20 years ago, and that'll give you a good idea of what it'd be like. I guess the main difference is that we might have still been able to miniaturise the transistors in a chip as well as we do now, so you'd still have multi-core computers, which they didn't really do very often 20 years ago.
They could probably figure out a less efficient parallel bus with lots more leads rather than the pixel, line, and frame sync we have now, at least once we moved on from CRTs (I don’t know how those work wrt phosphors). It’d change the cost tradeoffs and mean more chips nearer the display but not really put us back, as long as other components kept up. I.e. pcie line rate is developing much faster than display size/framerate/bandwidth so limiting factor is the panel development and connection standards.
To stick with your analogy: there would be more optimization, and the rate of releasing stuff would be slower because it would have to be tested. That's it. Remember cartridge-based console games? How many patches or day-one updates did you have to install there? How many times would they crash or soft-lock themselves? People tested more and optimized more because there were constraints.
Today we have plenty of resources, and thus you can be wasteful. Managers trade waste for speed. If you can make it work unoptimized, ship a 150 GB installer and an 80 GB day-one patch, do it NOW. Money today, not when you're done making it "better" for the user.
Sci-Fi answer: We wouldn't be playing the same type of games. Why would we have to rely on something like our representation of graphics? If the cognition would be 20x faster and more powerful we probably wouldn't need abstractions but would have found a way to dump data into the cognition stream more directly.
I think the idea that 20x faster cognition would just mean "could watch a movie at 480fps" is too limited. More like you could play 24 movies per second and still understand what's going on.
I think the frame of "wasteful" is not correct. It's wasteful not to use resources if other resources are restricted and can be substituted by the plentiful ones. Of course the allocation of current resources can be debated, but that is not caused by the extra CPU performance, storage, and RAM that is available.
We've had quite a ride from 8 bit machines with toggle switches and not even a boot rom, nor floating point, to systems that can do 50 trillion 32 bit floating point operations per second, for the same price[1].
Remember that Lisp, a high level language, was invented in 1960, and ran on machines even slower than the first Altair.
The era of "free money" is over, as is the era of ever more compute. It's time to make better use of the silicon, to get one last slice of the pie.
[1] The Altair was $500 assembled in 1975, which is $2900 today. I'm not sure how best to invest $2900 to get the most compute today. My best guess is an NVidia RTX 4080.
Probably higher IQ, as the IQ-lowering social media we use would barely work.
I think the only solution to the problem is to keep the memory and disks space very low.
The sharding of the developer has made things more inefficient in some ways.
If, instead of getting major hardware wins each year, they came once a decade, things would be much better, because then there would be pressure to make it so.
See eg. the countless HN posts "hey look! I've used X to do Y" showing off some cool concept.
The proper thing would be to take it as that: a concept. Play with it, mod it, test varieties.
Like it? Then take the essential functionality, and implement in resource-efficient manner using appropriate programming language(s). And take a looong, hard look at "is this necessary?" before forcing it onto everyone's PCs/mobile devices.
But what happens in practice? Proof-of-concept gets modded, extended, integrated as-is into other projects, resource frugality be damned. GHz CPUs & gobs of RAM crunch through it anyway, right? And before you know it, Y built on top of X is a staple building brick that 1001 other projects sit on top of. Rinse & repeat.
A factor of 20 is 'nothing'. And certainly not the issue here. Just look at what was already possible (and done!) when 300 MHz CPUs were state-of-the-art.
Wirth's law very much applies.
If there is a prolonged economic slowdown (not crash, please!), then resources will be allocated to optimizing CPU cycles and all that hype-based developments will have less resources allocated to them.
It can be an imperative for some of us to fight for efficiency, but we shouldn't do it in an all-or-nothing way. Know its advantages and disadvantages and work within that knowledge framework.
And in some cases, multi-threading would be the only way to do things. Where right now, single-threaded file copy, decompression or draw-calls are largely a thing because it's way easier to do and there is no need to change it outside professional applications.
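As a toy sketch of that kind of parallel decompression (Python stand-in; hypothetical chunking, since a real archive format needs independently compressed chunks for this to work at all), compare decompressing one chunk after another with spreading the chunks over a thread pool:

```python
import bz2
import concurrent.futures

# Eight independently compressed chunks standing in for a chunked archive.
chunks = [bz2.compress(bytes([i]) * 1_000_000) for i in range(8)]

# Single-threaded: decompress chunks one after another.
serial = [bz2.decompress(c) for c in chunks]

# Multi-threaded: CPython's bz2 releases the GIL during (de)compression,
# so the chunks can actually decompress in parallel on multiple cores.
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel = list(pool.map(bz2.decompress, chunks))

assert serial == parallel
```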
Also, some things might actually be better than they are right now. Having to wait for pointless animations to finish before a UI element becomes usable should not be a thing. If there was no CPU performance for this kind of nonsense, they wouldn't be there.
Please don't mix clock speeds with performance. An Athlon™ 5350 from 2014 is >20x slower single-threaded than a Core i9-14900K, yet it's 2 GHz vs. 5.8 GHz. Architecture, cache, and memory speed matter A LOT.
What? I played Quake 1-3, TFC over 56k with 300ms latency, on a CPU at least 20x slower than modern CPUs. Tribes 2 with 63 other players. Arguably more fun than the prescriptive matchmaking in games these days.
Games are a product of their environment. You don't let a pesky thing like lag stop people having fun.
What would that even mean, being 20x faster than the speed of light? What does it imply?