Debugging a predictable single-thread, single-core system was also child's play compared to today's distributed, networked beasts, each running on lots of cores and thousands of threads.
Nineties problems were contained in a small box. Oh, and no internet like today, so you needed to order books and magazines, and to use BBSes and Usenet. Even then, a lot of it was reinventing the wheel again and again.
Modern problems are sometimes nearly uncontained (think software like web browsers, etc.).
In the 90's the code I wrote was more difficult. Today coding is much easier (better languages, tooling, etc.). However, the systems I build are many times more complex.
In the 90's we hired the best programmers, today we hire the people who are best at managing ambiguity and complexity.
All of this is very hand-wavy as there are certainly still disciplines where pure programming skill is most important, but those seem to be fewer and fewer every day.
Back then you felt pretty skilled, I am sure; now I often feel anybody could do my job. Unless you are working for the big 4 or similar, many jobs don't give you that excitement.
He gets a simple graphical effect going on the Amiga in only a few lines of assembly. Doing the same thing using DirectX in C++ would take you all day!
This is an interesting line of argument - in what way, and what could be done to improve the ergonomics?
> He gets a simple graphical effect going on the Amiga in only a few lines of assembly. Doing the same thing using DirectX in C++ would take you all day!
This is absolutely true, but in something like ShaderToy you can go back to producing complex pixel-bashing effects with a huge amount of processing power.
It's just the external Tower of Babel from boot to usability has got a lot larger.
On the 68k you had eight 32-bit data registers (d0-d7) and eight address registers (a0-a7). If you wanted to access 8, 16, or 32 bits you could do so like this:
move.w #123,d0          ; move 16-bit number into d0
move.b #123,d1          ; move byte into d1
move.l #SOME_ADDRESS,a0 ; set address reg a0 to point to a memory location
move.b d1,(a0)          ; move contents of d1 to the memory location a0 points to
Nice and easy to work with and remember.

On x86, thanks to its long and convoluted history, you have all kinds of doubled-up registers which you have to refer to by different names depending on what you are doing, plus tons of historical cruft.
On top of the CPU, the old home computers had no historical cruft and it was very easy to talk to the hardware or system firmware; usually you'd just be getting and setting data at fixed memory locations. I can read an Amiga mouse click in one line of 68k. I've no idea how you'd do it on a modern PC or even Java! Modern systems just aren't as integrated, for better and worse.
Assembly language was also part of mainstream programming back then. You'd learn BASIC, then go straight to assembly if you wanted to do anything serious. So there were computer magazine articles on assembly, and even children's books[1]. My first assembler, Devpac, came from a magazine coverdisk with a tutorial from Bullfrog, Peter Molyneux's old game company[2].
So there was a whole range of cultural and technical reasons for assembly language being much more of a human-usable technology back in the day.
>It's just the external Tower of Babel from boot to usability has got a lot larger.
Yes I agree, I kinda miss being able to see the ground, which is probably why I find retro programming so appealing.
[1] https://archive.org/details/machine-code-for-beginners
[2] https://archive.org/details/amigaformat39/page/n61
Due to the enormous complexity of modern CPUs I'm not sure there's anything that could be done. With the 486 and contemporary (and earlier) uarches you could largely expect the CPU to execute exactly what you wrote so understanding the performance impact of any given bit of assembly was pretty straightforward. Then CPUs started adding features like superscalar, speculative, and out-of-order execution, branch prediction, deep pipelines, register renaming, and multi-level caching that massively complicate modeling the performance of any given code.
For example you may need to explicitly clear an architectural register before reusing it for a new calculation to avoid creating a false dependency in the uarch which would prevent the CPU from executing the calculations in parallel. Knowing when this is necessary can be hard and the rules are usually different between different uarches, even within the same uarch family.
Good assembly programmers who are aware of all this complexity can still beat compilers, but they certainly can't do that at the scale of code that compilers routinely generate. Thankfully compilers are generally "good enough" these days, and assembly only needs to be hand-written for very hot inner loops in performance-critical code, or for cryptographic code where the exact timing characteristics of the code could leak information if not handled correctly.
In a way that's true. I also enjoyed writing assembler back then, especially on 68k.
However, to really extract the last cycle you often ended up generating a block of code at runtime (a kind of very basic JIT).
Sometimes self-modifying code provided the final oomph to get something running fast enough. The reasons varied: sometimes it was running out of registers, sometimes dynamically changing the performed operation without a branch.
Too bad that trick fared poorly on later CPUs with instruction prefetching and caching...