If I were a teacher, I think that's how I would react. The next punishment would be handwritten though, or something else; you only get one shot with that trick.
I admit that in the early 80s, most teachers were not very savvy with computers, so it is possible that he really believed your brother typed it all. But also keep in mind that teachers are pretty good at smelling bullshit, and it is pretty obvious that when pupils do things in an unusual way, it is not to make it harder on themselves, so he probably knew that there was some trick and just let it pass. I mean, these are just lines, nothing worth persecuting clever kids for.
I plonked out a Bash one-liner in a couple of minutes and handed in the result. My disgruntled superior had me do it again by hand.
Hold up, mip mapping is not a way to save RAM. In fact it consumes more RAM, because not only do you need the full-size texture in RAM, you also need to keep the smaller copies there.
What it does save on is memory bandwidth, because you don't need to sample from the full-size texture for objects that are far away. And it improves visual quality, since you can use expensive scaling algorithms to produce high-quality downscaled copies.
While that's how it's mostly used today, mipmapping refers to storing scaled-down versions of the same texture along with the original texture. You don't have to load all of them to RAM (or VRAM). You can simply store the mipmaps on a hard drive, and only load the required mip levels to the GPU, thus saving VRAM. This way, you are using mipmaps as texture LODs. Think about satellite images of Google Maps.
>What it does save on is memory bandwidth, because you don't need to sample from the full-size texture
Not exactly. It does save memory bandwidth, but because you're sampling fewer texels and reducing cache misses, not because you're sampling from a smaller texture. You're simply not downscaling the texture on the fly; instead you're using precalculated downscaled pixel values.
Of course one can drop the higher-resolution levels, but AFAIK that technique came long after mipmapping, as it requires high CPU-to-GPU bandwidth, which wasn't there in the early days.
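To put a number on the "consumes more RAM" point above: the full mip chain is a geometric series that converges to just under 4/3 of the base texture, so the overhead is bounded at a third. A minimal sketch (the helper name is mine, not any real API):

```cpp
#include <cstddef>

// Total texel count for a full mip chain of a square power-of-two
// texture, from baseSize x baseSize down to 1x1.
std::size_t mipChainTexels(std::size_t baseSize) {
    std::size_t total = 0;
    for (std::size_t s = baseSize; s >= 1; s /= 2) {
        total += s * s;   // one mip level
        if (s == 1) break;
    }
    return total;
}
```

For a 1024x1024 base, the chain totals 1,398,101 texels, i.e. about 1.333x the 1,048,576 texels of the base level alone.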
I can barely begin to imagine all the pain involved in creating AAA games. Decades of whitepapers to read and build upon only to get to a point where consumers aren't downright angry at the quality of graphics being given to them.
How large is a "large" tilemap?
And what is it that makes them particularly hard in the AAA space?
Turns out the trick is to not submit each individual tile to the GPU. You have to roll them up into larger, chunked meshes and submit those meshes to lessen the impact.
I'm sure this is game dev 101, but I've never been close enough to the GPU to have to actually care about this sort of stuff. I cut my teeth learning the performance characteristics of the DOM not the GPU.
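The draw-call arithmetic behind that chunking trick, as a toy sketch (map and chunk sizes are made-up numbers, not from any real engine):

```cpp
// Number of GPU submissions per frame when tiles are batched into
// chunkSize x chunkSize meshes. chunkSize == 1 is the naive
// one-draw-call-per-tile approach.
int drawCallsPerFrame(int mapW, int mapH, int chunkSize) {
    int chunksX = (mapW + chunkSize - 1) / chunkSize; // ceiling division
    int chunksY = (mapH + chunkSize - 1) / chunkSize;
    return chunksX * chunksY;
}
```

For a 1024x1024 tilemap, per-tile submission is over a million draw calls per frame, while 32x32 chunks bring it down to 1024.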
I wasn't trying to imply that I was making an AAA game. Just that, from the outside, it seemed like the difficulty would be in designing game mechanics, making something actually fun, marketing, etc., but in reality there's an ever-steepening technical learning curve when you want to deliver nicer looking stuff. When I look at how many days I've spent working on my game, and how terrible it looks, and then I look at something like Cyberpunk 2077... I struggle to conceive how we as humans were able to create those levels of graphics in a timeframe that allows for the game to be profitable.
https://github.com/MeoMix/symbiants btw. Feel free to browse the code (or come volunteer! I've got a few other HNers tinkering alongside me these days)
(Non-native speaker)
I think a lot of devs who go into comp-sci and land in tech right after college have very little understanding of how hard life can be outside of tech. This isn't an easy industry, but goddamn, it's so much better than the vast majority of other options out there.
To the extent that if I ever went back, I would probably immediately focus on roboticizing most of the work.
It's like, yeah, in an ideal world, all 8+ billion humans could pick up craft coffee as they stroll 5 minutes to their big tech employer who doesn't actually care when they show up. There are legitimate criticisms of how cities have been designed, but going further by vilifying car owners in the present and suggesting everyone could just walk or ride a bike to work is a level of being out of touch that only Silicon Valley types could achieve. Even politicians aren't as brazen with such silliness. As if the rest of the world can choose where their employer is located, how far their groceries are, or how much child support they have.
Just be a tech bro, bro.
The journey toward enlightenment or personal growth is highly individualized and subjective, which makes comparisons with others not only unhelpful but also irrelevant.
I think the thing about programming is that so much of the "old" world misunderstood how it was hard. There is a lot more creative process stuff involved with programming, and not nearly as much that is codified, that you need to study, as in other fields like physics. And so, it gives people wide leeway to footgun themselves. Just as much, going the other way, it puts a premium on the ability to think and reason in abstractions, to which point, we don't have as strong a "liberal arts" for the STEM world as we do for the literary / humanities / cultural world.
So yes, on one hand, programming is not as hard as physics (to make one example of it). But on the other hand, I knew a person or two doing research in physics who complained how much their work was held back because their colleagues didn't appreciate the effort it took to process all the data that came from their experiments. Folks didn't appreciate how critical all that code was to their research. We see this sort of dynamic replicated throughout the rest of the world.
I've never experienced that same end-of-day exhaustion before or since doing construction (and I was a fit & healthy college student).
No matter how bad my day is, sitting at my desk and typing for a LOT of money, I know it's better than many alternatives.
Battlezone is actual 3D, and 1980.
I'm not saying that is first, either, just that 1980 < 1981 and that Tempest is more of a form of 2.5D. (I remember it being an interesting, fast-paced game with good sound, but fundamentally it's just Space Invaders rolled into a tube.)
> How to workaround too little RAM problem? Let’s load pixel graphics if player is far away and call it mip mapping:
MIP mapping is not primarily a RAM-saving device. It's a way of precomputing shrunken versions of texture images for when they appear in the distance, so that they don't have to be antialiased on the fly.
Thus it is a CPU-saving device, and it requires extra storage. However, not much more! The half-size texture needs only 25% more storage, and the quarter-size another 6.25% or so; the whole chain converges to a third extra.
The renderer has to consider the depth and index into the appropriate scale of the texture. I don't remember all the details, but I seem to remember that it's dynamic: when the textured surface is in deep perspective, the distant pixels are derived from the smaller-scale data, while the near part comes from the larger scale.
It could save RAM if we avoid loading the detailed textures for objects that are all far away, i.e. lazily load the detailed texture as an object is approached, and free the memory when it recedes again.
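A sketch of the dynamic selection described above: the level is roughly log2 of the texel footprint, i.e. how many texels of the base texture map onto one screen pixel at that depth. The function name and signature are mine for illustration, not a real graphics API:

```cpp
#include <algorithm>
#include <cmath>

// Pick a mip level from the texel footprint of one screen pixel.
// texelsPerPixel ~ 1 means the base texture matches screen resolution;
// larger values mean the surface is farther away (or at a steep angle),
// so a smaller, higher-index mip level is sampled.
int mipLevel(double texelsPerPixel, int maxLevel) {
    double level = std::floor(std::log2(std::max(texelsPerPixel, 1.0)));
    return std::clamp(static_cast<int>(level), 0, maxLevel);
}
```

Real hardware computes the footprint per pixel from screen-space derivatives of the texture coordinates, which is how near and far parts of the same surface end up reading different levels.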
- it's a static perspective view, less complex than sprite based 2D
- it's implemented on a vector screen, where the cathode ray gun is drawing the lines instead of the software, not a bitmapped one
- almost everything can be precalculated and only vertex locations need be stored
Of course it's still a creative game, but not very hard to implement.
It should be noted that Battlezone wasn't actual 3D either, since everything happened on a fixed plane, but it had dynamic rotating shapes and the like, so it was much more advanced.
What game developers are good at is not making slow programs. Everything else, from point of sale to web pages to phone apps, gets slower and slower.
That isn't true. Many games, even really popular/enjoyable games, have really poor performance for what they do. And much of the performance that does exist is thanks to the people who work on the engine and not the game itself.
I'm not sure the average game-dev is any better at writing performant software than the average non-game-dev.
My coworker, bless him, who comes from a background in the financial industry, once added a feature that involved making copies of 5 vectors of strings. Both the vectors and their strings were regenerated every single tick from C strings that permanently resided in memory, by way of multiple concatenations for each string. Profiling informed me we were now making some hundreds upon hundreds (sometimes thousands) of (re)allocations every 16.7ms. I diplomatically pointed out the wastefulness and he (partially) fixed the issue by changing 2/5 of the getter functions to return a reference. I could tell he was quite annoyed with me, and didn't really understand the issue. It's just allocating memory, how bad could that be?
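For anyone who hasn't seen this in the wild, here's the shape of the change (a made-up reduction of the situation, not the actual code):

```cpp
#include <string>
#include <vector>

struct Labels {
    std::vector<std::string> values; // built once, stable across ticks

    // Before: returns a copy, so every call reallocates the vector
    // and every string inside it.
    std::vector<std::string> byValue() const { return values; }

    // After: returns a reference; zero allocations on the hot path.
    const std::vector<std::string>& byRef() const { return values; }
};
```

With five such getters called every 16.7ms tick, the by-value versions alone account for hundreds of allocations per frame.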
https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
Second of all, all these difficult things are difficult things that game ENGINE developers are responsible for tackling nowadays.
In the 90s and early 2000s, most of the people building engines were laying down games at the same time, which is very much akin to building a railroad while running the train. For most folks (like me; I'm developing an indie VR game on the side) this is all handled by Unreal, Unity, or Godot.
I've tried to build a simple game engine in the past... you wind up reinventing the wheel a lot, and things that sound very easy can be very hard or impossible.
I also like the “uncrop” photo enhancement from Red Dwarf.
Bonus (these appear easy to do, but are actually hard): micro-optimizations that target a specific platform, i.e. a combination of CPU, compiler, and OS factors.
Porting software: There are tons of subtle unportable things and assumptions that hide under the surface.
Rewriting X in Y: lots of footguns, and, like inexact translation between natural languages, it can cause subtle bugs.
Makefiles and shell scripts: they look so simple, but try debugging a moderately complex Makefile or shell script. Bash is an especially sinister example.
Anything that talks to a closed/blackbox API or closed-source library: you depend on code that you generally can't debug.
Maintaining software in general as it grows in complexity.
Honestly, grab a random human and sit them next to you while you code a bubble sort algorithm or a prime number finder to see what I mean.
I started reading this story, got about halfway down, and the story stopped and I got this big centered H2 reading "Create an account to read the full story."
Fine, I want to read the rest. I create an account. I go back to the story. Now the story still stops at the same place but the big centered H2 has changed to "Jorgi, read this story from Tom Smykowski — and all the best stories on Medium" and there's an "Upgrade" button. That was not our deal, Medium! You said I could create an account to read the full story.
So I click the button and it takes me to the signup page, of course on the "pay annually tab," with a small white button on a white background that reads 'monthly' for other options. Fuck you, Medium.
I can only imagine Substack will end up the same in a few years' time. These platforms get all hyped up and end up drowned in spam and low-quality stuff when they go mainstream.