You're right, but in a production deployment, that extra RAM might mean the difference between a close call you patch the next day and an all-hands emergency that pulls in devops and engineers together during peak usage.
source: been there
(and to be honest, the way Linux acts on OOMs is quite debatable)
If you don't have any more disk space for swap, or memory pressure gets too high, you get the "Your system has run out of application memory" dialog with a list of applications you can force quit; macOS leaves it up to the user to decide what to kill instead of the system choosing automatically.
Going to also echo the comment that this isn't an RTS
OK, hear me out over here:
We are not in an RTS.
Edit: in real-world settings, lacking redundancy tends to make systems incredibly fragile, in a way that just rarely matters in an RTS. Which we are _not in_.
And as for the money analogy, what's the idea there, that memory grows interest? Or that it's better to put your money in the bank and leave it there, as opposed to buying assets or stocks, and of course paying for food, rent, and stuff you enjoy?
1. Store your money in a 0% interest account (leave RAM totally unused), or put it in an account that actually generates some interest (fill the RAM with something, anything that might be useful).
2. Store your money buried in your backyard, or put it in a bank account? If you want to actually use your money, it's already there in the bank, just like data that's already loaded into RAM.
Imperfect analogies because money is fungible. In either case though, money getting spent day-to-day (e.g. the memory being used by running programs) is separate.
(Weird side question: are you by any chance the Jason Farnon who wrote IBFT?)
Yes, generally. That's the entire idea behind the stock market.
For slightly different reasons. My game drive is using about 900 GB out of 953 GB usable space - because while I have a fast connection, it's nicer to just have stuff available.
Same for some projects where we need to interface with cloud APIs to fetch data - even though the services are available and we could pull some of the data on demand, sometimes it's nicer to just have a 10 TB drive and pull larger datasets (like satellite imagery) locally, so that if you need to do something with them in a few weeks, you won't have to wait an hour.
The memory operation itself is O(1), around 100 ns, where at a certain point we're doing full RAM fetches each time because the odds of the data being in CPU cache are low?
Typically O notation is an upper bound, and it holds well there.
That said, due to cache hits, the lower bound is much lower than that.
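To put rough numbers on that, here's a minimal, unscientific C++ sketch (sizes, iteration counts, and the "few ns vs ~100 ns" framing are assumptions about typical hardware, not measurements from this thread): chasing pointers through a random cycle keeps each access O(1), but the constant climbs from cache-hit latency toward DRAM latency as the working set outgrows the caches.

    // Sketch: average time per dependent random access vs working-set size.
    // The access stays O(1); only the constant changes as cache hit rates drop.
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937_64 rng(42);
        // Working sets from 32 KiB (fits in L1) up to 128 MiB (mostly DRAM).
        for (std::size_t bytes = 32 * 1024; bytes <= 256u * 1024 * 1024; bytes *= 4) {
            std::size_t n = bytes / sizeof(std::size_t);

            // Link the elements into one random cycle so every load depends on
            // the previous one (pointer chasing defeats prefetching/overlap).
            std::vector<std::size_t> order(n);
            std::iota(order.begin(), order.end(), 0);
            std::shuffle(order.begin(), order.end(), rng);
            std::vector<std::size_t> next(n);
            for (std::size_t k = 0; k + 1 < n; ++k) next[order[k]] = order[k + 1];
            next[order[n - 1]] = order[0];

            std::size_t idx = 0;
            const std::size_t iters = 10'000'000;
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < iters; ++i) idx = next[idx];
            auto t1 = std::chrono::steady_clock::now();

            double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
            // Print idx so the chase can't be optimized away.
            std::printf("%9zu KiB working set: %6.1f ns/access (idx=%zu)\n",
                        bytes / 1024, ns, idx);
        }
    }

On a typical desktop you'd expect the per-access figure to step up at roughly the L1, L2, and L3 boundaries and then flatten out near main-memory latency, which is the cache-vs-RAM gap being discussed here.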
You see similar performance degradation if you iterate over a two-dimensional array in the wrong index order, with the innermost loop stepping through the slow-varying index.
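A quick C++ sketch of that effect (matrix size and the "several times slower" expectation are assumptions about typical hardware): summing the same row-major matrix row-by-row vs column-by-column does identical arithmetic, but the column-major pass touches a new cache line on nearly every access.

    // Sketch: same O(n*m) work, very different cache behavior.
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Sum a row-major matrix either row-by-row (cache friendly) or
    // column-by-column (cache hostile).
    double sum(const std::vector<double>& a, std::size_t n, std::size_t m, bool row_major) {
        double s = 0.0;
        if (row_major) {
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < m; ++j)
                    s += a[i * m + j];   // consecutive addresses
        } else {
            for (std::size_t j = 0; j < m; ++j)
                for (std::size_t i = 0; i < n; ++i)
                    s += a[i * m + j];   // stride of m doubles per step
        }
        return s;
    }

    int main() {
        const std::size_t n = 4096, m = 4096;   // ~128 MiB, far bigger than cache
        std::vector<double> a(n * m, 1.0);
        const bool orders[] = {true, false};
        for (bool row_major : orders) {
            auto t0 = std::chrono::steady_clock::now();
            double s = sum(a, n, m, row_major);
            auto t1 = std::chrono::steady_clock::now();
            std::printf("%s: %.0f ms (sum=%.0f)\n",
                        row_major ? "row-major   " : "column-major",
                        std::chrono::duration<double, std::milli>(t1 - t0).count(), s);
        }
    }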
Their experimental results would in fact be a flat line IF they could disable all the CPU caches, even though performance would be slow.
This is an incorrect conclusion to make from the link you posted in the context of this discussion. That post is a very long-winded way of saying that the average speed of addressing N elements depends on N and the size of the caches, which isn't news to anyone. Key word: addressing.
And no, the article you linked is about caching, not RAM access. Hardware-wise, it doesn't matter what you have in the cells; access latency is the same. There will be some degradation with the number of read/write cycles, but that is beside the point.