NVMe is around two orders of magnitude slower than DDR3 RAM, so as soon as your heap is tapped out you'll hit a performance wall.
As I sit here and type this, my 10.15.7 Catalina desktop is sitting at 12.25 GB of used memory with 2 Edge tabs and an open Citrix session. Any actual, professional use will put you way north of even 16 GB.
There's no need for personal attacks.
> as soon as your heap is tapped out you'll hit a performance wall.
Yes, probably. Two points though:
1. My comment was that _if_ SSD performance was comparable to RAM, there would be no need for the latter.
2. I have very rarely swapped on my 16 GB M1 Air, and when I did I only noticed later when looking back at graphs. I'm sure there was a performance hit but I never felt it.
> Any actual, professional use will put you way north of even 16gb.
I do plenty of actual, professional use on my 16 GB M1 Air. Right now I have 3 VS Code windows open compiling Go code, running acceptance tests and whatnot. I also have Mail, Safari (with tens of tabs) and Firefox (with >100 tabs) open. A few minutes ago I also had Slack open on it. I regularly start iTerm to do terminal tasks.
You must have an Intel machine. I had a powerful fully specced 32 GB i9 16" MBP. This cheap and humble 16 GB passively cooled M1 blows it right out of the water in every single aspect. It's even better at running Intel Docker images!
Well, I use my M1 (16 GB) for heavy video editing and music sessions, plus professional programming, with IDEA, VMs, and so on. And I don't seem to ever need even close to 16 GB, much less "way north", nor have I ever seen it slow down.
> As I sit here and type this, my 10.15.7 Catalina desktop is sitting at 12.25Gb of used memory with 2 Edge tabs and an open Citrix session. Any actual, professional use will put you way north of even 16gb.
Maybe it's time to come over to M1 and 12.4?
This is the very type of bamboozlement I was alluding to - instruction set and/or silicon architecture doesn't actually change the amount of data you use.
Citrix will still need to buffer the same (compressed?) 4K worth of pixels and Edge will still need to load up the full DOM, cache all the sources, stand up a sandbox with a JavaScript virtual machine etc.
There's nothing magical there... a single frame of 16 bit 8K RAW will allocate exactly the same amount of heap on M1, M2, Intel or anything else for that matter.
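To put a rough number on that, here's a back-of-the-envelope sketch. The resolution (8K UHD at 7680x4320) and the channel counts are my assumptions, not from the thread; the point is that the arithmetic is the same regardless of CPU architecture:

```python
# Size of one uncompressed 16-bit 8K frame: identical on M1, M2, Intel...
# Assumptions: 8K UHD = 7680x4320; single-channel Bayer RAW vs. 3-channel RGB.
WIDTH, HEIGHT = 7680, 4320
BYTES_PER_SAMPLE = 2  # 16 bits per sample

bayer_raw = WIDTH * HEIGHT * 1 * BYTES_PER_SAMPLE  # one sample per pixel
rgb_16bit = WIDTH * HEIGHT * 3 * BYTES_PER_SAMPLE  # three samples per pixel

print(f"16-bit Bayer RAW frame: {bayer_raw / 2**20:.1f} MiB")
print(f"16-bit RGB frame:       {rgb_16bit / 2**20:.1f} MiB")
```

Either way you're looking at tens to hundreds of MiB per frame of heap, on any silicon.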
I'm now playing a 4K movie on Plex. It's using a little over 1 GB of RAM for that. After I started it, my computer swapped ~250 MB to the SSD. Firefox is still snappy. Safari is still running normally. VS Code is still doing its thing. I even started a `brew update` on iTerm, and I have the App Store installing a couple apps. Mail still runs fine, sends and receives, and I can switch between mail accounts and messages.
So, you may be technically right about the memory usage and the need for swapping. What you're missing is the fact that it doesn't hurt the user experience.
This is not marketing or bamboozlement. This is me on the same laptop I'm typing this answer.
You should probably do a deep inspection of your beliefs and stop denying the actual, practical experience of many people who are responding to you with real world experience.
Oh, but it does when it has a hardware memory compression engine. The very different GPU design also means it can use less (or more) memory in different situations.
Even more so for other pipelines, involving CPU+GPU.
macOS also has memory compression, assisted by the CPU.
And there are other ways an OS can use to keep memory usage lower given a different CPU architecture...
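As a rough illustration of why compressed memory stretches effective RAM: idle and zeroed pages compress extremely well. This sketch uses zlib purely as a stand-in; macOS uses its own compressor (a WKdm-style algorithm), and the ratios here are only illustrative:

```python
# Illustration only: how well typical "cold" page contents compress.
# zlib is a stand-in here, not what macOS actually uses.
import zlib

PAGE = 16384  # Apple silicon uses 16 KiB pages

mostly_zero = bytes(PAGE)                                # an untouched page
repetitive = (b"GET /index.html HTTP/1.1\r\n" * 700)[:PAGE]  # redundant data

for name, page in [("zeroed page", mostly_zero), ("repetitive page", repetitive)]:
    compressed = len(zlib.compress(page))
    print(f"{name}: {PAGE} -> {compressed} bytes ({PAGE / compressed:.0f}x)")
```

When many resident pages compress like this, the same physical RAM holds considerably more logical data, which is one reason raw capacity comparisons across architectures mislead.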
No. Dual channel DDR3 went up to about 18 GB/s read and 14 GB/s write. [1]
The latest NVMe PCIe 5.0 SSDs are about 13 GB/s read and 12 GB/s write. [2]
Apple SSDs are about half that now, but most DDR3 users didn't have the highest-clocked dual-channel RAM either. So roughly a factor of 2 or so at best, about 50x less than the claimed two orders of magnitude :-)
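Plugging in the read figures above (the "about half" Apple SSD number is from this comment, not a measured spec):

```python
# Read-bandwidth ratios implied by the numbers cited above (GB/s).
ddr3_dual = 18.0           # peak dual-channel DDR3 read, per [1]
nvme_pcie5 = 13.0          # PCIe 5.0 NVMe read, per [2]
apple_ssd = nvme_pcie5 / 2 # "about half that", per this comment

print(f"DDR3 vs PCIe 5.0 NVMe: {ddr3_dual / nvme_pcie5:.1f}x")
print(f"DDR3 vs Apple SSD:     {ddr3_dual / apple_ssd:.1f}x")
```

Either way the gap is single-digit multiples, nowhere near 100x.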
[1] https://www.anandtech.com/show/2792/5
[2] https://www.tweaktown.com/news/86395/apacer-is-first-with-pc...
The M1/M2 Apple hardware specifically seems to be around an order of magnitude apart from what I've seen (LPDDR4X at 60-70 GB/s vs ~7 GB/s for the NVMe, or thereabouts).
The obvious observation here is that the fab yields for high-memory Apple silicon must not be all that great, which is why they're mostly shipping 8 and 16 GB versions.
You were wrong, it's ok...welcome to the SSD future :-)
I'm sitting here on an M1 MacBook Pro with 64GB of memory. I'm running Chrome with a few dozen tabs open, Slack, VS Code, and Terminal. Apparently 32GB of memory is being "used".
Do you believe I would experience noticeable performance loss if I was running 16GB of memory?
Over 22GB of that is cache. I'm sure some of that is useful, but the overwhelming majority of it is just being used because the memory is freely available, so why not? If it improves performance, it's not by anything I've ever managed to notice between my 16GB work machine and 64GB personal one (which are used for largely similar tasks, though I'd say the work one is a bit more heavily stressed).
Sure some cleanroom laboratory benchmark will say the 64GB version is faster. And for some workloads I’m sure it’s a bigger deal. But a human just isn’t going to perceive much practical benefit from a full Chrome restart being 5% faster or whatever.
Plenty of people will be more than happy with 8GB of RAM on an M1. I had a base model M1 Mac Mini and it worked great even for gaming.
Obviously if you're running VMs or editing video or doing any other memory-intense workload you're going to need more memory. But for anyone who doesn't need an absurd (>16 GB) amount of memory (college students, many programmers, those who use computers just for web browsing and Netflix), 8GB or 16GB will feel snappy.