I built multiple iOS apps and went through two startup acquisitions with my M1 MBA as my primary computer, as a developer. And the Neo is better than the M1 MBA. I edited my 30-45 minute 4K race videos in FCP on that Air just fine.
Before I was a professional software developer, I used a scrawny second-hand laptop with a Norwegian keyboard (I'm not Norwegian) because that was what I could afford: https://i.imgur.com/1NRIZrg.jpeg
This was the computer I developed PHP backends and jQuery frontends on, and where I published a bunch of projects that eventually led to my first software development job, at a startup, and to discovering HN pretty much my first day on the job :)
The actual hardware you use seems to me like it matters the least, when it comes to actually being able to do things.
I have a computer that benchmarks literally 10x faster and with 32x the RAM, but I miss that little thing that helped me build my career from nothing.
After all, the actual server ran the code, I just needed text editors, terminal windows, and web browsers.
But I'm planning a big jump: soon I'll switch to a 2012 Mac Mini as my primary Linux server!
Running neovim on termux was fine. Developing elixir was no problem, the test suite took 5s on my phone, and takes 1s on my laptop. Rust and cargo compiling was slow enough that I didn't really enjoy it though.
Meant that I could pack up instantly and have an agent run review workflows from my pocket while I was out and about, without noticing a big battery hit.
Not sure of the difference other than weight, but I wasn't carrying it day to day when I could leave it in my hotel room.
[1] https://www.zdnet.com/article/how-to-use-the-new-linux-termi...
(Maybe the fans sometimes sound like they're a jet engine taking off…)
Finally just put an order in for a new 16" MBP M5 Max with 48GB memory only because it looks like they're going to stop supporting the Intel stuff this year and no more software updates. It'll probably be obsolete in six months with the rate things are going, but I've been averaging seven years between upgrades so it should be good!
So, the M5 with 48 GB of RAM will be amazing.
It's still handling the load well, though at times the fans get quite loud, especially with all the background processes and VM setups.
Hope to get a new MBP this year, since being on Intel means lots of software won't run on it (the Codex app, for example, won't run on Intel Macs).
I also was using an Intel MacBook Pro with 16GB at the time. Doing the same thing there was much smoother and snappier. On the whole, it actually made me want to just use that laptop instead since it "felt" nicer. (This isn't measuring build times or anything like that, just snappiness of the OS.)
8gb has ALWAYS been fine in Apple Silicon Mac OS. RAM usage on a fresh boot is a meaningless statistic (unused RAM is wasted RAM). And they're just plain capable!
The worst corner they cut is no keyboard backlighting. That saves them what, $1 of BoM per MacBook Neo? Especially since they now have to stand up an entirely new keyboard production line instead of just piggybacking on the Air's.
No, 8GB, compressed or not, was not enough for macOS when the M1 was released, even for simple Outlook, web browser, Excel-type workflows.
After 3-4 hours of work, the window manager process itself is consuming gigabytes of memory, not even counting any browser or Electron apps.
My M1 Mac mini was choking up so much that I had to trade it in. That was back in 2021. Today apps are even more bloated.
I am jealous of my wife’s 13” M5 iPad Pro though, that oled screen is gorgeous, a wonder of modern engineering.
Those apps don’t need every single byte of memory you see in Activity Monitor to be active in RAM all of the time. The OS swaps out unused parts to the very fast SSD. If you push it so far that active pages are constantly being swapped out as apps compete then you start to notice, but the threshold for that is a lot higher than HN comments seem to think.
Can we please just move on? Maybe get your hardware checked if you’re legitimately still having these issues.
I developed some work that keeps tens of thousands of people alive every day on a $100 Acer netbook almost 15 years ago. The tools are always there, I don't think anyone thinks the work is actually impossible to do on a limited machine.
Any modern Mac is more than capable. I had the baseline M1 MacBook Air that I did work on as well, just to see how that fared. Much better than this machine: 10x the price, but more than 10x the performance. This one is great as a "I don't mind if I break it or lose it" device.
Also, a browser sneeze takes more than 4 GB.
The other I just owned the front end infra and was on the growth team. The rest of the folks were the stars on that one.
Edit: I brought that up because I guess I don't know any more "real work" than that, ha. What is 'real work'?
But... you can do the same exercise with a $350 Windows thing. Everyone knows you can do "real dev work" on it, because "real dev work" hasn't been a performance case for like a decade now, and anyone who says otherwise is just a snob wanting an excuse to expense a $4k designer fashion accessory.
IMHO the important questions to answer are business side: will this displace sales of $350 windows machines or not, and (critically) will it displace sales of $1.3k Airs?
HN always wants to talk about the technical stuff, but the technical stuff here isn't really interesting. The MacBook Neo is indeed the best laptop you can get for $6-700.
But that's a weird price point in the market right now, as it underperforms the $1k "business laptops" (to avoid cannibalizing Air sales) and sits well above the "value laptop" price range.
And the whole shittiness of the experience will distract you from attempting real work: the horrible touchpad, the bad screen, the forced Windows updates when you're trying to start the machine to do something urgent, ads in Windows, the lack of proper programmability of Windows (unless you use WSL)... Add the fact that the toy is likely to break in a year or two. These issues exist on far more expensive Windows machines; how much more so on a $350 one.
Leaving Windows machines and the OS behind for more than a decade has been a continuing breath of fresh air. I have several issues with Apple devices and macOS (as I have with Linux too), but on the whole they are far better than Windows. The only things about Windows that I miss on Macs are the file explorer and window management; not sure why Apple stubbornly refuses to copy those.
It is completely feasible, and the battery life is amazing, even when running a whole pile of Kubernetes services.
Using older hardware has helped me not accidentally build slow stuff. Although at some point I gotta upgrade and just add more performance tests :) but nothing replaces feeling it yourself.
I just got an M5 Max with 128 GB of RAM specifically to run local LLMs.
But damn I like that design
I used to think this way about Apple, and it's jarring to read with it 10-15 years behind me.
It reads as aggro and oddly tribalistic / sports fan-y.
(What people? Who thinks it's slower than an M1? Who thinks you can't code on it? What will your coding on it prove to these people that the benchmarks they read can't? With all that, why get so invested that you're buying a machine you don't want to use day to day? What does "handicapped" mean in this context?)
Only sharing b/c I never understood why people would roll their eyes at me, and apparently I finally reached my own graybeard moment, and I am now rolling my eyes at both of my selves :)
Having said that, DuckDB is awesome. I recently ported a 20-year-old Python app to modern Python. I made the backend swappable, Polars or DuckDB. Got a 40-80x speed improvement. Took 2 days.
Claude suggested to just use DuckDB instead and indeed, it made short work of it.
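For flavor, that kind of swap is often just pushing a row-by-row Python loop down into a single query. A hypothetical sketch (file, table, and column names invented — not the actual app's schema):

```sql
-- Hypothetical: replace a hand-rolled Python aggregation with one DuckDB query.
-- read_csv_auto lets DuckDB scan the source file directly, no load step needed.
SELECT symbol,
       date_trunc('day', ts) AS day,
       avg(price)            AS avg_price,
       count(*)              AS trades
FROM read_csv_auto('trades.csv')
GROUP BY symbol, day
ORDER BY symbol, day;
```

The vectorized engine does in one pass what the old interpreter-bound loop did row by row, which is where speedups in that 40-80x range tend to come from.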
Outside of the king's ransom you now have to pay for it, you can fit 99% of problems into RAM.
I benchmarked I4i at ~2GB/s read, so let's say I7i gets 3GB/s. The Verge benchmarked the 256GB Neo at 1.7GB/s read, and I'd expect the 512GB SSD to be faster than that.
Of course, an application specific workload will have its own characteristics, but this has to be a win for a $700 device.
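If you want a comparable rough number on your own machine, a crude sequential-read check looks like this (illustrative only — `fio` gives far more trustworthy numbers, and the page cache will inflate the read unless you flush it first, e.g. `sudo purge` on macOS):

```shell
# Crude sequential-read sanity check; dd prints throughput when it finishes.
dd if=/dev/zero of=./ddtest bs=1048576 count=2048   # create a 2 GiB scratch file
dd if=./ddtest of=/dev/null bs=1048576              # read it back
rm ./ddtest
```

Cache effects aside, this at least puts laptop SSDs and instance-attached NVMe on the same crude yardstick.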
It's hard to find a comparable AWS instance, and any general comparison is meaningless because everybody is looking at different aspects of performance and convenience. The cheapest I* is $125/mo on-demand, $55/mo if you pay for three years up front, $30/mo if you can work with spot instances. i8g.large is 468GB NVMe, 16GB, 2 vCPUs (proper cores on graviton instances, Intel/AMD instance headline numbers include hyperthreading).
> Here's the thing: if you are running Big Data workloads on your laptop every day, you probably shouldn't get the MacBook Neo.
> All that said, if you run DuckDB in the cloud and primarily use your laptop as a client, this is a great device
It’s staggering. Jaw dropping. Bandwidth is even worse, like 10000X markup.
Yet cloud is how we do things. There’s a generation or maybe two now of developers who know nothing but cloud SaaS.
I watched everyone fall for it in real time.
You're either underestimating how big cloud instances can get or overestimating how much it costs to rent a cloud instance that would beat an M1 Max at any multi-core processing.
According to Geekbench, the M1 Max macbook pro has a single-core performance of 2374 and multicore of 12257; AWS's c8i.4xlarge (16 vCPUs) has 2034 and 12807, so relatively equivalent.
That c8i.4xlarge would cost you $246/mo at current spot pricing of $0.3425/hr, which is, what, 20% of the cost of that M1 Max MBP?
As discussed recently in https://news.ycombinator.com/item?id=47291906, Geekbench is underestimating the multi-core performance of very large machines for parallelizable tasks -- the benchmark's performance peaks at around 12x single-core performance. (I might've picked a different benchmark but I couldn't find another benchmark that had results for both the M1 Max and the Xeon Scalable 6 family.)
If your tasks are _not_ like that, then even a mid-range cloud instance like a 64-vCPU c8i.16xlarge (which currently costs $0.95/hour on the spot market) will handily beat the M1 Max, by a factor of about 4. The largest cloud instances from AWS have 896 vCPUs, so I'd expect they'd outperform the M1 Max by about 50-to-1 for trivially parallelizable workloads. Even if you stay away from the exotic instances like the `u7i-12tb.224xlarge` and stick to the standard c/m/r families, the c8i.96xlarge has 384 vCPUs (so at least 24x the compute power of that M1 Max) and costs $3.76/hr.
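To make "trivially parallelizable" concrete, here's a toy sketch (an invented workload, nothing like what Geekbench actually measures) of the scaling assumption: independent CPU-bound chunks spread over more worker processes:

```python
# Toy scaling check for a trivially parallel workload (illustrative only).
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n: int) -> int:
    # CPU-bound toy workload: sum of squares below n
    total = 0
    for i in range(n):
        total += i * i
    return total

def run(workers: int, chunks: int = 8, n: int = 200_000) -> float:
    # Wall-clock seconds to process `chunks` independent units with `workers` processes.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4):
        print(f"{w} workers: {run(w):.3f}s")
```

For work shaped like this, throughput tracks core count almost linearly, which is why huge-vCPU instances look so lopsided against any laptop; workloads with serial sections or shared state scale far worse.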
If your application won't ever require more resources than a single server or two, then you are better off looking at other alternatives.
Where I live, our government-funded clam research programs are mostly shutting down. Very sad.
Did a PoC on an AWS Lambda for data that was gzipped in an S3 bucket.
It was able to replace about 400 C# LoC with about 10 lines.
Amazing little bit of kit.
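I don't know the poster's exact query, but the shape is usually something like this (bucket path and columns invented) — DuckDB's `httpfs` extension reads S3 directly, and `read_csv_auto` decompresses `.gz` on the fly:

```sql
INSTALL httpfs;
LOAD httpfs;
-- Credentials typically come from the Lambda's IAM role / environment.
SELECT user_id, count(*) AS events
FROM read_csv_auto('s3://my-bucket/logs/2026-01-*.csv.gz')
GROUP BY user_id;
```

All the streaming, decompression, and parsing that the hand-written C# presumably did is handled inside the engine, which is how 400 lines collapse to about 10.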
The terminal and CLI app within ran locally on a smartphone, which was the premise of the experiments within the linked post.
They also weren't comparing a Swift app on an iPhone with their Android run, they were comparing both against "... the system in the research paper that originally introduced vectorized query processing[.]"
I wish more companies would do showcases like this of what kind of load you can expect from commodity-ish hardware.
I'm guessing so many devs started out on 32 GB MacBooks that the Neo seems underpowered. But it wasn't too long ago that 8 GB, 1500 MB/s IO, and that many cores was an elite machine.
I did a lot of dev work on a glorified Eee PC Chromebook when my laptop was damaged. You don't need a lot of RAM to run a terminal.
I'm hoping the Neo resets the baseline testing environment so developers get back to shipping software that doesn't monopolize resources. "Plays nice with others" should be part of the software developer's creed.
- c8gd.4xlarge - this has a single 950 GB NVMe SSD.
- c5ad.4xlarge - this has 2 x 300 GB disks, which I put in a RAID 0 array. There are no c6ad.4xlarge instances, so this is the closest NVMe-enabled approximation to ClickBench's most popular choice, c6a.4xlarge.
I also added results from my local dev machine, a MacBook M1 Max with 64 GB RAM and 10 cores.
Here are the results:
| machine | cold_run_avg | cold_run_sum | hot_run_avg | hot_run_sum |
| -------------- | -----------: | -----------: | ----------: | ----------: |
| macbook m1 max | 0.48 | 20.68 | 0.43 | 18.60 |
| macbook neo | 1.39 | 59.73 | 1.26 | 54.27 |
| c8gd.4xlarge | 0.51 | 22.04 | 0.24 | 10.36 |
| c5ad.4xlarge | 1.29 | 54.14 | 0.55 | 22.91 |
| c6a.4xlarge | 3.37 | 145.08 | 1.11 | 47.86 |
| c8g.metal-48xl | 3.95 | 169.67 | 0.10 | 4.35 |
On the cold run, the MacBook is on par with the c5ad.4xlarge, while the c8gd.4xlarge is about 2.5x faster. I know this is moving the goalposts; however, it's quite interesting that both of these cloud instances with instance-attached storage are still outperformed by the M1 Max (which is 4+ years old) on the cold run. And they would quite likely lose against the latest MacBook Pro with the M5 Pro/Max on both the cold and the hot runs. But that's an experiment for another day.
Props for identifying the issue immediately, but armed with that knowledge, why not redo the benchmark on a different instance type that has local storage? E.g. why not try a `c8id.2xlarge` or `c8id.4xlarge` (which bracket the `c6a.4xlarge`'s cost)?
That couldn't be more accurate
The Neo is neat and for someone who mostly does surfing and standard office work kind of stuff I suspect it’s a pretty great little laptop for way less than Apple usually charges.
But it’s not going to compete with an M5 anything.
the laptop is gonna have some local code, maybe a lot, but if I'm doing legitimate "big data" that data is living i the cloud somewhere, and the laptop is just my interface.
My good old LG Gram (from 2017? 2015? don't even remember) already had 24 GB of RAM. That was 10 years ago.
A decade later I can't see myself using a laptop with a third of that memory.
If it didn't, Apple has other laptops today with more RAM.
Or am I missing something?
I ran TPC-DS SF300 now on the c6a.4xlarge. It turns out that it's still quite limited by the EBS disk's IO: while 32 GB memory is much more than 8 GB, DuckDB needs to spill to disk a lot and this shows on the runtimes. Running all 99 queries took 37 minutes, so about half of the MacBook's 79 minutes.
> Command being timed: "duckdb tpcds-sf300.db -f bench.sql"
> Percent of CPU this job got: 250%
> Elapsed (wall clock) time (h:mm:ss or m:ss): 37:00.96
> Maximum resident set size (kbytes): 25559652
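For what it's worth, the spill behavior can be steered a bit. These are real DuckDB settings, though the values here are just illustrative for a 32 GB box with local NVMe (the temp path is made up):

```sql
SET memory_limit = '24GB';                    -- leave headroom for the OS
SET temp_directory = '/mnt/nvme/duckdb_tmp';  -- spill to the fastest disk available
SET threads = 16;
```

Pointing `temp_directory` at instance-attached NVMe instead of EBS is the lever most likely to close some of that gap.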
Their numbers are a bit outdated: M5 MacBook Pro SSDs are literally 5x this speed. It's wild.
That's decently fast but not especially remarkable, most Gen4 NVMe drives can hit 6-7GB/sec.
https://www.apple.com/newsroom/2026/03/apple-introduces-macb...
"The new MacBook Pro delivers up to 2x faster read/write performance compared to the previous generation reaching speeds of up to 14.5GB/s..."
That's not a TL;DR, that's just the subheader.
:shrug: as to whether that makes the laptop or the giant instance the better place to do one's work…
2025-09-08 : "Big Data on the Move: DuckDB on the Framework Laptop 13"
"TL;DR: We put DuckDB through its paces on a 12-core ultrabook with 128 GB RAM, running TPC-H queries up to SF10,000."
https://duckdb.org/2025/09/08/duckdb-on-the-framework-laptop...
With I/O streaming and efficient transformation I do big data on my consumer PC and good old cheap HDDs just fine.
I’m really surprised just how competitive it was in their benchmark. I was expecting “sure it doesn’t compete but it works and you can use it”, not “it beat an Amazon instance, though not a really powerful one”.
I guess they’re using a different definition?
very much so…
You have phones that are faster than cloud VMs of the past. You can use bare metal servers with up to 344 cores and 16TB of ram.
I used to share your definition too, but I now say that if it doesn’t open in Microsoft Excel, it’s big data.
Google has big data. You are not google.
I just thought it was neat. It’s a phone chip, we’ve never been able to do stuff like this on an Apple phone chip before. No one was porting this to the iPhone to run there.
In my mind this is purely a curiosity article, and I like that.
There is always a trade-off of cost/convenience/power, and some folks are going to end up on the Neo end of the spectrum.
It would be a surprise if more than 0.1% of Macbook Neo users have even heard of DuckDB.
Which means that this article is probably just riding the hype.
Also, there are countless reports of 8GB M1 MacBook Airs bricked because the SSD used up its write cycles.
No.
>Do I reject a world where all of the above is necessary to realize value from an entry-level MacBook?
In theory, yes.