- I already do cross-arch development day in and day out between x86 and ARM, and have only run into hard blockers on a library or tool a handful of times. The fix was generally straightforward: either use an ARM-compatible alternative, or cross-compile it myself.
- We've done this many times before and it's not that bad. I know I'm not the only one here old enough to remember the days of heterogeneous fleets across PPC, SPARC, and x86. Or even more recently: different x86 extension sets across chip manufacturers.
In short: I feel Apple’s consumer-oriented direction is starting to be at odds with what it needs to do to remain a compelling general development platform.
Remember that macOS became a favourite for web-application development only around 12-13 years ago (prior to that it was seen as an OS for creative types), because Apple was selling nice hardware with an equally nice Unix-family OS and a compelling desktop experience. Take a look at typical Linux desktop distros from around the same time: visual eyesores, and incompatible with most laptops thanks to OEM driver issues. Apple wasn’t specifically targeting software developers at all - it was even showing ominous signs of disinterest by discontinuing its X Window server and going back on its promise of establishing Java as a pillar of the OS.
With the move to ARM on laptops I think Apple will just lock down the bootloader and won’t look back.
What’s funny now is that Windows 10’s WSL, Windows Terminal, Docker support, etc. are suddenly making Microsoft look good as an OS vendor for writing code for non-Microsoft platforms. And at least with a Windows laptop - even an ARM Windows laptop - you can tinker with the bootloader and fire up Slackware if you really want to.
Interestingly enough - for personal hacks (mostly cross-compiling Golang to ARM, natch) I'm actually using WSL lately, and it's definitely good enough. Not perfect, but nothing much is.
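For anyone who hasn't tried it: Go's cross-compilation needs nothing beyond a couple of environment variables. A sketch - the package path below is made up, so the command is printed here rather than executed:

```shell
#!/bin/sh
# Go cross-compiles with environment variables alone - no separate
# toolchain needed. ./cmd/server is a made-up package path, so the
# command is printed rather than run.
CROSS_BUILD='GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o server-arm64 ./cmd/server'
echo "$CROSS_BUILD"
```

The same source tree builds for either architecture just by swapping GOARCH.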
In fact, let's bring back Ultrix/OSF/1, DG-UX, Solaris! (... we can skip HP/UX and SCO because they're truly awful). Note that OpenVMS has apparently already made its x86_64 comeback!
It's true it will cause some pain in the first year or two, but even as a heavy VMWare Fusion user I am really looking forward to the benefits of a vertically integrated laptop.
Xhyve and HyperKit (used by Docker for Mac) use Hypervisor.framework exclusively. The last time I tried Hypervisor.framework on x86-64, the CPU performance was quite fine (it matched VMware/VirtualBox), but I/O was pretty abysmal. Emulating x86-64 on ARM is probably going to be a job for something similar to QEMU.
[1]: https://developer.apple.com/documentation/hypervisor/apple_s...
The very first hypervisors worked using dynamic binary translation. They would run a "guest" operating system by executing a stream of native instructions directly on the host CPU; this stream was dynamically translated so that any privileged operations were removed and trapped in software, letting the hypervisor handle them. Modern hypervisors take advantage of hardware features that allow you to trap privileged operations more efficiently. ARM started adding some of these features in 2013 [1]. In contrast, Intel first added them to the Pentium 4 in 2005 [2]. When such hardware features were first released, they actually were not faster than software translation; these days the hardware-based options are faster, and there is even hardware support for running nested hypervisors. So the first question we need to ask is how hypervisors implemented with ARM's hardware features stack up to Intel's. I have no doubt that at least parity will be reached; I just don't know what the current state of play is. As indicated in my original comment, if I had to bet, at release we won't quite have the performance or feature set you'd be used to from a product like VMWare Fusion.
The second question we need to ask is whether there is a way to efficiently emulate x86-64 processors on ARM hosts. Even better if you can do this while taking advantage of the supporting infrastructure hypervisors already have in terms of emulated devices and other features. QEMU just gets you the CPU and a short list of devices. The full experience of a seamlessly virtualized guest requires a lot more than that. But at the core you are right that it is going to require QEMU-TCG, Rosetta 2, or some similar technology, because the silicon just is not there to execute x86-64.
Exciting stuff! We'll see where it all lands.
[1] https://lwn.net/Articles/557132/
[2] https://en.wikipedia.org/wiki/X86_virtualization#Intel-VT-x
Whether you can stick an emulated x86-64 CPU in there is another matter. It's a much bigger engineering lift, and unless Apple puts some resources into it, it's not clear to me that a virtualization company by itself would want to incur the cost. I hope there is enough demand and that someone will provide it. For me personally, the only reason I run VMWare Fusion is to access x86-only Windows applications for which there is no replacement.
[1] https://blogs.vmware.com/vsphere/2019/10/esxi-on-arm-at-the-....
[2] https://twitter.com/VMwareFusion/status/1275466832002945024
It will all come down to whether this move gives Apple a significant performance and/or battery life advantage. If Apple pulls it off it will force Microsoft and other vendors to respond.
I know that a big complaint about the move is "great, now I'm doing ARM locally and deploying to x86". I think this is a legitimate concern, for now, but I also strongly believe it is inevitable that, within the next decade, deploying to x86 in the Cloud will be as "weird" as ARM would be today. The benefits are way too numerous.
Well, more accurately, I think it'll be a "I'm on Fargate, oh wow, Fargate runs on ARM, I had no idea" kind of thing. Ok, the article outlines why you may need some idea, but come on; we're talking about one line where I'm downloading the x86 version of a dependency instead of an ARM version. That's an easy fix.
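That one-line fix usually boils down to picking the artifact by architecture. A sketch, with a made-up tool name and artifact names:

```shell
#!/bin/sh
# Map the machine architecture to the matching release artifact.
# "mytool" and its artifact names are made up for illustration.
pick_artifact() {
  case "$1" in
    x86_64|amd64)  echo "mytool-linux-amd64.tar.gz" ;;
    aarch64|arm64) echo "mytool-linux-arm64.tar.gz" ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}
pick_artifact "$(uname -m)"
```

Drop that into the Dockerfile's download step and the same image definition builds on either architecture.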
I don't know what this means for open accessibility of hardware. Right now, I could go buy and run locally the Intel Xeon chip powering my app in the cloud; when things move to ARM, it absolutely will be "AWS Graviton" (not sold outside AWS) or "Azure ARM Whatever" (not sold outside Azure). This sucks for accessibility, but, actually, does it? ARM enables the cloud providers to do this; they could never design their own x86 chips. As long as we're all standardized on the same ISA, and the chips generally have the same characteristics, I'm looking forward to a very bright future where vendors are also competing against one another in silicon. And I may not be able to buy an AWS Graviton, but I'm sure (well, hopeful) that one day I'll be able to build an ARM desktop that isn't a Raspberry Pi. AWS will have their chips, Qualcomm has theirs, Apple has theirs, Microsoft and Google have some, and they're all competing against one another.
Ok, maybe this is a pipe dream. But I'm definitely in the short-Intel camp, at least over the long term.
No one does now, and it's not obvious today who would. But if the demand is there then, even with lots of obstacles to overcome, they can and will.
[1] https://www.anandtech.com/show/15737/arm-development-for-the...
It would be better to measure something more related to what Docker users will actually do, like the build time of a common container, and/or the latency of HTTP requests to native vs. emulated containers running on the same machine.
One reason to feel positive about the virtualization issues is that Rosetta 2 provides x86->ARM translation for JITs, which an ARM-based QEMU could perhaps integrate into its own binary translation [2].
[0] https://ianix.com/pub/comparing-dev-random-speed-linux-bsd.h... [1] https://superuser.com/questions/359599/why-is-my-dev-random-... [2] https://developer.apple.com/videos/play/wwdc2020/10686/
I'm glad somebody said something! Yes, the gzip perf test is pretty silly, but it illustrates a significant difference. /dev/urandom throughput on this setup was about 100 MB/s, so it wasn't a bottleneck for this test - the bottleneck was gzip.
Feel free to come up with a performance test yourself! I personally want to know what an HTTP test would look like. You can run an ARM image by running:
docker run -it arm64v8/ubuntu
Unfortunately, Rosetta 2 is not going to help here. Rosetta 2 translates x86 -> ARM, but only for Mac binaries. It does not translate Linux binaries, and cannot reach inside a Docker image. You can probably use qemu-user-static to translate x86-64-only binaries in a Linux container on an ARM machine, too, but I have never tried.
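For the curious, the commonly cited recipe (which, again, I haven't verified) registers qemu's binfmt handlers via a privileged helper container, after which foreign-arch images run under user-mode emulation. The commands are printed rather than executed here, since they need a running Docker daemon:

```shell
#!/bin/sh
# Commonly cited (but untested here) recipe for running amd64 images
# on an ARM host: register qemu-user-static binfmt handlers, then run
# the image. Printed, not executed - it needs a Docker daemon.
REGISTER='docker run --rm --privileged multiarch/qemu-user-static --reset -p yes'
RUN_IMAGE='docker run --rm amd64/ubuntu uname -m'
printf '%s\n%s\n' "$REGISTER" "$RUN_IMAGE"
```

If the registration worked, the second command should report x86_64 even on an aarch64 host.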
> Emulators can run a different architecture between the host and the guest, but simulate the guest operating system at about 5x-10x slowdown.
I think this is a misleading statement because it implies a constant performance overhead for CPU emulation. In reality, performance depends heavily on the workload, even more so with JIT-ed emulators.
Regarding this specific benchmark, I think there are two main factors contributing to the poor performance. The first factor is that the benchmark completes in a short period of time. With JITs, performance tends to improve for long running processes because JITs can cache translation results allowing you to amortize the translation overhead. Another factor is that your benchmark is especially heavy on I/O, meaning that it spends a lot of time translating syscalls instead of running native instructions.
I'd also like to add that CPU emulators sans syscall translation should work for any binaries, even those targeting Linux. It would require a copy of the Linux kernel, but Docker won't work without one anyway.
If changing the base image is all that's needed and both Dockerfiles otherwise assume ubuntu, this should not take too long.
Why did you have to switch from Alpine to Debian? Alpine supports ARM quite happily, and it looks like they're shipping Docker images for ARM (and other architectures, too).
1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw; it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.
2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.
3. You won't be able to use most software because projects don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from source, which would have been better on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are CPU flags the compiler could notice and use to apply optimizations that are not included in the prebuilt binary.
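A minimal improvement over the blind `wget` && `tar xf` pattern is to pin and verify a checksum before unpacking. A sketch - the artifact and its digest below are stand-ins created locally, not a real release:

```shell
#!/bin/sh
# Verify a downloaded artifact against a pinned SHA-256 before unpacking.
# The "artifact" and digest are stand-ins created locally for the demo;
# in practice the digest comes from the project's published checksums.
set -e
printf 'hello\n' > artifact.tar.gz   # stand-in for: wget https://.../artifact.tar.gz
expected='5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03'
actual="$(sha256sum artifact.tar.gz | awk '{print $1}')"
[ "$actual" = "$expected" ] || { echo 'checksum mismatch, refusing to unpack' >&2; exit 1; }
echo 'checksum ok'                   # safe to: tar xf artifact.tar.gz
```

It doesn't replace signature verification, but it at least pins exactly which bytes you're willing to run.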
I'm not an Apple fan and I'm certainly not a fan of cross-architecture development either. I do agree with the general idea behind the article; however, I find it a bit hand-wavy.
I think the argument here is you can't build your own docker images that you use in production and run them on your mac without emulation (unless your production workload also runs on ARM).
> 1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw; it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.
Yup, performance benchmarks are inherently flawed and nobody knows anything right now without the hardware. However, if ARM -> x86 emulation is anything like x86 -> ARM emulation, I would expect a really big performance loss.
> 2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.
Ah, actually I address this in the article, and even run an arm64 image. The short version is that it would be a lot of work to convert your whole backend infrastructure to ARM just because you got a new laptop.
> 3. You won't be able to use most software because projects don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from source, which would have been better on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are CPU flags the compiler could notice and use to apply optimizations that are not included in the prebuilt binary.
Yes, if only everything were built from source! I'm not saying there's no solution, just that the solution would be a lot of work. If the library is obscure enough and the errors are strange enough, it might be so much work as to be effectively impossible for the busy web developer.
My goal was to write a kind of hand-wavy article to get people talking about this problem.
someuser@some-aarch64-machine:~$ docker run arm64v8/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 2.18298 s, 19.2 MB/s
someuser@some-aarch64-machine:~$ docker run amd64/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 6.72324 s, 6.2 MB/s

Even if you are talking about the ARM Cortex-A series, you aren't going to be using the same libraries on the embedded device that you use on a Mac. You'd most likely be using either Linux (a la Raspberry Pi) or an RTOS; either way you have a different compiler and stdlib to use.
Most tools are adopting Linux remote build + remote debug, wherein you ssh in and hook into the compiler and debugger all from the comfort of CLion/VS2019/VSCode.
If they don't have remote build, there is often building locally, with a copy of the root filesystem, using a cross-compiler, then remote deploy + debug. The most annoying part of this process is fixing all the symlinks not supported on NTFS.
Instead of an expensive niche workstation: a $500 dev kit directly representative of your target, but with everything exposed.
The interesting thing is now we need ARM -> x86 remote build or cross-compilation tools, of which I know of none.
I cross-compile Linux kernels daily. I think Clang makes this simpler, but the missing C runtime for cross-compiling userspace executables still leaves much to be desired.
I think Zig is doing interesting things here. Clang should just straight up adopt this, IMO. https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
32 bit on the phones, a real pain in the ass to cross compile, but it's a fun learning experience (I'm just a noob to any programming). I'd love to get paid for this tbh :D
Or, you could use already-extant Debian ARM releases and spend minutes rather than months switching over.
Running stuff on your laptop makes it run slow, get hot, and burn battery. I've considered getting a small x86 or ARM media appliance as a (physically local) remote server for when I can't count on an Internet connection. A media PC costs how much? The big holdup has been the tyranny of choice I'm confronted with. (Suggestions are welcome!)
I think very few people would be surprised if the coming of ARM Macs will, along with AWS's ARM moves (and Microsoft's), drive acceptance and adoption of ARM-based server computing. The mechanism won't be anything formal, just the vague pressure that comes from people wanting their programs and libraries to compile locally.
I would expect about a 5x slowdown running Docker images.
Docker on a Mac utilizes a hypervisor. Hypervisors rely on the host and guest running the same architecture, and are about 1x-2x as slow as running natively.
Since you're running an ARM Mac, these hypervisors can only run ARM Linux. They can't run x86_64 Linux.
What will happen instead? These tools will fall back on emulators.
Most of the software I run in Docker already supports ARM. I'd imagine that a lot of (most of?) us that use Docker do, too.

But also: getting a cloud Windows workstation or an el-cheapo-$500-under-the-desk-when-you-really-need-it Windows machine is probably worth it if you're doing professional work. It would quickly cost much less than the time you lose rebooting to the other OS, in my experience.
However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.
There is real value in proprietary commercial end-user application software. Most companies who make such software couldn't care less about supporting Linux. So if you want to use Linux, you have to use F/OSS alternatives and continue to try convincing everyone that somehow they're better than the commercial options... even when the rest of the world has agreed that they're really not.
The whole incentive structure around F/OSS development really doesn't work for software where the profit motive is in the product itself... Not some nebulous "support contract" that you don't actually need. (Which is a far bigger issue for end-user applications.)
The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.
The hardware used to be pretty nice, but honestly I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar.
Honestly if I'm doing server-side development, I much prefer using my ThinkPad (Ubuntu) over my MacBook. About the only thing I miss is the far superior touchpad on the Macbook. That's it.
With WSL, you basically get an actual Linux userland (with WSL2, I think you get an actual Linux kernel too), not just a Unix that's like Linux but different enough to be annoying. But I'm not sure that will be enough to convince people to move to Windows.
Most devs only want some kind of CLI and POSIX like capabilities.
What makes ARM so exciting? Maybe battery use will be better? Maybe it will be slightly faster? Maybe? There's also been a lot of tuning done for laptop workloads on x86, it's definitely a maybe. I expect the only noticeable changes for most users to be somewhat better battery life, some apps not working, and occasionally having to know which package to download.
But Apple can't just pay up every cross-platform software developer. Smaller developers will have to re-evaluate whether macOS remains a viable target platform for them, which can translate to a dev gain for Linux. The catch is that Linux is in a much better position to translate an influx of developers into an influx of new users: Linux runs on what you have, while macOS requires you to buy Apple hardware.
And let's not forget about macOS as a gaming platform. Linux has made a huge leap forward with Steam Proton. On macOS there's still a ton of games not supporting x86_64 (Catalina), and the situation won't get better with the transition to ARM.
You can't run your x86 docker image on your ARM mac without emulation. You can't run your x86 Windows VM without emulation etc.
Of course there are solutions like using a remote server or a VM in the cloud, but if you're buying a decent machine then you would normally expect to be able to run these things locally.
For example, back in 2017 Cloudflare was basically looking at this as a question of which hardware ran most cost-effectively rather than having engineering heroics first: https://blog.cloudflare.com/arm-takes-wing/
The one use case when this might be viable is targeting AWS Graviton2. Does anybody know if you can run an emulated Graviton2 on ARM Mac?
Things like the Pinebook Pro (and hopefully more Linux ARM devices) will keep pushing this further.
Only the host OS is going to have the right drivers for the trackpad, wi-fi, GPU, power management, etc. etc. Through virtualization, the guest OS doesn't have to worry about constantly evolving hardware models.
Virtualized OS performance is already very good, and USB passthrough has existed for a while. Snapshots are a godsend.
What won't work are things like CUDA for eGPUs over Thunderbolt 3, and you'll have to share disk and RAM with the host OS.
But for most use cases it's probably the right choice. (This doesn't address the author's concern about moving away from x86.)
I don't see why this would be so hard. If anything, I expect to see a massive upswing in things like AWS Graviton2 uptake, and a lot of common Docker images being built with ARM versions out of the box. It might be about a year or so, but eventually we'll be able to just go ARM-native the whole way.
What Apple needs to do is make a first-class, WSL-tier implementation of Docker for Mac for ARM.
This has no chance of happening. The common cloud CI systems don't support ARM at all (Travis, Circle CI and co). Only a minority of developers have MacBooks, and the rest are not going to spend $2000 to buy one just to build some Docker images.
https://docs.travis-ci.com/user/multi-cpu-architectures
GitHub lists it as a feature now:
https://github.com/features/actions
I'd be very surprised if this didn't become more common given the high levels of interest people are showing towards ARM server offerings in the cloud space.
I travel the world meeting developers from multiple communities. I very rarely see one without a MacBook.
I benchmarked some `t2a.nano`s against some `a1.medium`s and found that the `nano`s were sufficient for my needs, so I went with them (they are cheaper than `a1.medium`s, even if the `a1`s have a better price-to-performance ratio).
I didn’t find it too difficult to rebuild any of these projects for cross-architecture usage. Even Janus, which has a TON of C/C++ dependencies (some of which have to be compiled from a particular version of the source) easily built for ARM with no change in the Dockerfile.
So I kind of feel like OP is exaggerating the effort required to migrate servers to ARM. Sure it might be a hassle when you have tons of microservices, but you can move them incrementally, and most things recompile with no changes. And regardless of what architecture your dev machine is, you’ll want to be able to compile for and work with both architectures if you want to get the most out of the infrastructure on offer in 2020.
[1] Shameless plug: https://chrisuehlinger.com/blog/2020/06/16/unshattering-the-...
Did you notice at the end that you did NOT end up choosing ARM? You ended up going with x86_64 because that's what made more sense for your backend. That's part of my point - developers should choose their backend architecture based on the performance and pricing of their backend, not their development laptop. And if that decision is "we should keep using x86", then there will be a big performance hit in development.
This is just another CPU story, no big deal.
Having a proper competitor for x86/x64 is a good thing.
The fact that Docker is slower on ARM (at the moment!) is mostly due to a lack of interest in optimization.
With Apple starting to produce ARM Macs, and maybe more ARM servers in the wild, Docker (and other platforms/frameworks) will start to get more performant on ARM as well.
one can’t run an x86 OS on an ARM architecture
This is the limitation. There is an ARM version of Windows, but the comments from Microsoft don't sound terribly promising: “Microsoft only licenses Windows 10 on ARM to OEMs. We have nothing further to share at this time.” [1]
And Apple has more firmly stated that this won't be an option: “We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. “Purely virtualization is the route. These hypervisors can be very efficient, so the need to direct boot shouldn’t really be the concern.” [1]
1: https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...

I have no idea what plans they might have, but I would be surprised if you couldn’t install some ARM Linux distros on their laptops sometime next year.
> However, this would take months
(infomercial arms)
I'm sure it won't make sense for everyone, but I'm just as sure it will make sense for many.
... thus making Android development better on Macs?
It won't be a trivial task (hoping for pre-existing code to port over, maybe?) but we have the other pieces already, like using Hypervisor.framework for x86, and being able to cross-compile the other code for arm64, so that would be the only major task left.
On the subject of better GPU support, it depends on what it's actually like using the drivers, but from previous experience with the GPUs and drivers shipped with macOS, there shouldn't be any special kind of trouble at least. We may have to use Metal if Apple also gets rid of OpenGL support on those new machines, but there are existing translators from GLES and Vulkan to Metal. The graphics hardware itself is actually the least of our worries due to how consistent it is likely to be - we'd have to deal with a much smaller set of hw/driver quirks versus other host OS platforms.
OpenGL is deprecated but still supported on Apple Silicon, even for arm64 apps.
At best, you can say that you can now run ARM-only games at native speed, but as a developer you won't really notice much difference (assuming the processors aren't slower than Intel's).
m6g, c6g, r6g each support 6 sizes for a total of 24
C6g, M6g, and R6g (powered by AWS Graviton2) each support 8 sizes, along with bare metal. A1 instances (powered by AWS Graviton) have 5 sizes, along with bare metal.
That's a total of 33 distinct instance sizes.
That still leaves storage optimized and GPU optimized instances missing. I'm guessing storage should be easy enough to add, but what about GPU?
From my novice experience with GPUs, they need finicky drivers that must be ported by the GPU manufacturers, so I figure it might take a while to get competitive ARM GPU instances.
That doesn't sound right to me. Perhaps on I/O-bound tasks, if you are using emulated devices. On CPU-bound tasks you should see near-native performance.
Slightly longer (but not much) answer: https://www.youtube.com/watch?v=Hg9F1Qjv3iU&feature=youtu.be...
Apple is finally killing x86.
At last, the future will surely be bright!
Is there any suggestion that the architecture Apple is using is any different from what lots and lots of other licensees use? If not, then it's much more open than x86.
If you mean that you can't buy an ARM CPU today to plug into your own motherboard, then understood, but that's probably just a matter of time now. At least making such a CPU is possible - no one is going to make x86 more open.
- Bgfx
- Blender
- Boost
- Skia
- Zlib-Ng
- Chromium
- cmake
- Electron
- FFmpeg
- Halide
- Swift Shader
- Homebrew
- MacPorts
- Mono
- nginx
- map
- Node
- OpenCV
- OpenEXR
- OpenJDK
- SSE2Neon
- Pixar USD
- Qt
- Python 3
- Redis
- Cineform CFHD
- NumPy
- Go
- V8

The reality seems to be that their top macOS developers have been busy laying groundwork for the ARM transition. There's so much to be done.
I'm guessing new ARMv8 ISA features, PAC/BTI/MTE?
I use virtualization continuously, but not for anything that needs to be as fast as possible.
I won’t hesitate to get an ARM Mac once I can run x64 Windows VMs on it. (Presumably VMWare Fusion or Parallels, and for once I won’t feel ripped off by the upgrade pricing.)
Docker on Mac doesn’t work that well today, so I don’t have any workflows that depend on it.
If you want to develop containers for x86 systems on an ARM system, you'll have to cross-compile your containers, which I'm not sure Docker actually supports outside of emulation.
If you are only a consumer of containers, most of the popular ones have been compiled for multiple architectures.
Perhaps this will spur some people over to running ARM servers in the cloud...
We can have the experience of developing on ARM to deploy on x86 right now with ARM workstations. A 16 core barebones costs around $700
No need to take months. `docker buildx` can build multi-arch images without using real ARM instances.
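Roughly, the invocation looks like this (printed, not executed, since it needs a Docker daemon with a buildx builder configured; the image name is made up):

```shell
#!/bin/sh
# Sketch of a multi-arch image build with docker buildx. Printed rather
# than executed: it needs a Docker daemon with a buildx builder.
# "myorg/myservice" is a made-up image name.
PLATFORMS='linux/amd64,linux/arm64'
BUILD="docker buildx build --platform $PLATFORMS -t myorg/myservice:latest --push ."
echo "$BUILD"
```

One build produces a manifest covering both architectures, so the same tag works on an ARM laptop and an x86 server.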
And it’s 20% slower? Well, most of the build time is slow for all sorts of reasons. I honestly don’t think I’ll even notice.
Honest question: What sort of development regularly requires using docker or virtualized OSes?
1) I wanted to provide a Jupyter notebook with IBM DB2 support for a university course. (Why DB2? Because its optimizer can transparently use materialized views for query optimization which PostgreSQL can't AFAIK.) IBM provides a Jupyter magic which requires the Python package ibm_db. ibm_db requires a DYLD_LIBRARY_PATH hack which macOS doesn't support unless you disable System Integrity Protection. I don't want to disable SIP on my system and I can't ask our students to do that. A Docker image provides a convenient solution to the problem.
2) I do a lot of disparate project development using Flink and Hadoop. These get deployed on Linux machines. My preferred way to develop on my Macbook is to boot up a headless Ubuntu system inside VirtualBox, SSH into it, and then start a TMUX session. This has a number of advantages. a) The dev environment more closely resembles the production environment. b) I can setup my TMUX session, save the state of the VM, and then restore it months later to the exact state just by booting up the VM. c) I don't have to pollute my macOS environment with dev tools that I rarely use. It's not strictly necessary but it's actually quite convenient. I can even do development in IntelliJ on macOS, run the services inside the VM, and thanks to remote JVM debugging use the IntelliJ debugger on macOS to step through the code running inside the JVM.
If you sit back for a second, you can see that things like systemd provide nearly everything you'd want from a container for the top ten languages - to the point where "who cares?" is an approach you could take.
If you are running a massive stack then maybe you do need Docker and VirtualBox but at this point, shouldn't you be using a staging server anyway?
Almost all the compiler toolchains for embedded devices will not run under anything but some esoteric specific version of Linux or Windows.
I generally write code on Mac native (C or C++), then put it into a VM configured with all the right tools and such to build and install.
Usually that VM is a copy of the VM I use for CI/CD with gitlab.
I don't know how common that use case is, but I know everybody I know who does embedded work does it that way or something like it.
Also, there are ARM images for Docker. You don't HAVE to run x86-64 binaries.
EDIT: I suppose I should clarify; I don't totally disagree. I personally run Ubuntu on my laptop and servers. But plenty of people are quite happy developing on Darwin and deploying to some sort of GNU/Linux.
I think their reasoning is no longer valid. The hardware has gotten worse (keyboard, Touch Bar), the OS has gotten more hostile, and meanwhile the state of Linux on laptops has gotten a lot better. So... I used to understand them, but I no longer do.
https://i.kym-cdn.com/photos/images/newsfeed/001/016/674/802...