Offer customers iOS apps and games on the next MacBook as a straight swap for Boot Camp and Parallels. Once they've moved everyone over to their own chips and brought back Rosetta and Universal Binaries, they're essentially free to replace whatever they like at the architecture level.
In their reveal I noticed that they only mentioned ARM binaries running in virtual environments. That makes sense if you don't want to commit to supporting GNU tools natively on your devices, as it would mean sticking with an established ISA.
Apple is large enough that if they want to break from ARM in the future, they can do so by forking an LLVM backend. That's not a large job if the ISA changes are small, and once it's been done they have plenty of resources to provide ongoing support (as they do for WebKit).
The dividends of doing so are potentially massive. Given the really large gains they've made with the A-series chips on mobile to date (not least because they've been able to offload tasks to dedicated co-processors that general-purpose ARM cores don't ship with), the rewards of having chips a generation ahead of like-for-like computers would outweigh the cost of maintaining the software.
I guess they have some ISA flexibility (which is remarkable). But not much; each transition was still a very special set of circumstances and a huge hurdle, I'm sure.
As in, most of the work occurs in the low-level parts of the operating system. After that, the OS should abstract the differences away from user space.
First of all: there's lots of software that's not the OS. The OS is the easy bit; everything else is a grindy, grindy horror story. A lot of that code will be third-party. And if you think, "hey, we'll just recompile!", and you can actually get them to do it, well, good luck, but performance will be abysmal in many cases. Lots and lots of libraries have hand-tuned code for specific architectures. Anything with vectorization (despite compilers being much better than they used to be) may see huge slowdowns without hand tuning. That's not just speculation; you can look at software that misses out on the vectorization treatment, or was ported from x86 to ARM poorly: performance falls off a cliff.
Then there's the JITs and interpreters, of which there are quite a few, and they're often hyper-tuned to the ISAs they run on. They also can't afford to run something like LLVM on every bit of output; that's way too slow. So even non-vectorized code suffers (you can look at some of the .NET Core ARM development to get a feel for this, but the same goes for JS, Java, etc.). Web browsers are hyper-tuned; so are regex engines, packet filters, and so on.
Not to mention: getting a compiler like LLVM to support a new ISA as well as it supports x86 or ARM is no small feat.
Finally (at least at this point, until our AI overlords render it redundant): all this work takes expertise, and that expertise takes training, which isn't easy on an ISA without hardware. That's why Apple's current transition is so easy: they already have the hardware, and the trained experts, some with over a decade of experience on that ISA! But if they really want to go their own route... well, that's tricky, because what are all those engineers going to play around on to learn how it works, what's fast, and what's slow?
All in all, it's no coincidence transitions like this take a long time, and that's for simple (aka well-prepared) transitions like the ones Apple's doing now. Saying they have ISA "flexibility", as if ISAs were somehow interchangeable, completely misses how tricky those details are and how much they matter to how achievable such a transition is. Apple doesn't have general ISA flexibility; it has a costly route from specifically x86 to specifically ARM, and nothing else.
Also, the small tweaks they'll likely want from time to time should be "easy" to follow internally, if you can orchestrate everything from top to bottom and back.