All of these extensions are mandatory in the RVA22 and RVA23 profiles, so they will be implemented on any up-to-date RISC-V core. It's definitely worth setting your compiler target appropriately before making comparisons.
The RISC-V ecosystem being handicapped by backwards compatibility does not make sense at this point.
Every new RISC-V board is going to be RVA23 capable. Now is the time to draw a line in the sand.
Nobody really forces you to use x64 if you don't like it, just as nobody forced you to use Itanium — which Intel famously failed to "shove down the customers' throats" btw.
I only use it for microcontrollers and it's really nice there. But yeah, I can imagine it doesn't perform well on bigger stuff. The idea of RISC was to put the intelligence in the compiler though, not the silicon.
Depends on what the instruction does. If it goes through a four-loads-four-stores chain like the VAXen famously could (with pre- and post-increments), then sure, that makes it impossible to implement such an ISA in a superscalar, OOO manner (DEC tried really, really hard and couldn't do it). But anything that essentially bit-fiddles in funny ways with the 2 sets of 64 bits already available from the source registers, plus the immediate? Shove it in, why not? ARM has had a barrel shifter available on the second operand of almost every instruction since ARMv1. And RISC-V also finally gets the shNadd instructions, which are essentially x86/x64's SIB byte, except available as a separate instruction. It got "andn", which, arguably, is more useful than a pure NOT anyway (most uses of ~ in C are in expressions of the "var &= ~expr" variety) and costs almost nothing to implement. Bit rotations too, plus rev8 and brev8. Heck, we even got max/min instructions in RISC-V, because again, why not? The usage is incredibly widespread, the implementation is trivial, and it makes life easier both for HW implementers (no need to try to macro-fuse common instruction sequences) and for SW writers (no need to invent those instruction sequences and hope they'll get accelerated, nor to read manufacturers' datasheets for "officially" blessed sequences).
Itanium made this mistake. Sure, compilers are much better now, but dynamic scheduling still beats static scheduling for real-world tasks. You can (almost perfectly) statically schedule a matrix multiplication, but not a UI or a 3D game.
Even GPUs have some amount of dynamic scheduling now.
Less than 7 years from ratification of the initial RV{32,64}GC spec.
Less than 5 years from the first mass-produced, roughly-original-Raspberry-Pi-level $100 SBC: the AWOL Nezha, shipped June 2021.
Nope. See https://github.com/llvm/llvm-project/issues/110454 which was linked in the first issue. The spec authors have managed to make a mess even here.
Now they want to introduce yet another (sic!) extension, Oilsm... It maaaaaay become part of RVA30, so in the best-case scenario it will be decades before we can rely on it widely (especially considering that RVA23 is likely to become heavily entrenched as "the default").
IMO the spec authors should've mandated that the base load/store instructions work only with aligned pointers and introduced misaligned instructions in a separate early extension. (After all, passing a misaligned pointer where your code does not expect it is a correctness issue.) But I would've been fine as well if they had mandated that misaligned pointers always be accepted. Instead we have to deal with the terrible middle ground.
>atomic memory operations are made mandatory in Ziccamoa
In other words, forget about potential performance advantages of load-link/store-conditional instructions. `compare_exchange` and `compare_exchange_weak` will always compile into the same instructions.
And I guess you are fine with the page size part. I know there are huge-page-like proposals, but they do not resolve the fundamental issue.
I have other minor performance-related nits, such as the `seed` CSR being allowed to produce poor-quality entropy, which means we have to bring a whole CSPRNG if we want to generate a cryptographic key or nonce on a low-powered microcontroller.
By no means do I consider myself a RISC-V expert; if anything, my familiarity with the ISA as a systems programmer is quite shallow. But the number of disappointments accumulated even from such shallow familiarity has cooled my enthusiasm for RISC-V quite significantly.
The ISA is open so there's no greedy corporation trying to upsell you. I mean there's an implementation and die area cost for each extension but it's not being set at an artificial level by a monopolist.
Ryanair is about exploiting consumers, with bait-and-switch and shitty terms and conditions.
RISC-V's modularity is about giving choice to hardware designers, so they can pick and choose just those features that their solution needs, and even allow for custom extensions.
RISC-V's modularity is for academia. 1) for education, where students learn/use/work on simple processors, 2) for research in new types of hardware and extensions, where ease of implementation or ease of creating a custom extension is important.
And where it actually mattered, they did not introduce a separate extension. Integer division is significantly more complex than multiplication, so it may make sense for low-end microcontrollers to implement only the latter in hardware.
I would be ok with that if it was a valid analogy.
It is valid in microcontroller land. There, the chip and the software are provided by the same party. So you can select for exactly the RISC-V features you need and save yourself some silicon. That sounds like a win to me.
At the application level, like a server or a desktop, that would be a disaster, because I get my hardware and software from different people. How do the software guys know what hardware to target? Well, that is exactly why RVA23 exists.
What does RVA23 mean? It is the RISC-V "Application" profile. It allows you to build software to a single hardware target and trust that hardware makers will target the same profile. RVA23 is like saying x86-64v4. Both are simple names for a long list of extensions (flags) and assumptions that you expect the hardware to honour. So, when Ubuntu 26.04 says it requires RVA23, it means that all the software built on it can assume those features. No à la carte.
The reason RVA23 is getting so much attention is that it has essentially the same feature set as modern ARM64 or x86-64. Software will be able to target this profile for a long time. There may be a new profile in a few years' time, like RVA30, but hardware that implements that will still run RVA23 software (just as x86-64v4 hardware will run x86-64v1 software). Hardware built for profiles before RVA23 may be missing features modern applications expect.
I guess you could say that RVA23 is British Airways Business Class.
If you really want to support hardware designed before RVA23, almost everything you would want to run pre-built software on supports RVA20. And again, your RVA20 stuff will run fine on RVA23 hardware (but with fewer features, like no vectors). So maybe no in-flight meal, but it will get you there.
As for `seed`, if you're running on a microcontroller you can just look up the data sheet to see if its seed entropy is sufficient. By the time you get to CPUs where portable code is important, a CSPRNG is probably fine.
I agree about page size though. Svnapot seems overly complicated and gives only a fraction of the advantages of actually bigger pages.
It's a terrible attitude to have towards programmers, but looking at misaligned ops, I guess we can see a pattern from RISC-V authors here.
Most programmers do not target a concrete microcontroller and develop every line of code from scratch. They either develop portable libraries (e.g. https://docs.rs/getrandom) or build their projects using those libraries.
The whole raison d'être of an ISA is to provide a portable contract between hardware vendors and programmers. RISC-V authors shirk this responsibility with a "just look at your micro specs, lol" attitude.
aka, Zicclsm / RVA23 are entirely useless as far as actually getting to make use of native misaligned loads/stores goes.
If you started from a clean sheet today you'd probably end up with a somewhat bigger base page size. Not hugely larger though, as that wastes a lot of memory for most applications. Maybe 16k, like some ARM chips use?
x86-64 also has "profiles" which tell you what extensions should be available. There are x86-64v1 and x86-64v4, with v2 and v3 in the middle.
RVA23 offers a very similar feature-set to x86-64v4.
You do not end up with a mess of extensions. You get RVA23. Yes, RVA23 represents a set of mandatory extensions. The important thing is that two RVA23 compliant chips will implement the same ones.
But the most important point is that you cannot “just use x86-64”. Only Intel and AMD can do that. Anybody can build a RISC-V chip. You do not need permission.
No, not anybody can build a RISC-V chip. That’s the same mistake OSS proponents make. Just because something is open source doesn’t mean bugs will be found. And just because bugs are found doesn’t mean they will be fixed. The vast majority of people can’t do either.
The number of people who can design a chip implementation of the RISC-V ISA is much, much smaller, and the number who can get or own a fab to manufacture the chips is smaller still. You don’t need permission to use the ISA, but that is not the only gate.
2. Also, fundamentally all modern CPUs are still a 64-bit version of the 80386. MMU, protection, and the low-level details are all the same.
Uh, because you can't? It's not open in any meaningful sense.