> it's code bloat smeared across the entire binary.
That's probably not true in the usual case. Most architectures are 64 bit nowadays. If you are working on something that isn't 64 bit you are doing embedded stuff, and different rules and coding standards apply (like using embedded assembler rather than pure C or Rust). In 64 bit environments only pointers are 64 bits by default; almost all integers remain 32 bit. Checking for a 32 bit overflow on a 64 bit RISC-V machine takes the same number of instructions as everywhere else.

Also, in C integers are very common because they are used as iterators (i.e., stepping along things in for loops). But in Rust, iterators replace integers for this sort of thing. There is still an integer under the hood of course, and perhaps it will be bounds checked. But that is bounds checked, not overflow checked, and 2^32 is far larger than most data structures in use. Which means that while there may be some code bloat, the scarcity of raw integer arithmetic in your average Rust program means it's going to be pretty rare.
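To make that concrete, here is a minimal sketch (my own illustration, not from the article) of the iterator point: the index-based loop does explicit integer arithmetic that a debug build would overflow-check, while the iterator version has no user-visible counter at all; the only check left is the bounds check on the slice access, which the iterator usually elides anyway.

```rust
// C-style loop: an explicit counter that overflow checks would apply to.
fn sum_indexed(data: &[u32]) -> u64 {
    let mut total: u64 = 0;
    let mut i: usize = 0;
    while i < data.len() {
        total += data[i] as u64; // bounds-checked access
        i += 1;                  // the add that overflow checking would guard
    }
    total
}

// Idiomatic Rust: the counter lives inside the iterator, no integer
// arithmetic appears in user code at all.
fn sum_iterated(data: &[u32]) -> u64 {
    data.iter().map(|&x| x as u64).sum()
}

fn main() {
    let v = vec![1u32, 2, 3, 4];
    assert_eq!(sum_indexed(&v), sum_iterated(&v));
    println!("{}", sum_iterated(&v));
}
```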
Since I'm here, I'll comment on the article. It's true the lack of carry will make adds a little more difficult for multi-precision libraries. But I've written a multi-precision library, and the adds are the least of your problems. Adds just generate 1 bit of carry. Multiplies generate an entire word of carry, and they're almost as common as adds. Divides are not so common, fortunately, but the execution time of just one divide will make all the overhead caused by a lack of carry look like insignificant noise.
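A rough sketch of what I mean (limb helpers I made up for illustration, not code from any particular library): an add spills at most one carry bit, while a 64x64 multiply spills a whole extra 64-bit word, which is why the multiply path dominates the carry handling.

```rust
// Add two limbs plus an incoming carry; the outgoing carry is at most 1.
fn add_limbs(a: u64, b: u64, carry_in: u64) -> (u64, u64) {
    let (s1, c1) = a.overflowing_add(b);
    let (s2, c2) = s1.overflowing_add(carry_in);
    (s2, (c1 as u64) + (c2 as u64))
}

// Multiply two limbs; the high half is a full 64 bits of "carry".
fn mul_limbs(a: u64, b: u64) -> (u64, u64) {
    let wide = (a as u128) * (b as u128);
    (wide as u64, (wide >> 64) as u64)
}

fn main() {
    // Add: one bit of carry out.
    let (lo, carry) = add_limbs(u64::MAX, 1, 0);
    assert_eq!((lo, carry), (0, 1));

    // Multiply: an entire word of carry out.
    let (lo, hi) = mul_limbs(u64::MAX, u64::MAX);
    assert_eq!((lo, hi), (1, u64::MAX - 1));
}
```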
I'm no CPU architect, but I gather the lack of carry and overflow bits makes life a little easier for just about every instruction other than adc and jo. If that's true, I'd be very surprised if the cumulative effect of those little gains didn't completely overwhelm the wins adc and jo get from having them. Have a look at the code generated by a compiler some time. You will have a hard time spotting the adcs and jos, because there are bugger all of them.