> A serious mistake in C.
Well, that's arguable. This "mistake" could be fixed in C tomorrow without breaking the semantics of any existing C code, yet it hasn't been "fixed" in any of the latest C standards, so perhaps it's still there for a reason.
And the reason it hasn't been "fixed" is that compilers can optimize code better if they can assume that signed addition won't overflow.
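To make that concrete, here's a sketch (function name is mine) of the kind of loop where the no-overflow assumption pays off. Because signed overflow is undefined behavior, the compiler may assume `i` never wraps, so `i <= n` must eventually become false; it can then prove the loop runs exactly `n + 1` times for non-negative `n` and optimize accordingly. With wrapping semantics (e.g. -fwrapv), `n == INT_MAX` would make the loop infinite, blocking that transformation.

```c
#include <assert.h>

/* Hypothetical example: with signed-overflow UB, a compiler can reason
 * that i never wraps, so this loop terminates after n + 1 iterations
 * (for n >= 0) and may be rewritten as a closed-form expression. */
long count_iterations(int n) {
    long c = 0;
    for (int i = 0; i <= n; i++)  /* compiler assumes i + 1 never overflows */
        c++;
    return c;
}
```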
So it's more of a trade-off than strictly a disadvantage.
It's also something you can "fix" in your own code if you want to, by passing a compiler flag (-fwrapv in gcc), although arguably at that point your code wouldn't be strictly C-standard compliant anymore. Or by using a library that implements wrap-around arithmetic, which could be written in standard C.
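A minimal sketch of what such a library helper could look like (the name `wrapping_add_i32` is mine, not from any real library): unsigned overflow is well defined in C, so you can do the addition in unsigned arithmetic and map the result back to the signed range portably.

```c
#include <stdint.h>

/* Hypothetical helper: two's-complement wrap-around signed addition,
 * implemented entirely in standard C. Unsigned addition wraps modulo
 * 2^32 by definition; the ternary converts the result back to int32_t
 * without relying on implementation-defined unsigned->signed conversion. */
static int32_t wrapping_add_i32(int32_t a, int32_t b) {
    uint32_t u = (uint32_t)a + (uint32_t)b;  /* well-defined wrap */
    return (u <= INT32_MAX)
        ? (int32_t)u
        : (int32_t)(u - (uint32_t)INT32_MAX - 1) + INT32_MIN;
}
```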
> Another example is that Go requires explicit integer casts (disallowing implicit integer casts) to avoid what is now understood to be an enormous source of confusion and bugs in C.
I agree with you on this, although forcing explicit casts also makes the code more verbose and can make it harder to understand what's going on.
I think a balanced approach is requiring explicit casts only for the "tricky" cases, i.e. when values might be truncated, and possibly also when sign extension might produce something the programmer didn't expect.
But if I were to design a language I'm not sure that I would require explicit casts for e.g. promoting a uint16_t to a uint32_t...
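For illustration, here's what those three cases look like in C today (function names are mine): the narrowing and sign-extension conversions happen implicitly and silently, while the widening one is always value-preserving, which is why I'd only require a cast for the first two.

```c
#include <stdint.h>

/* Tricky case 1: implicit narrowing silently truncates modulo 2^16. */
uint16_t narrow(uint32_t x) { return (uint16_t)x; }

/* Tricky case 2: a signed value is sign-extended before conversion,
 * so a small negative int8_t becomes a huge uint32_t. */
uint32_t sign_extend_surprise(int8_t x) { return (uint32_t)x; }

/* Safe case: widening uint16_t -> uint32_t always preserves the value,
 * so arguably no explicit cast should be required here. */
uint32_t widen(uint16_t x) { return x; }
```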
> You can understand Go as an improved C, designed for a world where parallel computing (e.g., dozens of CPU cores) is commonplace.
That's a bit of a hot take :) Let me know when the Linux kernel starts to get rewritten in Go ;)