Except for "OMG-API-3: Unsigned integers are preferred over signed". I feel they're on the wrong side of history with this one.
"Prefer unsigned" only works if you can write 99% of your codebase that way, which, outside of LLVM, probably doesn't work for anyone. Keeping a codebase 99% signed is much more feasible. The worst case is a codebase with plenty of both, which guarantees endless subtle bugs and/or a ton of casts. That's what they'll end up with.
Apparently the C++ committee agrees that size_t being unsigned was a huge mistake (reference needed), and I would agree. Related discussion: https://github.com/ericniebler/stl2/issues/182 https://github.com/fish-shell/fish-shell/issues/3493 https://wesmckinney.com/blog/avoid-unsigned-integers/
Even LLVM has all this comical code dealing with negative values stored in unsigneds.
The idea that you should use unsigned to indicate that a value can't be negative is also pretty arbitrary. In almost all cases your integer type doesn't represent the valid range of values; enforcing the range is an unrelated concern.
But I am going to defer to authority here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p142...
In retrospect, I think I was wrong and that "unsigned as default" works better.
I think the domain is important here. We're building a game engine, so there's actually plenty of bit fiddling. We also make use of the "overflow wraparound" of unsigneds in a lot of places.
I think in our case having 99 % unsigned is more feasible than having 99 % signed. There are actually not many things that we would need to use a signed integer for.
FYI, in our codebase right now we have:

    10038  uint32_t
      123  int32_t
     7843  uint64_t
       58  int64_t
So at the moment we're actually 98.998 % unsigned.
I know it's not 99 %, but it's pretty close :)
It helps that I get to compile all my code with -Wconversion.
#pragma once
#ifdef __cpluspus // <-- should be __cplusplus
extern "C" {
#endif
#include "api_types.h"
// ...
#ifdef __cplusplus
}
#endif
Can't say I'm a fan of OMG-CODEORG-3; however, it sounds like compilation time is a key metric for them. I prefer a John Lakos style "physical components" setup, which emulates a type-as-module inclusion style. At least OMG-CODEORG-3 clearly states that include order becomes important as a result.

I wasn't sure about OMG-CODEORG-3 in the beginning either, but after using it for over a year and a half now, I'm strongly in favor.
The only situation where inclusion order matters is when there's (pseudo-) inheritance, and we don't use that a lot, so in practice it is not a big issue.
Actually, I've had MORE problems with inclusion order in previous projects that didn't use this rule. What would happen is that some header (included from some other header, included from some other header) would include <windows.h>. Then some other header (from some other header, etc) would include something that conflicts with the (many) #defines in <windows.h>.
Trying to sort out this mess was always a PITA. First you have to figure out where the include is coming from. Then you have to figure out how to fiddle with the include order and the defines to fix it. When using OMG-CODEORG-3, this is pretty simple, because all the includes happen in the .c file, so it is easy to rearrange them to fix include order problems. Not so easy when the includes are scattered all over multiple .h files.
Another big win with OMG-CODEORG-3 is that you see exactly what other pieces of code the .c file depends on; you don't need to follow multiple header chains to figure it out. You also only depend on the things you really need, which is nice. In projects with liberal header inclusion, dependencies can grow as O(n^2), which increases complexity.
Along the same lines, the template "cute tricks" are where you get your performance, stability, and readability from in C++. I definitely agree that you should drop into assembly to see what the compiler is doing with your code, but that can and should apply to heavily templated code too.
On large projects bad header hygiene can cause significant compilation overhead.
This can have surprising and sometimes unpleasant consequences; see https://0.30000000000000004.com
No ambiguity for the programmer about what the underlying units are, and no unnecessary int/float conversions. All the bookkeeping and conversions are handled by the compiler with zero run-time size or performance overhead.
Using a uint64_t (instead of uint32_t or double) to carry "opaque ticks", plus a handful of conversion functions to convert to real-world time units, is fine and just a few lines of code.
http://www-users.math.umn.edu/~arnold//disasters/patriot.htm...
view-source:https://ourmachinery.com/files/guidebook.md.html
<meta charset="utf-8" emacsmode="-- markdown --">
Ahh, straight out of the zen of Python!
$ python -c "import this"