Google's C++ Style Guide discourages using unsigned ints to represent nonnegative numbers (like sizes or counts). It recommends using runtime checks or assertions instead.
http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...
Unsigned ints make sense for bit twiddling, but you should probably use a fixed-size uint32_t or uint64_t to ensure the results are consistent across various architectures.
The problem is that the riskiest place for a signed/unsigned mismatch is calling an unsigned API with a signed value. Simply deciding not to use unsigned at all doesn't fix this, because ANSI C and the STL use unsigned types throughout (e.g., memcpy):
    if (size <= 10) {
        // Yay, I have plenty of space
        memcpy(buffer, src, size);
    }
The code looks fine, but if "size" is an int with the value -1 there's a hard-to-spot bug: the check passes, and the conversion to memcpy's size_t parameter turns -1 into a huge value. Plenty of security holes have been caused by just this sort of mistake. If you don't fight against the types that libc uses, you don't have this problem. There will still be spots where you need to compare signed and unsigned values, but the compiler will warn you about those. You'll have to cast one side or the other, and that's a GOOD thing: since neither a signed compare nor an unsigned compare is always what you want, you should be explicit about it.
There are other advantages to using unsigned types. For instance, it gives an explicit hint to the person reading the code about the range of the value. I think this makes interfaces clearer. For instance if you see a function signature of "void foo(const uint8_t *, size_t)" you'll immediately guess that you're dealing with a memory buffer and its explicit size without even seeing the names of the parameters.
Actually, if I had my way "int" would default to being unsigned and you'd have to specifically request "signed" if that's what you want. I find that I probably use unsigned types 5x as often as signed ones.
This is, without doubt, the worst reason for using unsigned types, and it's the primary reason (IMHO) for the flaws in the C API that force you to use unsigned types unnecessarily. Unsigned types are not a documentation feature, and they are not merely an advert for an invariant; they opt you in to a subtly different arithmetic that surprises most people. It would be better to have range-checked types, like Pascal's, than to infect the program with unsigned arithmetic.
I find that most programs deal with values for their integer types with an absolute value of under 1000; about the only excuse for using an unsigned type, IMO, is when you must have access to that highest bit in a defined way (for safe shifting and bit-twiddling).
I think that's a "citation needed" moment there. It's true that any native integer type will behave strangely if you go outside its defined range. The only way to avoid that is to use a language that automatically converts to bignums behind the scenes (Common Lisp, etc.).
What I don't agree with is that this is something "most people are surprised by". If anything, the word "unsigned" is a pretty good hint about what behavior you'll get.
And even when you play fast-and-loose with the rules, it usually turns out ok:
    unsigned a, b, c, d;
    a = b + (c - d);
Even if d > c, this will do the expected thing on any two's-complement architecture. Now, this would break if a and b were instead "unsigned long long" while c and d stayed 32-bit, but I think that case is fairly rare -- it's not a mistake I've seen often in real life (especially compared to the dangerous "botched range check of a signed value" error). But you are correct that it's not "merely an advert for an invariant" -- it's advertising that the compiler actually reads. It gives you better warnings (I've had plenty of bugs prevented by "comparison of signed and unsigned" warnings), and it lets the compiler optimize better in some cases: compare the generated code for "foo % 16" with foo as signed versus unsigned.
> It would be better to have a range-checked types, like Pascal
Adding runtime checks to arithmetic is the kind of cost that is never going to be in C. It's no different from saying "C should have garbage collection" or "C should have RTTI". They're perfectly valid things to want in a language, but they're anathema to the niche C holds in the modern world. With C I want "a + b" to compile down to one instruction -- no surprises.
And even if you DID do a range check, what do you do when it fails?
1. Throw an exception? Sounds logical... oh wait, this is C; there's no such thing as an exception.
2. Clamp the value? Now you have behavior that is just as bizarre as an integer overflow.
3. Crash? Not very friendly.
4. Have a user-definable callback (i.e. like a signal)? What is the chance the programmer will be able to make a meaningful recovery, though?
There are, however, some additions to the C99 type system that I think would be useful -- for example, C++11's strongly typed enums are a good idea.
> I find that most programs deal with values for their integer types with an absolute value of under 1000
I find that most programs deal with values greater-than-or-equal-to zero.
Isn't this why you should compile with all warnings on?
But if you really want to eliminate the potential bug behind this warning, you have to go back through and tweak/check the values you're testing, all the way back to their source, fixing signedness along the way. At that point you may as well have settled on a default to begin with.
The real pain comes when you have to interface with external code. Even in the standard library you'll find size_t (e.g. fread(3)) and ssize_t (e.g. read(2)), so you're going to have a mismatch with one or the other.