I'm confused by this. The third argument provides the destination length, so what good would a "maximum destination length" do? I guess he must mean that because the length is often computed, you'd need a fourth argument to ensure the length isn't greater than some sane upper bound. But you can easily fix that using an if statement around the memcpy.
Maybe memcpy_oobp (out of bounds protection) signature could be:
memcpy_oobp(void *dst, size_t dst_size, const void *src, size_t src_size);
Then again, I guess you could just as well do: memcpy(dst, src, min(dst_size, src_size));
But having to explicitly specify both destination and source sizes might have prevented a lot of buffer overwrite bugs.

A good way to prevent this is to have a buffer abstraction, where the size is a property of the type, e.g.,
typedef struct {
    size_t bytes_used;
    size_t capacity;
    void *data;
} buf_t;
int buf_init(buf_t *buf);
void buf_cleanup(buf_t *buf);
void buf_copy(buf_t *dst, buf_t *src);
/* ... */
Of course, it doesn't prevent people from using memcpy directly.

There's also Microsoft's memcpy_s(void *dest, size_t destSize, const void *src, size_t count);
which is effectively equivalent to your memcpy_oobp function. However, the Microsoft function also returns an error code which must be checked (because count might be larger than destSize), thus providing another way for the programmer to screw up. I'm not sure if this is better or worse than just copying the min() as in your second example. It probably depends on the situation.
And yes, having something like "if (strlcat(buffer, src, sizeof(buffer)) >= sizeof(buffer)) { abort(); }" is much better than a buffer overrun. But security does not always seem to be a real concern, compared to politics.
C is dangerous partly because of swaths of undefined behaviour and loose typing. Eliminating much of the undefined behaviour, either by defining it or by forcing the compiler to refuse to compile code that invokes it, could be of some help. There are still classes of undefined behaviour that cannot be worked around, but narrowing them down to a minimal set would make them easier to deal with. Strong typing would help build programs that won't compile unless they are correct, at least in terms of the types of values.
C is dangerous partly because of the stupid standard library, which isn't necessarily a core language problem since other libraries can be used. The standard library should be replaced with one of the sane libraries that different projects have written for themselves to avoid using libc. It's perfectly possible to do without minefields like memcpy() or strcpy(), or strtok() with its nice invisible access to internal static storage (fixed by the re-entrant variant strtok_r()), or strtol(), which requires multiple checks just to determine how it actually failed. The problem here is that if there are X standards, adding one to replace them all will make it X+1 standards.
Yet, good programmers already avoid 99% of the problems by manually policing themselves. For them, C is simple, productive, and manageable in a lot more cases and domains than it is for the less experienced programmers.
I really wish Bell Labs had been allowed to sell UNIX.
Kernel drivers and embedded system bare metal firmware.
The problem with C is that in any bigger project something always slips through even the best programmers, reviewers, static analysis and unit tests. And that something can lead to disastrous crashes and security vulnerabilities.
You can still overwrite memory, but it suddenly becomes much less likely.
That's also true of all the other languages.
C is a very useful language and one you basically have to know if you're interested in low level software but it's very, very far from flawless.
If you look at many high-profile software vulnerabilities of late (Heartbleed, goto fail, etc.), many can be traced to the lack of safety and/or bad ergonomics of the C language.
We need to grow up as an industry and accept that using a seatbelt doesn't mean that you're a bad driver. Shit happens.
This doesn't actually refute the assertion that C is dangerous :)
Control and increased safety are not mutually exclusive. I'll take safe-by-default, unsafe-when-asked any day. It's not 1972 anymore.
Even if the complete userspace of AIX, HP-UX, *BSD, GNU/Linux, OS X, iOS, Solaris, ... gets rewritten in something else, there will always be the kernel written in C.
Hence why improving C's lack of safety is so important to get a proper IT stack.
I've always felt that C is near the sweet spot. I'd rather see a minimal change to C that broke backwards compatibility (because it has to) and fixed the top ten simple problems.