Modern C++ is really safe if you stick to the subset that uses automatic storage duration, well-bounded arrays, and the like, and if you enable all of your compiler's warning flags, run static analysis, have a robust test framework, etc.
Consider iterator invalidation, null pointer dereference (which is undefined behavior, not a segfault -- and you can't get away from pointers because of "this" and move semantics), dangling references, destruction of the unique owner of the "this" pointer, use after move, etc. etc.
* Iterator invalidation: if you destroy the contents of a container that you're iterating over, you get undefined behavior. This has resulted in actual security bugs in Firefox.

    std::vector<MyObject> v;
    v.push_back(MyObject());
    for (auto& x : v) {
        v.clear();       // invalidates the iterators backing the range-for
        x.whatever();    // UB: x refers to a destroyed element
    }
* "this" pointer invalidation: if you call a method on an object to which a unique_ptr or shared_ptr holds the only reference, there are ways for the object to cause the smart pointer holding it to let go, leaving the "this" pointer dangling. The simplest way is to store the object in a global variable and have the method overwrite the contents of that global. std::enable_shared_from_this can fix it, but only if you use it everywhere and use shared_ptr for every object you plan to call methods on. (Nobody does this in practice because the overhead, both syntactic and at runtime, is far too high, and it doesn't help for the STL classes, which don't do this.)

    class Foo;
    std::unique_ptr<Foo> inst;
    class Foo {
    public:
        virtual void f();
        void kaboom() {
            inst = nullptr;  // destroys the object inst owned
            f();             // UB if *this was that object
        }
    };
* Dangling references: similar to the above, but with arbitrary references. (To see this, refactor the code above into a static method with an explicit reference parameter: observe that the problem remains.) No references in C++ are actually safe.
* Use after move: obvious. Undefined behavior.
* Null pointer dereference: contrary to popular belief, null pointer dereference is undefined behavior, not a segfault. This means that the compiler is free to, for example, make you fall off the end of the function if you dereference a null pointer. In practice compilers don't do this, because people dereference null pointers all the time, but they do assume that pointers that have been successfully dereferenced once cannot be null and remove those null checks. The latter optimization has caused at least one vulnerability in the Linux kernel.
Why does use after free matter? See the page here: https://www.owasp.org/index.php/Using_freed_memory
In particular, note this: "If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved." This happens a lot -- not all use-after-free is exploitable, of course, but it happened often enough that all browsers had to start hacking in special allocators to try to reduce the possibility of exploitation of use-after-frees (search for "frame poisoning").
Obligatory disclaimer: these are small code samples. Of course nobody would write exactly these code examples in practice. But we do see these issues in practice a lot when the programs get big and the call chains get deep and suddenly you discover that it's possible to call function foo() in one module from function bar() in another module and foo() stomps all over the container that bar() was iterating over. At this point claiming that C++ is memory safe is the extraordinary claim; C++ is memory safe neither in theory (as these examples show) nor in practice (as the litany of memory safety problems in C++ apps shows).