It sounds like you're saying that the only productivity gain from ARC (automatic reference counting) is that you don't have to manually annotate what type of pointer you want. I don't agree with that.
Yes, if you forget to mark a ref-counted pointer as weak, you may get a memory leak. Memory leaks can be disastrous, but they can also be benign, and either way they're better than a use-after-free.
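To make the leak-vs-weak trade-off concrete, here's a minimal C++ sketch (names like `Node` and `peer` are my own invention, not from any real codebase) showing the same failure mode with `std::shared_ptr`: forget to make the back-edge weak and the objects leak, but nothing is ever freed out from under you.

```cpp
#include <cassert>
#include <memory>

// Hypothetical two-node example: each node strongly references its
// peer, so neither refcount can reach zero.
struct Node {
    std::shared_ptr<Node> peer;      // strong edge: can form a cycle
    std::weak_ptr<Node>   weak_peer; // weak edge: breaks the cycle
    static inline int live = 0;      // counts currently-live Nodes
    Node()  { ++live; }
    ~Node() { --live; }
};

void leaky() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->peer = b;
    b->peer = a;                     // cycle: a <-> b
}   // both Nodes leak; Node::live stays at 2

void fixed() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->peer = b;
    b->weak_peer = a;                // back-edge is weak
}   // both Nodes are destroyed here; Node::live is unchanged
```

The leak in `leaky()` is a bug, but the program keeps running correctly otherwise; that's the "benign" failure mode ARC trades use-after-free for.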
In C++ it is easy to create a raw pointer or reference (with & or *), and it's unsafe. Soon enough, you have a lambda inside another function, but the lambda is executed after the enclosing function returns, and you've captured a local variable by reference with &. Oops. You thought the lambda would run during the enclosing function's execution, but you misread the API you were using.
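Here's a small sketch of that trap (the `run_later` API is hypothetical, standing in for any framework that stores a callback instead of invoking it immediately). The by-reference capture is shown only as a comment, since actually invoking it is undefined behavior:

```cpp
#include <functional>

// Hypothetical async-style API: a caller might assume the callback
// runs inside run_later(), but it is stored and invoked later.
static std::function<int()> stored;

void run_later(std::function<int()> cb) { stored = std::move(cb); }

int schedule_and_drain() {
    {
        int local = 42;
        // BUG (don't do this): [&local] captures a reference to a
        // stack variable that dies when this scope exits, so calling
        // the stored callback later is a use-after-free:
        // run_later([&local] { return local; });

        // Safe: capture by value; the closure owns its own copy.
        run_later([local] { return local; });
    }   // `local` is gone here
    return stored();  // fine: the copy lives inside the closure
}
```

Nothing in the language stops you from writing the commented-out version; it compiles cleanly, and whether it blows up depends entirely on when the callee runs your lambda.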
IMO the big productivity gain is being able to write code where I don't have to think too hard about whether it is memory-safe. Modern C++ makes this easier, but languages with ARC (like Objective-C or Swift) make it easier still. Code is mostly safe by default, and you can visually inspect it for unsafe behavior more easily than you can in C++.
There are also hybrid options that combine ref-counting with a tracing GC. CPython uses this approach: reference counting reclaims most objects promptly, and a cycle-detecting collector cleans up the reference cycles that ref-counting alone would leak.