There are uses for this, especially since some code will run in environments where you cannot simply handle faults; but it's also just cleaner this way: you don't have to worry about differences in error-recovery behavior between operating systems (and possibly CPU architectures) if you simply never generate faults in the first place.
Since there are edge cases where handling faults isn't really feasible (e.g. some kernel code), it needs to be considered unsafe in general.
In Rust, the CPU exception resulting from a stack overflow is considered safe. The compiler uses stack probing to ensure that as long as there is at least one page of unmapped memory below the stack (a guard page), the program will reliably fault on it rather than continuing to access memory further below. In most environments it is possible to set up a guard page, including Linux kernel code if CONFIG_VMAP_STACK is enabled. But there are other environments where it's not, such as WebAssembly and some microcontrollers. In those environments, the backend would have to add explicit checks to function prologues to ensure enough stack is available. I say "would have to", not "does": I've heard that at least on some microcontrollers there are no such checks, and Rust is just unsound at the moment. Not sure about WebAssembly.
Meanwhile, Go uses CPU exceptions to handle nil dereferences.
That said, I had actually entirely forgotten that Go catches nil derefs in a segfault handler. I guess it's not a big deal, since Go isn't really suited to freestanding environments where avoiding CPU exceptions is sometimes more useful, so there's no particular reason the runtime can't rely on them.