"Runtime", or "REPL" time, or "doc time" are all ways you can just "look at it". Do you think knowing "x" is type "Foo" tells you anything about what you can do with "x", other than use it where you can use "Foo"s? You need to figure out what you can do with "Foo" by reading the docs around "Foo". Yeah, I know a priori I can call a method on Foo that returns an int and pass that to other int-taking methods, but is that what I want to do? Which method returning an int do I want to call? You need to look at what "Foo" is to determine this. Rich's example is HttpRequestServlet in
https://www.youtube.com/watch?v=VSdnJDO-xdg I'd recommend watching that and ignoring me. A lot of the issues he raises, as contextfree points out, are around OOP rather than static typing in general so take these as my views and not necessarily his...
If "Foo" is just dynamic data, perhaps in a map, composed of other data, and it's values all the way down, any of those three points of time are easy ways to look at "Foo" and can generally be faster than reading a JavaDoc page or ctrl+spacing figuring out what you have access to and what it all means and what you want to do with the data and code associated with "Foo". Since you're going to be looking at the thing you want anyway, I find it very uncommon in practice that I accidentally divide a string by 2. Static typing would let me know before I run the thing that I was an idiot (maybe, see next paragraph), but I'll find out almost as quickly when the code I just worked on (either in the REPL, which is very nice, but even from the starting state) actually does what I want -- you do that regardless of static/dynamic typing, right? You don't just write code and never verify it does what you want, trusting that it compiled is good enough? Anyway when I run it, I'll immediately hit a runtime error (which can be handled in Lisp in ways way more flexible than exceptions in other languages) and realize my stupidity. So I don't find that particular situation or class of bugs very compelling.
Nor does the presence of declared types alone guarantee my safety -- I also need to know about any conversion rules that might let a String be converted to an int for division purposes, or whether operator overloading could be in play. Conversion rules and operator overloading are interesting regardless of the static/dynamic typing question, because too many or too few can lead to annoyances and bugs. Java lets you concatenate ints into strings with "+", Python doesn't, but Python lets you duplicate strings with "*".
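The Python side of that comparison, as a quick sketch: "*" between a string and an int repeats the string, while "+" between a string and an int raises a TypeError rather than implicitly converting the way Java's "+" does.

```python
# String duplication via overloaded "*":
result = "ab" * 3
print(result)  # ababab

# No implicit int-to-string conversion for "+", unlike Java:
try:
    "total: " + 5
except TypeError as e:
    print("TypeError:", e)
```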
What I have run into in both static and dynamic languages is division-by-zero errors, which your type system with "int" declarations will not help you with unless you go all the way to something like Shen, which supports dependent types. I've run into errors (in both static and dynamic languages) parsing string contents to extract bits of data. I've run into errors around the timing of async operations. I've run into null pointer exceptions (though much less frequently in dynamic languages; maybe once the Option type is in wider use I can rely on that too, but its issue is that it poisons the AST). I've run into logic errors where I thought the system should be in state C but instead it transitioned to state E. The bugs that take the longest to diagnose and fix come down to not understanding the problem well enough and having coded an incorrect routine, so I have to figure out whether it's a logic error, an off-by-one error, or simply a case of using the wrong function because it had a suggestive name and I didn't read the doc all the way through. Rich made an interesting point in one of his talks: every bug has passed both your type checker (if you have one) and your test suite. My point is that I'm not convinced static typing, as I'm given it in the languages I work in, is worth it for any reason beyond performance, because I don't see other improvements like making me more productive or having me write fewer bugs. I'm not opposed to types and schemas themselves in general, just static typing. I'd rather see pushes toward full dependent typing and systems like TLA+ (which has seen more industry usage than Haskell) and Coq than things like core.typed in Clojure.
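To make the division-by-zero point concrete, here's a minimal sketch (the function name is invented): "int" annotations satisfy a static checker like mypy, yet say nothing about the divisor being nonzero, so the error only surfaces at runtime -- exactly the class of bug that would need something like a dependent type to rule out.

```python
def ratio(a: int, b: int) -> int:
    # Fully annotated and type-correct as far as "int" declarations
    # can say -- nothing here constrains b to be nonzero.
    return a // b

print(ratio(10, 2))  # 5

# Passes the type checker, blows up at runtime:
try:
    ratio(10, 0)
except ZeroDivisionError as e:
    print("ZeroDivisionError:", e)
```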
The last point I'd raise is that I agree with you that developer efficiency is more important than code efficiency. But this has been one of the mainstay arguments of the pro-dynamic-typing camp for decades, with static typing advocates conceding that yes, the developer will be more productive in a dynamic language, but the performance is terrible for anything serious. If you want to argue that static typing was actually more efficient all along from a developer's perspective, you have a lot of work to do. I'm not particularly interested in having that argument (at least today), but the point still needs to be raised.