It's true that in a lot of cases, deeply understanding discrete numerical algorithms is much easier if you can analyze the continuous versions, which of course cannot be executed directly. But you can get really far with just the discrete versions, and you can understand useful things about the continuous versions without knowing what a derivative or an integral is.
And I don't just mean that you can use Unity or Pure Data to wire together pre-existing algorithms and get interesting results, although that's true too. You don't even need to understand any calculus to write a ray-tracer from scratch like http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra..., which is four pages of C.
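For concreteness, here's a tiny sketch (not code from that raytracer, just an illustration) of the kind of arithmetic a raytracer needs: intersecting a ray with a sphere takes dot products, the quadratic formula, and one square root, and no calculus at all.

    /* Illustrative sketch: ray-sphere intersection with nothing but
       arithmetic and one sqrt. dir is assumed to be a unit vector. */
    #include <math.h>

    typedef struct { double x, y, z; } vec;

    static double dot(vec a, vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec sub(vec a, vec b) { vec r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }

    /* Return the smallest positive t such that origin + t*dir hits the
       sphere, or -1 if the ray misses. */
    double hit_sphere(vec origin, vec dir, vec center, double radius)
    {
      vec oc = sub(origin, center);
      double b = 2 * dot(oc, dir);
      double c = dot(oc, oc) - radius*radius;
      double disc = b*b - 4*c;          /* discriminant of the quadratic in t */
      if (disc < 0) return -1;          /* no real roots: the ray misses */
      double t = (-b - sqrt(disc)) / 2; /* nearer of the two intersections */
      return t > 0 ? t : -1;
    }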
You could maybe argue that it's using square roots, and calculating square roots efficiently requires using Newton's method or something more sophisticated. But Heron of Alexandria described "Newton's" method 2000 years ago, although he hadn't generalized it to finding zeroes of arbitrary analytic functions, perhaps because he didn't have concepts of zero or functions.
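Heron's iteration is short enough to write out in full; something like this (an illustrative sketch, not anyone's production sqrt) converges to the square root using nothing but arithmetic:

    /* Heron's square-root iteration: repeatedly average a guess with
       n divided by the guess. It's the square-root special case of what
       we now call Newton's method, but stating and running it needs
       nothing beyond arithmetic. Assumes n >= 0. */
    double heron_sqrt(double n)
    {
      double x = n > 1 ? n : 1;        /* any positive starting guess works */
      for (int i = 0; i < 60; i++) {   /* fixed iteration count; converges fast */
        x = (x + n / x) / 2;
      }
      return x;
    }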
You could argue that it's using the pow() function, but it's using it to take the 64th power of a dot product in order to get specular reflections. People were taking integer powers of things quite a long time ago.
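And an integer power like the 64th is just repeated squaring, something along these lines:

    /* x -> x^2 -> x^4 -> ... -> x^64: six squarings, no logs or exps. */
    double pow64(double x)
    {
      for (int i = 0; i < 6; i++)
        x = x * x;
      return x;
    }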
Even using computers for really analytic things, like finding zeroes of arbitrary analytic functions, can be done with just a minimal, even intuitive, notion of continuity.
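Bisection is the obvious example: it will find a zero of any function you can evaluate, given only the intuition that a curve going from negative to positive must cross zero somewhere in between. A sketch:

    /* Bisection: find a zero of f between lo and hi, assuming f(lo) and
       f(hi) have opposite signs. Keep the half-interval where the sign
       still changes. */
    double bisect(double (*f)(double), double lo, double hi)
    {
      for (int i = 0; i < 100; i++) {  /* 100 halvings is plenty for a double */
        double mid = (lo + hi) / 2;
        if ((f(lo) < 0) == (f(mid) < 0)) lo = mid;
        else                             hi = mid;
      }
      return (lo + hi) / 2;
    }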
Alan Kay's favorite demo of using computers to build human-comprehensible models of things is to take a video of a falling ball and then make a discrete-time model of the ball's position. A continuous-time model really does require calculus, and famously this is one of the things calculus was invented for; a discrete-time model requires the finite difference operator (and maybe its sort-of inverse, the prefix sum). Mathematics for the Million starts out with finite difference operators in its first chapter or two. You don't even need to know how to multiply and divide to compute finite differences, although a little algebra will get you a lot farther with them. A deep understanding of the umbral calculus may be inspirational and broadening in this context, and may even help you debug your programs, but you can get by without it.
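Here's roughly what that discrete-time model looks like written out (the particular numbers are made up for illustration): a constant second difference, gravity, gets summed into the velocity, and the velocity gets summed into the position, which is just two running prefix sums.

    /* Discrete-time falling ball: two running sums, no derivatives. */
    #include <stdio.h>

    int main(void)
    {
      double position = 100, velocity = 0, gravity = -9.8;
      double dt = 1.0 / 30;               /* one video frame at 30 fps */
      for (int frame = 0; frame < 30; frame++) {
        velocity += gravity * dt;         /* sum the second difference */
        position += velocity * dt;        /* sum the first difference */
        printf("frame %d: position %.2f\n", frame, position);
      }
      return 0;
    }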
I agree that calculus is really powerful in extending the abilities of computers to model things, but I think you're overstating how fundamental it is.