As long as the lower blur layers are not fully occluded by opaque content, then yes - they all need to be evaluated, and sequentially, since each blur depends on the fully resolved content beneath it. The same is true of transparency without blur, for that matter, but then you're "just" blending.
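To make the sequential dependency concrete, here's a minimal sketch of bottom-to-top compositing where a layer can blur its backdrop. Everything here (the `composite` function, the naive box blur, the layer tuples) is illustrative, not any real compositor API; real compositors do separable Gaussian passes on the GPU.

```python
import numpy as np

def box_blur(img, radius=1):
    # Naive wrapping box blur: average each pixel with its neighbours.
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

def composite(layers):
    """Bottom-to-top composite. Each layer is (rgba_image, blurs_backdrop)."""
    h, w = layers[0][0].shape[:2]
    acc = np.zeros((h, w, 3))
    for rgba, blurs_backdrop in layers:
        if blurs_backdrop:
            # The backdrop must be fully resolved before it can be blurred,
            # which is why stacked blur layers serialize.
            acc = box_blur(acc)
        alpha = rgba[..., 3:4]
        acc = rgba[..., :3] * alpha + acc * (1 - alpha)
    return acc
```

Note that the blur reads the accumulated result so far, so you can't evaluate two blur layers in parallel the way you could with plain alpha blending of independent layers.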
Note that there are some differences between the display server blurring general output content on behalf of an app that isn't allowed to see the result, and an app that is just blurring its own otherwise opaque content - but it's costly either way.
(There isn't really anything like on-screen vs. off-screen, just buffers you render to and consume. Updating window content is a matter of submitting a new buffer to show, updating screen content is a matter of giving the display device a new buffer to show. For mostly hysterical raisins, these APIs tend to still have platform abstractions for window/surface management, but underneath these are just mini-toolkits that manage the buffers and hand them off for you.)
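The "it's all just buffers" point can be sketched like this. The `Consumer` class and method names below are made up for illustration; the idea is only that a compositor consuming a window's buffers and a display engine scanning out a screen look structurally the same.

```python
class Consumer:
    """Anything that shows buffers: a compositor displaying a window,
    or the display engine scanning out a screen."""
    def __init__(self):
        self.current = None

    def submit(self, buf):
        # "Updating content" is just handing over a new buffer to show.
        self.current = buf

# Hypothetical setup: the compositor consumes a window's buffers,
# the display hardware consumes the compositor's output buffer.
window_consumer = Consumer()
screen_consumer = Consumer()

window_consumer.submit(b"window pixels, frame 2")
screen_consumer.submit(b"composited screen frame")
```

The platform "mini-toolkits" mentioned above are essentially managing these handoffs for you, plus allocation and synchronization.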