So when the data is visualised, the perspective in the SAR image is actually offset ~90° from the imaging direction. This is done because the collected data only contains distance, rather than distance and angle (which you would get with LiDAR).
So the places where buildings look transparent are caused by the visualisation perspective being different from the imaging perspective. In an image that appears to have been taken with a camera south of the target, the scene was actually imaged from the north (this is a simplification). So you will always get overlapping data, because the imaging and apparent visualisation perspectives differ. And the reason the perspectives can't be aligned is that there simply isn't enough data to do it: no return or transmission angle is collected alongside the distance data.
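To see why distance-only measurements cause this overlap, here's a rough sketch (all the geometry numbers are made-up illustrative values, not from any real mission): a rooftop and a ground point at quite different positions can sit at almost exactly the same slant range from the satellite, so the radar can't tell them apart by distance alone.

```python
import math

# Hypothetical side-looking radar at 500 km altitude, looking out
# ~300 km in ground range. All values are illustrative assumptions.
sat = (0.0, 500_000.0)  # (ground range, height) of the radar, in metres

def slant_range(point):
    """Euclidean distance from the radar to a scatterer (x, z)."""
    return math.hypot(point[0] - sat[0], point[1] - sat[1])

roof = (300_000.0, 100.0)    # top of a 100 m building
ground = (299_833.0, 0.0)    # a ground point ~167 m CLOSER to the radar

# These two scatterers are far apart on the ground, yet their slant
# ranges agree to well under a metre -- they land in the same range bin.
# Note the rooftop matches a ground point nearer the sensor: this is
# the "layover" effect, where tall things lean towards the radar.
print(f"roof: {slant_range(roof):.1f} m, ground: {slant_range(ground):.1f} m")
```

The punchline is that recovering which of the two scatterers produced a given echo would need an elevation angle, and that angle is exactly what isn't recorded.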
It’s important to note that I say “apparent” perspective, because the imaging perspective doesn’t actually change. I think the perceived perspective shift comes from our brains recognising shapes and inferring a viewing angle, but that inferred angle is wrong because the RADAR imaging process is completely different from how our eyes work.
For those still confused about this: try thinking about how the shadows are being cast. They’re not from the sun; the only thing emitting radar signals to “illuminate” objects is the satellite itself, which both sends out the RADAR chirps and does the imaging.
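One consequence of the satellite being the light source is that shadow length depends on the chirp's incidence angle rather than on any solar geometry. A minimal sketch, assuming a flat ground and a plane wave arriving at a fixed incidence angle from nadir (the numbers are illustrative, not from real data):

```python
import math

def radar_shadow_length(height_m, incidence_deg):
    """Ground-range extent of the radar shadow cast behind an object
    of the given height, for a wavefront arriving at `incidence_deg`
    measured from nadir (straight down). Simple flat-ground model."""
    return height_m * math.tan(math.radians(incidence_deg))

# A 30 m building imaged at a 40 degree incidence angle casts a
# shadow roughly 25 m long on the far side from the satellite.
print(round(radar_shadow_length(30.0, 40.0), 1))
```

The steeper the look angle (small incidence), the shorter the shadow, which is one reason radar shadows often look "wrong" compared to the sun shadows our eyes expect.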
Could someone tell me if I’ve got this right?