I've been watching compressed sensing (CS) from the sidelines for the past decade, while wearing experimental-cosmology, synthetic-aperture RADAR, and instrumentation/FPGA hats. (Not electrodynamics or optics, per your question, but nearby.) Here's my perspective.
If you're doing compressed sensing correctly, it transforms the entire instrument from electrically complex and algorithmically simple to algorithmically complex and electrically simple. Unfortunately, CS isn't something you can bolt onto a traditional architecture and switch off for de-risking if it doesn't pan out. From the perspective of a team building a tool to accomplish a task, it's an all-or-nothing gamble. Consequently, the challenges to aggressive adoption of CS are both technical (can it work?) and programmatic (is it feasible given human, political, and financial constraints?).
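To make the "algorithmically complex, electrically simple" trade concrete, here's a toy sketch (my own illustration, not from any particular instrument): the hardware's only job is to take a small number of incoherent linear measurements y = A x of a sparse scene, and all the work of recovering x happens in software. I use Orthogonal Matching Pursuit as the recovery step purely because it fits in a few lines of NumPy; a real system would more likely use an L1 solver.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coeffs = np.array([])
    for _ in range(k):
        # pick the column most correlated with the unexplained part of y
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected columns jointly by least squares
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(42)
n, m, k = 128, 64, 2                          # scene size, measurements, sparsity
x = np.zeros(n)
x[[10, 90]] = [3.0, -2.0]                     # a 2-sparse "scene"
A = rng.standard_normal((m, n)) / np.sqrt(m)  # incoherent random sensing matrix
y = A @ x                                     # the hardware: m < n cheap linear measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))              # near zero when recovery succeeds
```

The point of the sketch: the measurement stage is just a matrix multiply that analog hardware can implement cheaply (a coded aperture, a random demodulator), while everything difficult lives in the reconstruction code, and that code's failure modes are what the team has to stake the instrument on.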
In the fields I've been exposed to, state-of-the-art instrumentation is complex enough that domain experts spend their entire careers understanding the quirks of a few established instrument topologies. Where CS is applicable, it would take a big leap of faith from an entire team to build an instrument around it. Before that leap of faith is even possible, the team would have to be conversant in CS research and in the practical implementation of CS algorithms. And after the leap, a successful CS project requires secondary leaps of faith from funding agencies to get these instruments built. These barriers are highest for exactly the kind of complex, expensive projects where CS is supposed to shine.
As an aside, the latest issue of IEEE Signal Processing Magazine (https://ieeexplore.ieee.org/document/8653526) has an interesting article on hardware architectures for compressed sensing. As CS matures, and as CS researchers transition from pure-CS research to applied-CS techniques, the use of CS in physics will probably grow.