https://www.dni.gov/files/NCSC/documents/Regulations/Technic...
Most people I deal with seem to think it means not having an internet connection, which is true, but that alone simply isn't enough.
An internet air-gap is probably enough for a vast majority of use cases.
There's lots of talk about engineering here along the lines of "good engineering is knowing how to make a bridge barely stand up", but in security, especially IT sec, there's often little discussion about real risk and impact, and about striking a reasonable balance.
Places I've worked considered their product and information high security whilst embargoed (mostly financial), and the IT security at these companies matched that posture. But everyone drank together, shared everything over drinks, and had terrible personal security.
I'm not a security skeptic at all, I just think that the simple stuff goes a long way and that it's somewhat unhelpful to compare regular IT use to CIA style IT use.
Wouldn't that be easy to filter out because it's spectrally flat?
Why don't they use millions of pre-recorded bogus and bait radio signals playing on all the commonly used frequencies instead?
It's not Wi-Fi. The article title is misleading clickbait. They are just using a simple script to exercise the RAM in a way that produces more or less radio noise, and then using a debug feature in an off the shelf Wi-Fi chipset to measure the channel noise and transfer data that way (at an extremely low rate of a few bits per second). At no point are Wi-Fi signals involved. Both sides need to collude to make this work. It only takes a few hours to put together this kind of demo.
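The transmitter half of that kind of demo really is only a few hours of work. Here's a minimal sketch of the idea, a simple on-off-keyed encoder where a 1 is a burst of memory traffic and a 0 is idle; the symbol period, preamble, and buffer size are my own guesses for illustration, not values from the paper:

```python
import time

# Assumed parameters, not taken from the paper.
SYMBOL_SECONDS = 0.5   # a few bits per second, as described
BUF_BYTES = 1 << 25    # 32 MiB, large enough to defeat the CPU caches

def frame(payload: bytes) -> list[int]:
    """Prepend a fixed preamble so the receiver can find symbol boundaries."""
    preamble = [1, 0, 1, 0, 1, 1, 0, 0]
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    return preamble + bits

def transmit(bits, symbol_seconds=SYMBOL_SECONDS):
    """OOK: a 1 is a burst of bus traffic (more RF noise), a 0 is idle."""
    buf = bytearray(BUF_BYTES)
    for bit in bits:
        deadline = time.monotonic() + symbol_seconds
        if bit:
            while time.monotonic() < deadline:
                buf[:] = buf[::-1]  # sweep the whole buffer through the memory bus
        else:
            time.sleep(symbol_seconds)
```

The receiver side is where the actual work is: sampling the chipset's noise readings, finding the preamble, and recovering symbol timing.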
He did the same thing with GSM a few years ago: the exact same concept with 800 MHz RAM. But he's so bad at it that, even though he used an open-source, fully documented GSM stack (osmocombb) as the base for his receiver, he couldn't get more than a few bits per second out of it, even though with access to the receiver's DSP hardware, which he had, you could obviously push a lot more data through.
This guy basically runs a paper mill where every few months he comes up with a new side channel, builds the minimum viable PoC, and produces no research of value. He makes no attempt to measure theoretical maximum channel bandwidths, he doesn't optimize the data coding, nothing. He just picks a new random idea, like using screen brightness or network activity LEDs to encode information, and cranks out a paper. And he's really good at clickbaiting his way through news cycles, which I'm sure keeps the funding going.
You can implement a PoC at the same level as some of his papers with a one-line shell script that blinks the camera LED on a machine to transfer a file bit by bit:
https://twitter.com/marcan42/status/1339156243517095936?s=19
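For a sense of how little code that takes, here's roughly the same trick spelled out in Python rather than a one-liner. The sysfs path is a placeholder (the real LED name varies per machine, and writing it usually needs root):

```python
import time
from pathlib import Path

# Hypothetical path; check /sys/class/leds/ on your machine for the real name.
LED = Path("/sys/class/leds/white:camera/brightness")

def to_bits(data: bytes) -> list[int]:
    """MSB-first bit expansion of a byte string."""
    return [(b >> i) & 1 for b in data for i in range(7, -1, -1)]

def blink_file(data: bytes, period: float = 0.2) -> None:
    """Blink the LED once per bit; a camera pointed at it recovers the file."""
    for bit in to_bits(data):
        LED.write_text("1" if bit else "0")
        time.sleep(period)
    LED.write_text("0")  # leave the LED off
```

That's the entire "covert channel": no channel-capacity analysis, no coding, just a loop.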
The IO clock runs at 1.2GHz, and the data lines run at double that.
As an overview of the issues designing for DDR4, I liked this technical note: https://media-www.micron.com/-/media/client/global/documents...
“Prior to designing the card, it is useful to decide how much of the timing budget to allocate to routing mismatch. This can be determined by thinking in terms of time or as a percentage of the clock period. For example, 1% (±0.5%) at 800 MHz clock is 6.25ps (1250ps/200). Typical flight times for FR4 PCB are near 6.5 ps/mm. So matching to ±1mm (±0.040 inch) allocates 1% of the clock period to route matching.”
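The arithmetic in that note is easy to sanity-check. A small helper that turns a timing-budget fraction into a +/- length-matching tolerance (the 6.5 ps/mm FR4 figure is the one quoted above):

```python
def route_match_budget(clock_hz: float, budget_fraction: float = 0.01,
                       ps_per_mm: float = 6.5):
    """Convert a fraction of the clock period into a +/- routing tolerance."""
    period_ps = 1e12 / clock_hz
    budget_ps = period_ps * budget_fraction      # total window (both sides)
    tolerance_mm = (budget_ps / 2) / ps_per_mm   # +/- half the window
    return budget_ps, tolerance_mm

budget, tol = route_match_budget(800e6)
# 800 MHz -> 1250 ps period; 1% of that is 12.5 ps, i.e. +/- 6.25 ps,
# which at ~6.5 ps/mm on FR4 works out to roughly +/- 1 mm of trace length.
```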
I bet that I can come up with a new one off the top of my head...alright, how about malware that imperceptibly dims/brightens the screen, which could be interpreted as a 1-bit symbol and picked up even when the screen is facing away from the receiver (by observing the reflection off of a nearby surface)?
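To show how cheap these schemes are to invent, here's a sketch of that brightness idea. The Manchester coding is my own addition, not anything from a paper: encoding each bit as a dim-to-bright or bright-to-dim transition lets a receiver watching a reflection track the signal even as ambient light drifts.

```python
def manchester(bits: list[int]) -> list[int]:
    """Each bit becomes a transition: 1 -> low,high; 0 -> high,low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def levels(half_symbols: list[int], base: float = 0.50, depth: float = 0.02):
    """Map half-symbols to brightness levels a human won't notice."""
    return [base + depth if h else base - depth for h in half_symbols]
```

Pick a symbol rate, drive the backlight with `levels(manchester(bits))`, and the "paper" writes itself.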
See also https://news.ycombinator.com/item?id=28394826. These aren't "attacks" - these are methods of data exfiltration between two compromised devices. There are attacks that e.g. steal private RSA keys by capturing the EM radiation during cryptographic operations, but this is not that.
Also, the man-hours required to set up a working receiver outside of detectable range make this too expensive to use against anything but an extremely valuable target. There are of course other possible ways of pulling it off: if a nearby mobile device is compromised, it could be used to listen to and repeat the weak RF signal coming from the memory, but that adds another level of complexity, which keeps this from being an efficient technique.
Also of note: there would be no way for the listener to reliably communicate back to the compromised air-gapped device to update its code if there were a problem, so the communication would be similar to one-way UDP.
This is why classified information is typically handled in a SCIF (Sensitive Compartmented Information Facility, "skiff").
It's an implementation of defense in depth: the idea that an adversary must breach several layers of physical and digital security to compromise your secrets.
These attacks are of little use at the National Defense level because they would require installing e.g. a receiving antenna within 6 feet of a compromised machine. In other words, to leverage this attack you'd have to achieve a total collapse of your adversary's classified processing security posture.
Well, if you can effect that, then you don't need this attack.
It was impressive then, and it's still impressive.
I'm actually curious if something like this is behind the logic of why a minimum 6-foot gap is required between classified and unclassified workstations in the same building. But actual SCIFs don't allow radio waves through the walls and don't allow any sort of radio-enabled devices inside that might read this signal and send it back home. You definitely can't bring IoT devices anywhere remotely close to a high-security environment.
I also discovered that one of my ham radios was leaking RF when my computer's keyboard started randomly registering keypresses, including the Windows key in combination with other keys.
The stock RTL device only goes down to about 30 MHz and up to about 2 GHz, so any older/low-power computers will spray hash all over that range. Spread-spectrum clocking on the FSB or whatever actually reduces the noise floor: the same noise power is distributed across a wider bandwidth, necessarily reducing the peak power at any one frequency.
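The noise-floor point is just conservation of power: spread a fixed-power spur over 100x the bandwidth and its peak spectral density drops by 20 dB. A quick check with illustrative numbers (the -30 dBm spur and bandwidths are made up, not measured):

```python
import math

def psd_dbm_per_hz(total_power_dbm: float, bandwidth_hz: float) -> float:
    """Spectral density of a fixed-power emission spread over a bandwidth."""
    return total_power_dbm - 10 * math.log10(bandwidth_hz)

# A fixed -30 dBm clock spur, smeared from a 10 kHz line out to 1 MHz
# by spread-spectrum clocking:
narrow = psd_dbm_per_hz(-30, 10e3)  # -70 dBm/Hz
wide   = psd_dbm_per_hz(-30, 1e6)   # -90 dBm/Hz: 20 dB lower peak
```

The total radiated power is unchanged; it's only the per-frequency peak that drops, which is exactly what EMC compliance tests measure.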
Hardware wallets are essentially a kind of highly specialized air-gapped computer, so I wouldn't be surprised if a similar exploit could be used against larger air-gapped systems.
edit: yep, power-draw exploits are definitely a thing: https://www.zdnet.com/article/how-safe-is-your-air-gapped-pc...