People frequently say this, but never really explain it. As far as I can tell, it translates to "Nobody cares about physical security" - except it's clear that people /do/. Things like Boot Guard are only really relevant to physical attacks. DMA protection in firmware is only really relevant to physical attacks. It's extremely obvious that the industry is attempting to avoid short term physical access to a device being sufficient to compromise it, and research that demonstrates that it's still possible is valuable.
That's a different kind of attack than what people usually mean by "physical access" though. The thing where they drop a bunch of malicious flash drives in the parking lot or put a malicious USB charger in an airport isn't the same thing as the attacker having unsupervised physical access to the machine, and the former is certainly worth defending against even if the latter is hopeless.
> Things like Boot Guard are only really relevant to physical attacks.
One could argue that they are also relevant to purposely locking the device owner into specific operating systems.
As an example of "physical access and you're screwed," one way to compromise a machine is to install a microphone anywhere near the machine and then wait for the user to type their passphrase. It's possible to deduce what keys are being pressed from the sounds they make and the timing, so now the attacker has your passphrase. The same can be done with covert video surveillance.
Another possibility is to measure electromagnetic emissions to much the same effect. Most computer keyboards are not exactly TEMPEST certified and even if they were, someone with physical access could make adverse modifications.
Protecting a machine against unsophisticated attackers is pretty easy, to the point that the likes of Boot Guard are not even required, but protecting a machine against physical access by a sophisticated attacker is pretty hopeless.
An extreme example a pentester imparted to me once was, if someone could spend sufficient time alone with my laptop, they could remove my hard drive and insert it into an identical laptop with a hardware or firmware backdoor preinstalled. We were discussing nation-state adversaries, but the general principle applies.
Another example is attacks on encrypted drives (so-called "evil maid" attacks). If a computer is booted and the drive is decrypted, an attacker with physical access could open the computer, remove the RAM, and download its contents, thereby stealing the encryption key. If the computer is powered down, it's still vulnerable to other attacks; encrypted drives necessarily have cleartext code for accepting the password & decrypting the drive. You could modify this code to log the decryption key, or broadcast it over your device's radios.
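The cold-boot half of this is usually automated: tools scan the captured RAM image for regions that look like key material. Here's a toy sketch of that hunting step, using byte-entropy as the heuristic (real tools such as aeskeyfind look for AES key schedules instead; the dump layout, threshold, and names here are all invented for illustration):

```python
import hashlib
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of a buffer; a 32-byte window maxes out at 5.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def find_key_candidates(image: bytes, window: int = 32, threshold: float = 3.5):
    """Offsets of windows that look random enough to be key material."""
    return [off for off in range(0, len(image) - window + 1, window)
            if shannon_entropy(image[off:off + window]) >= threshold]

# Toy "RAM dump": zeroed pages with 32 bytes of key-like material planted.
dump = bytearray(4096)
key = hashlib.sha256(b"demo").digest()  # deterministic stand-in for a real key
dump[1024:1056] = key
candidates = find_key_candidates(bytes(dump))
print(candidates)  # [1024] -- only the planted key window looks random
```

Mitigations like memory encryption and zeroing RAM on boot exist precisely because this search is mechanical once you have the image.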
There's also the classic Windows Sticky Keys exploit, where you replace the Sticky Keys binary (sethc.exe) with a program that gives you administrator access, reboot the computer, and then activate Sticky Keys from the login screen.
You could install a keystroke logger. You could install a device to record monitor output. You could log network traffic.
I've yet to find a kiosk environment that I couldn't break out of. Once I was able to break out of a scanning kiosk environment, and into a Windows desktop, by turning the quality settings all the way up and crashing the kiosk. That was one of the more difficult examples; most of the time all you need is to find a way to right-click. (I had the proper authority to investigate these kiosks.)
The point is that the list goes on.
It is true, as you say, that there has been progress in implementing mitigations, and that there are people who care deeply about these issues. Counterexamples to the "law" might be SIM cards, TPMs, and other HSMs. These systems are able to provide better guarantees by encapsulating their peripherals and being willing to self-destruct. But that description could fit a cell phone, a tablet, or a laptop, too.
Maybe in the future this "law" won't be so hard and fast.
Keeping attackers away from your computer is certainly the best solution, just as keeping your computer off the network is the simplest answer to avoiding network security issues. But that's not always an option, so we still need to care about it.
> An extreme example a pentester imparted to me once was, if someone could spend sufficient time alone with my laptop, they could remove my hard drive and insert it into an identical laptop with a hardware or firmware backdoor preinstalled.
That'll be detected by any properly implemented remote attestation solution: switching the machine will change the endorsement key, so attestation will fail.
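Here's a toy simulation of why the swap fails, with HMAC standing in for the TPM's asymmetric endorsement/attestation keys (a real verifier enrolls a public key certificate rather than a shared secret; all class and function names here are invented):

```python
import hashlib
import hmac
import os

class ToyTPM:
    """Stand-in for a TPM: a per-chip secret models the endorsement key
    hierarchy (real TPMs use asymmetric keys that never leave the chip)."""
    def __init__(self):
        self._chip_key = os.urandom(32)

    def enrollment_blob(self) -> bytes:
        # In reality the verifier enrolls an EK certificate / public key;
        # this toy shares the secret so HMAC can stand in for a signature.
        return self._chip_key

    def quote(self, pcr_digest: bytes, nonce: bytes) -> bytes:
        # A quote is a signature over the PCR state plus a fresh nonce.
        return hmac.new(self._chip_key, pcr_digest + nonce, hashlib.sha256).digest()

def verify_quote(enrolled: bytes, pcr_digest: bytes, nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(enrolled, pcr_digest + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

original, swapped = ToyTPM(), ToyTPM()
enrolled = original.enrollment_blob()      # done once, at provisioning
pcrs = hashlib.sha256(b"identical boot measurements").digest()
nonce = os.urandom(16)                     # verifier's freshness challenge

ok = verify_quote(enrolled, pcrs, nonce, original.quote(pcrs, nonce))
swapped_ok = verify_quote(enrolled, pcrs, nonce, swapped.quote(pcrs, nonce))
print(ok, swapped_ok)  # True False: same disk, same measurements, wrong chip
```

The backdoored identical laptop can reproduce every measurement, but not the other chip's key, so its quotes don't verify.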
> If a computer is booted and the drive is decrypted, an attacker with physical access could open the computer, remove the RAM, and download its contents, thereby stealing the encryption key.
Removing soldered-on RAM from a motherboard fast enough to maintain the contents is not a straightforward attack. Not theoretically impossible, but you're not going to have a good time of it.
> If the computer is powered down, it's still vulnerable to other attacks; encrypted drives necessarily have cleartext code for accepting the password & decrypting the drive. You could modify this code to log the decryption key, or broadcast it over your device's radios.
Will be detected via remote attestation.
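Measured boot is why: each stage extends a PCR with a hash of the next component before handing off, so tampered pre-boot code changes the value the TPM will sign in its quote. A minimal sketch of the TPM 2.0 extend rule (component names made up):

```python
import hashlib

def pcr_extend(pcr: bytes, component: bytes) -> bytes:
    # TPM 2.0 extend: new_pcr = H(old_pcr || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = bytes(32)  # PCRs start zeroed at power-on
    for blob in components:
        pcr = pcr_extend(pcr, blob)
    return pcr

clean = [b"firmware", b"bootloader", b"kernel"]
evil  = [b"firmware", b"bootloader with keylogger", b"kernel"]

print(measure_boot(clean) == measure_boot(evil))  # False: any change to any
# stage ripples through the whole chain, so the quoted PCR won't match the
# value the verifier expects.
```

Because extends are one-way and order-sensitive, the modified bootloader can't fake its way back to the clean PCR value before the quote is taken.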
> There's also the classic Windows "sticky key" exploit, where you replace the sticky key binary with a program that gives you administrator access, reboot the computer, and then activate sticky keys.
How do you do that with an encrypted drive? Look, yes, it's not easy to guard against physical attacks. But some organisations that genuinely do have to deal with state level attackers care about physical security and care about mitigating it, and we have moved well beyond the "physical access means you've lost" state of affairs. Finding new cases that allow attackers with physical access to subvert our understanding of the security boundaries of a machine is of significant interest.
Does that make having a layer of stickers on one's laptop also a layer of defense?
If you have that kind of access it doesn't really matter, though: you can copy the drive, add a device that monitors the keyboard so you capture the key when the user enters it, and then just clear or disable the TPM chip.
All of this is a bit silly though, because physical intervention implies a level of commitment that lends itself to more reliable approaches: https://xkcd.com/538/
My favorite "security interface failure" is the fact that OSX apps frequently demand a user login and password in a popup window. E.g., Slack does this. It would be so easy for an app to render this popup itself (even on a webpage!) and I would totally type my password into it. I feel like the only answer to this is to have a sacred corner of the screen that only the OS is allowed to write to.
If you follow defense in depth as a security architecture philosophy, which the industry does, then you still implement defenses against physical attacks, but you recognize that those defenses are either (1) defenses against opportunists, or (2) last ditch defenses.
But many do, and it's a difficult problem that impacts the efficiency of the business. I've had to deal with it often, and at the end of the day you need to keep important data off of mobile or other client devices, and have controlled workarounds for exceptions.
Some of the tougher compliance standards recognize this and essentially prohibit many types of remote access without the entity owning the remote computer.