Instead I would assume, in order
- my config broke it
- OS update broke it
- the bios doesn’t properly handle any case that isn’t “preinstalled OEM windows”
I had a laptop that, as far as I could tell, could only boot into Windows' default bootmgr.efi. I could turn off Secure Boot and tamper with that EFI binary to boot Linux, but the BIOS refused to acknowledge any other boot loader. It wouldn't surprise me in the slightest if Secure Boot isn't properly handled either. I've had too many issues with cheap computers having janky BIOSes.
A natural question is whether Secure Boot is the right place to protect against the type of attack mentioned in the post. We've already invested a lot of effort in fixing kernel privilege escalations, and any program able to install a BIOS rootkit can already read all data and modify any program. So what justifies the extra complexity of Secure Boot, including all the extra design needed to make it meaningful, such as OSes robust to tampering even by code with kernel privileges? In other words, why invest so much in Secure Boot when you could harden the kernel to prevent BIOS tampering in the first place?
https://pages.nist.gov/800-63-3/sp800-63b.html
So I'm basically agreeing with you, that a lot of people "in security" are just cargo culting.
Still, as a consumer I reject it for personal use: boot malware is rare, other forms of attack have been vastly more effective, and I don't have an evil maid.
I just hope we don't get to the ridiculous situation where my shitty bank panics because I rooted my phone and wants to extend that behavior to PCs. "Trusted computing" is a failure in my opinion, and "security" on mobile devices is an example where it significantly impacts the usefulness of the devices themselves. Of course, this might be driven more by ambitions to lock down phones than by real security, but still.
Secure boot might be useful for devices you administer remotely. But Secure Boot validation doesn't mean anything to me: the system could be infected without Secure Boot noticing anything. It mostly just gets in the way of OS installations.
Am I correct that Secure Boot exists purely to prevent this attack vector: malware gets root on the OS, the hardware allows updating firmware from the now-compromised OS, and Secure Boot means you only have to wipe the hard drive, rather than the firmware, to eliminate the malware?
It seems like it would be a lot simpler and more reliable to add a button to motherboards that resets the firmware to the factory version (on memory that can't be written by a malicious OS).
If the process changes so the hardware only loads signed firmware, which only loads a signed boot loader, which only loads a signed kernel, etc. that avenue of attack is closed. It also makes it possible to trust a used computer.
The problem is that other than Apple nobody has really been committed to doing it well - it’s begrudging lowest-bidder compliance and clearly not something many vendors are taking pride in.
There are at least two solutions:
1. Deploy your own Secure Boot keys and protect them with a firmware password or whatever mechanism your particular system has to lock down Secure Boot settings.
2. Use TPM-based security so that even knowing the passphrase doesn’t unlock FDE unless the PCRs are correct.
#1 is a bit of a pain. #2 is a huge pain because getting PCR rules right is somewhere between miserable and impossible, especially if you don’t want to accidentally lock yourself out when you update firmware or your OS image.
Of course, people break PCR-based security on a somewhat regular basis, so maybe you want #1 and #2.
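The PCR mechanism behind #2 can be sketched in a few lines. This is a toy model (the stage names and zeroed starting register are illustrative, not the actual TPM command interface):

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: a PCR can only be folded forward, never written
    directly, so the final value commits to the entire boot sequence."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measured boot: each stage is hashed into the register before it runs.
pcr = bytes(32)  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend_pcr(pcr, stage)

# An FDE key sealed to this PCR value only unseals when every stage matches;
# a single tampered stage yields an unrelated final value.
evil_pcr = bytes(32)
for stage in (b"firmware", b"tampered bootloader", b"kernel"):
    evil_pcr = extend_pcr(evil_pcr, stage)

assert evil_pcr != pcr
```

This is also why #2 is such a pain in practice: a legitimate firmware or OS update changes a measurement just as thoroughly as an attack does, so the sealed key stops unsealing until the PCR policy is updated.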
I believe Chromebooks also do this fairly well.
Thankfully, all this complexity is not the only thing that lets you trust a used computer. There are other options, like not letting any modifiable software (that is, software not stored in non-replaceable ROM) run before control is handed off to a bootloader loaded from external media.
There's still the simple attack vector of installing a hardware keylogger on the keyboard wires.
the signing method only buys time before data is inevitably "breached" by a threat actor - it's the same time-buying as with any and all encryption. the system can get too complex, and the underlying problems of humans will always exist (and are amplified by more points of failure: accidents, data breaches, exploits, etc.). the system needs to be immutable, but also mutable at the same time (for updates, etc.) - and that's not exactly something easy to accomplish.
and with apple.. they try, yes, but it is forever a walled garden. we've already seen their secure enclave bootloader shenanigans get exploited on phones - and it was not fun for the people whose phones were compromised. apple suffers from us humans, too (we will never be perfect, nor will our software).
What Secure Boot is designed to prevent is malicious changes to the OS bootloader (a conventional rootkit), which is usually shimx64.efi or grubx64.efi on Linux/dual-boot machines, or bootmgfw.efi on Windows. Secure Boot checks the signature of .efi files before they're allowed to run during boot, ensuring they were signed by one of the trusted keys. And unless you've made changes to your Secure Boot config, that means Microsoft and/or the hardware vendor.
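A toy model of the allow/deny check described above. Real db/dbx entries hold X.509 certificates as well as SHA-256 hashes of EFI binaries; this sketch uses hashes only, and all the file contents are invented:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy allow/deny databases; entries invented for illustration.
db  = {sha256(b"grubx64.efi v2.12 contents")}    # trusted images
dbx = {sha256(b"grubx64.efi vulnerable build")}  # revoked images

def may_boot(image: bytes) -> bool:
    h = sha256(image)
    return h not in dbx and h in db  # deny list (dbx) wins over allow list (db)

assert may_boot(b"grubx64.efi v2.12 contents")
assert not may_boot(b"grubx64.efi vulnerable build")
assert not may_boot(b"rootkit.efi")  # unknown image is refused outright
```

The dbx side is what vendors failed to use in the PKfail case: revocation only works if someone actually ships the revocation entries.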
There are systems out there that do this, and having something like Secure Boot is essential to their design (as is measured boot, which is the main mechanism TPMs leverage).
However, this solution is utterly unworkable for the personal computer market. Instead, we have a bunch of general-purpose kernels signed to run on any computer, but which are willing to run any userspace you throw at them.
Obviously you need some read+write storage elsewhere on the same computer, but you could reliably freeze large chunks of stuff in a way that would be impervious to viruses or hackers.
Edit: A quick search reveals that, of course, you can still buy them today. I have not felt a need for one in ages.
Or if you want to make it simpler, any time you're reinstalling the OS.
1. Allowing unattended/automatic BIOS updates from a running OS at all
2. Being so paranoid about attacks by a spy with physical access to the computer that the keys cannot be replaced or revoked
I'm not a security researcher, but to just shoot the breeze a bit, imagine:
1. The OS can only enqueue data for a proposed BIOS update, actually applying it requires probable-human intervention. For example, reboot into the currently-trusted BIOS, and wait for the user to type some random text shown on the screen to confirm. That loop prevents auto-typing by a malicious USB stick pretending to be a keyboard, etc.
2. Allow physical access to change crypto keys etc., but instead focus on making it easy to audit and detect when it has happened. For example, if you are worried Russian agents will intercept a laptop being repaired and deep-rootkit it, press a motherboard button and record the values from a little LED display, values that are guaranteed to change if someone alters the key set and/or puts on a new signed BIOS. If you're worried they'll simply replace the chipwork itself, then you'd need a way to issue a challenge and see a signed, verifiable response.
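The challenge/response idea in point 2 can be sketched with a symmetric MAC. The device key and firmware-state strings here are invented, and real attestation hardware would use an asymmetric key in a TPM or secure element rather than a shared secret:

```python
import hashlib
import hmac
import os

# DEVICE_KEY stands in for a secret fused into the board at manufacture
# (hypothetical; it never leaves the chip in real hardware).
DEVICE_KEY = os.urandom(32)

def attest(challenge: bytes, firmware_state: bytes) -> bytes:
    """Board side: answer a challenge with a MAC over the current key set
    and firmware, so the response is verifiable and tied to this state."""
    return hmac.new(DEVICE_KEY, challenge + firmware_state, hashlib.sha256).digest()

# Verifier side: a fresh random challenge makes replaying an old answer useless.
challenge = os.urandom(16)
expected = attest(challenge, b"bios-1.07|PK=factory")
tampered = attest(challenge, b"bios-1.07|PK=attacker")

assert not hmac.compare_digest(expected, tampered)  # altered key set changes the answer
```

The fresh random challenge is the important part: a rootkitted board that merely recorded yesterday's "good" response cannot answer today's challenge.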
The problem here is in trusting, nay expecting, your average motherboard maker to either know anything about key management or give a shit about key management.
Shooting the breeze as well...
Have some (non-modifiable, non-updatable) portion of the firmware that, on boot, calculates a checksum or hash of the important bits at the beginning of the chain of trust (efi vars, bios).
Then have it generate some sort of visualization of the hash (thinking something like gravatar/robohash) and draw it in the corner of the screen. Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.
That way every time you boot your computer you're gonna see, say, a smiling blue kitten with a red bow on its head. Until someone changes your platform key / key exchanges or installs a modified bios, and now suddenly you turn the computer on and it's a pink kitten with gray polka dots.
That way you don't have to actively _try_ and check the validity. It'd be very obvious and noticeable when something was different.
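A sketch of that mascot idea, assuming invented feature tables; the only requirement is that the mapping from the hash of the measured platform state to the picture is deterministic:

```python
import hashlib

# Invented feature tables; any fixed tables work as long as the mapping
# from hash bytes to features is deterministic.
COLORS      = ["blue", "pink", "green", "orange"]
ANIMALS     = ["kitten", "puppy", "owl", "fox"]
ACCESSORIES = ["red bow", "gray polka dots", "top hat", "scarf"]

def boot_mascot(trust_chain_state: bytes) -> str:
    """Map the hash of the measured boot state to a memorable picture."""
    h = hashlib.sha256(trust_chain_state).digest()
    return (f"{COLORS[h[0] % len(COLORS)]} "
            f"{ANIMALS[h[1] % len(ANIMALS)]} with a "
            f"{ACCESSORIES[h[2] % len(ACCESSORIES)]}")

# Same platform state -> same mascot on every boot; a changed PK or
# modified BIOS hashes to a different state and, almost always, a
# visibly different mascot.
state = b"PK|KEK|db|dbx|bios-1.07"
assert boot_mascot(state) == boot_mascot(state)
```

One caveat: with only 64 combinations in this toy table, a tampered state still has roughly a 1-in-64 chance of drawing the same mascot, so a real version would want many more distinguishable features.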
Perhaps the kitten's bow is pink, instead of red, etc. Even a little bit of wiggle room makes the attacker's job a lot easier, much like the difference between creating something that resolves to a known SHA256 hash versus something which matches a majority but not all of the bits.
A simpler approach would be for the small piece of trusted code to discard and replace the hash/representation with a completely new, sufficiently different one whenever anything changes.
> Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.
It might need to prevent drawing anything on the entire screen. Otherwise a program might be able to modify the resolution, refresh rate, etc, to try to hide the picture or to display a different one.
0. All of the BIOS code and other hardware code should be FOSS. This should be printed in the manual as well. A simple assembly language might be preferable, and if the hex codes are also printed next to it, they can also be entered manually if necessary.
1. The operating system cannot update the BIOS at all. Doing so requires setting a physical switch inside the computer which disables the write protection of the BIOS memory and also prevents the operating system from starting automatically.
2. Require keyboards, etc to be connected to dedicated ports, not to arbitrary USB ports. (This is possible with USB but is a bit difficult; PS/2 would be better.)
3. You can program it manually (whether or not the BIOS memory is write protected) without starting the operating system (this makes the computer useful even if no operating system is installed); perhaps with an implementation of Forth. When BIOS memory is write enabled, then such a program may be used to copy data from the hard drive to the BIOS memory.
4. Like you mention, it should make it easy to audit and detect when keys have been changed. An included display might normally display other stuff (e.g. boot state, temperature measurement, etc), but a switch can be used to display a cryptographic hash. If you always fill all of the memory (even if part of it would not otherwise be used) then it can be difficult to tamper with in the case of an unknown vulnerability.
5. I had seen a suggestion to add glitter and take a picture of it, to detect physical tampering. This can help avoid alterations of the verification mechanisms themselves. If desired, you can have multiple compartments which can be sealed separately, each with its own glitter. If some of these compartments are internal, a transparent case around some of them might help in some ways (as well as to detect other problems with the computer that are not related to security).
However, even the above stuff will need to be done correctly to avoid some problems, since you will have to consider what is being tampered with. (You might also consider the use of power analysis to detect the addition of extra hardware, and the external power can then be isolated (and a surge protector added) to mitigate others attacking your system with power analysis and to sometimes mitigate problems with the power causing the computer to malfunction.)
1. There are some things that may need to be updated from time to time that need to be applied before the OS is loaded - microcode updates being one of these. I would still like a physical write-enable switch.
2. Making a keyboard that is not a real keyboard is easier than ever with things like Arduino and Raspberry Pi, regardless of the interface. There is probably no way to verify physical presence that can't be duplicated remotely. At some point humanity has to get beyond the primitive mentality of "this stuff on a computer monitor/from a speaker looks/sounds just like real stuff, so it is the real stuff" and accept that computers are machines, not in and of themselves a proxy for reality unless specifically treated as one.
3. Funny, the original 1981 PC booted to ROM BASIC if it couldn't boot off of anything, so it was useful without an OS. I really wish UEFI firmware was on a replaceable SD card and the system would literally have no firmware if it was not present. I would pay the 2 cents more it would cost OEMs. With all the capability in modern chipsets I feel like this would be trivial to do.
4. Good idea. I wish computers had a separate display that is attached through some legacy interface like RS-232 and that doesn't go through VGA at all for this purpose, like a cheap LCD screen.
5. The old punched cards were very low density, but had one really nice property: you could physically see the data with nothing more than your eyes. It's funny that a stack of punched cards could potentially be more secure than millions of instructions of code hidden in a NAND or ROM that you cannot see or verify except with another device that you also have to trust and run on a platform you trust. Even then you can't really see the bits on a NAND or ROM without special expensive equipment. It'd be cool if there could be a high-density storage device where the binary contents are somehow physically viewable and discernable without a CPU needed. Something like QR codes but much, much more high density.
This would be so much more advanced than what we have now.
Reverting to an approach proven superior over decades would not be a step backward compared to UEFI.
You really need to once again be able to reflash your motherboard with a clean image and have no possibility of any malware remaining on board afterward, if things are going to be as advanced as they once were.
For decades I thought it was always going to be normal for a quick reflash of the BIOS to give complete confidence that you could then rapidly rebuild a verifiably clean system from scratch, using clean sources, every time.
Progress can surely occur without advancement :/
So today, a decade-plus later, there still isn't a standard way to automatically enroll a Linux distribution's keys during initial install in any of the distributions (AFAIK).
But still, since the attack this protects against is out-of-this-world rare, very few orgs bother to even document it in their main guides, because it gives zero protection and infinite support load.
The distro installer should, if it detects setup mode, automatically ask the user whether they wish to replace all the existing keys and enroll distro-supplied certs, keys, and dbx entries. Except none of the distros have this infrastructure built, outside of their dependence on Microsoft.
And no, none of this is needed if all you want is to self-sign a kernel/etc., because it's possible to install a MOK key into shim. But that isn't the point. The point is that the vast majority of Linux users aren't set up to protect a cert/key chain from an attacker, which is the entire reason for Secure Boot. If your attacker is sophisticated enough, they will steal the signing keys from your machine/org and sign their own updates. Which is why MOK and self-signing are a mistake for ~100% of Linux users.
There's a guide for both approaches here: https://wiki.archlinux.org/title/Unified_Extensible_Firmware.... You'll need to make sure whatever distro you use has the right hooks to sign the boot images after each upgrade (i.e. an apt callback rather than a pacman callback) if you're not using Arch, of course.
Using the sbenroll tool, the process is three commands (generate keys, enroll keys, sign current bootloaders) plus whatever extra BIOS interfacing logic your computer needs on top of normal secure boot stuff like unlocking the BIOS through a password.
Basically, the installers should replace the existing certs and keys with distro-supplied ones, maintained along with global dbx entries by the distro itself, using a distro-supplied KEK etc. whose private keys are stored in a high-security environment not available to most users.
It's really the kind of project the Linux Foundation should be sponsoring, so the infrastructure could be shared across distros.
It's unconscionable to tell users this is here to keep you safe, but that you have no control over it & if something goes wrong well then too bad, at best we might provide an update.
(Also that governments can probably force these root-of-trust companies to sign payloads to circumvent security is also pretty icky to me.)
Of course, if the key used to sign the firmware is compromised, the root of trust still technically does what it is supposed to do - verify signatures - it's just that it becomes irrelevant in terms of security/integrity.
The OP states that the vendors could have revoked the compromised platform key with a firmware update. They just didn't bother.
This time it's AMI. It doesn't get bigger than that.
I suppose you could also break it down and say that the particular idiot who hardwired a test key in an SDK or whatever should have known that both the rest of AMI and everybody at the OEMs would be idiots, and found a way to make it relatively hard for them to stay with that key. But however far you dig, it's idiots all the way down.
This leads to an accumulation of "power", and monopolization of it in systems leads to vulnerability. One point of failure is enough to compromise the entire ecosystem.
Just reminds me that Apple checks every application you run, for "safety reasons" (or rather, it checks app certificates, but that is nearly the same).
This phrase does not sit well with me.
However, in the US, they don't need to hack; they can just ask for data lawfully.
Also, people have played with Microsoft's keys, for sure.
Or is that just a protection against rootkits?
I still fail to understand what Secure Boot is protecting against: if a machine is compromised remotely, does Secure Boot prevent installing a rootkit that's invisible to virus scanners?
Or a free OS.
High time.
1: would need to onshore all work in the bootchain (software and hardware).
2: would put liability on individual engineers and take liability away from the corporate organisation.
3: accountability requires that engineers have authority.
4: in a team environment, blame is going to fall on the weakest members
5: it would completely fuck any individual open-source development.
If you want to come up with better ideas for accountability, then read up on the witch hunts that occur after deadly failures in other engineering disciplines, and check that your idea fixes the problems you see.
The world is far more interconnected now than when my grandad was a certified engineer.
Probably came as part of the dev kit from AMI.
https://www.documentcloud.org/documents/24833831-cellebrite-...
[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFIPK).bytes) -match "DO NOT TRUST|DO NOT SHIP"
gives me cryptic errors, and GPT is of no help.
...
then remembered I'm using custom platform keys
tbh I don't understand why Secure Boot is built around global roots of trust instead of ad-hoc per-device trust (i.e., like custom platform keys but with better support), at most supported by some global PKI to make bootstrapping easier on initial setup.
This would not eliminate, but massively reduce, how much "private key got leaked" vulnerabilities can affect Secure Boot chains (it would also move most complexity from EFI into user-changeable chain loaders, including e.g. net boot, etc.).
PS:
To be clear, "I don't understand why" is rhetorical; I do understand why, and find it a terrible idea.
Its root of trust is the BIOS/Firmware, which can be updated from a running OS. There is no hardware root of trust.
How Secure Boot Works
Secure Boot ensures that a device boots using only software trusted by the Original Equipment Manufacturer (OEM). Here's a high-level overview:
1. Power On and Initialization: The CPU initializes and runs the BIOS/UEFI firmware, which prepares the system for booting.
2. Platform Key (PK) Verification: The firmware verifies the Platform Key (PK), which is used to validate Key Exchange Keys (KEKs).
3. Key Exchange Keys (KEK) Verification: The KEKs validate the allowed (whitelist) and disallowed (blacklist) signature databases.
4. Signature Database Verification: The firmware checks the allowed (db) and disallowed (dbx) signature databases for trusted software signatures.
5. Bootloader Verification: The firmware verifies the bootloader’s signature against the db. If trusted, the process continues.
6. Kernel and Driver Verification: The bootloader verifies the OS kernel and critical drivers’ signatures.
7. Operating System Boot: Once all components are verified, the OS loads.
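Steps 2-7 above can be condensed into a toy chain-of-trust check. Digests stand in for the X.509 signature verification real UEFI performs, and all the image contents are invented:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Each link embeds the digest of what it authorizes, standing in for
# a signature made by that link's key.
kernel     = b"vmlinuz contents"
bootloader = b"bootmgfw.efi|authorizes:" + digest(kernel).encode()
db_entry   = digest(bootloader)                 # step 4: the allowed database
kek        = b"KEK|authorizes-db:" + db_entry.encode()
pk         = digest(kek)                        # the firmware pins the PK

def verify_chain(pk_pinned, kek_img, db_img, boot_img, kernel_img) -> bool:
    if digest(kek_img) != pk_pinned:            # step 2: PK validates the KEK
        return False
    if db_img not in kek_img.decode():          # step 3: KEK validates db
        return False
    if digest(boot_img) != db_img:              # step 5: db validates bootloader
        return False
    return digest(kernel_img) in boot_img.decode()  # step 6: bootloader validates kernel

assert verify_chain(pk, kek, db_entry, bootloader, kernel)
assert not verify_chain(pk, kek, db_entry, b"evil.efi", kernel)        # swapped bootloader
assert not verify_chain(pk, kek, db_entry, bootloader, b"evil kernel") # swapped kernel
```

The sketch also shows why a leaked PK is fatal: whoever holds it can mint a new KEK and db, and every later check then passes by construction.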
Apple Secure Boot Process
Apple adds hardware-based security with the Secure Enclave:
1. Secure Enclave Initialization: Separate initialization handles cryptographic operations securely.
2. Root of Trust Establishment: Starts with Apple's immutable hardware Root CA.
3. Immutable Boot ROM Verification: The boot ROM verifies the Low-Level Bootloader (LLB).
4. LLB Verification: The LLB verifies iBoot, Apple's bootloader.
5. iBoot Verification: iBoot verifies the kernel and its extensions. The Secure Enclave ensures cryptographic operations remain protected even if the main processor is compromised.
For more details, check out:
- <https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8...>
- <https://www.apple.com/business/docs/site/Security_Overview.p...>
I would really love to have a hardware root of trust on a Linux or other open system, with a hardware security module of sorts that is programmable, so I decide what the root keys are, and is able to measure the firmware boot process, establishing a proper audit trail or chain of trust.
I can't remember the HN formatting rules, so expect an edit shortly to make this look better.
Edit: I did a little more poking. It's not quite as bad as I thought, because at least in theory, the BIOS will verify a digital signature of a BIOS update before flashing it.
"We sold you this house with a front door designed where our key will always let us in". Why do we put up with this shit?
This joke never gets stale... wait, it's not a joke?
I still believe the only reason for this to exist is to eventually turn general computing devices into a locked down Cell Phone Spying Device.
The PC is the lone outlier in the locked-down, walled-garden world of consoles, cell phones, tablets, smart TVs, EVs, etc. I think there's a concerted effort to change that.
The problem is when these other keys are pre-shipped they invalidate the entire "ensures only [...] kernels I signed" part. And just removing the pre-shipped keys can cause other problems: https://github.com/Foxboron/sbctl/wiki/FAQ#option-rom
Or maybe for that option in the future, the device will cost thousands of USD more.
Or you need a special professional license to get a non-locked down device, and the license will cost more than a house in a rich suburb.