I would love to see more focus on device manufacturers protecting the user instead of trying to protect themselves.
Prime example where the TPM could be fantastic: embedded devices that are centrally coordinated, e.g. networking equipment. Imagine if all UniFi devices performed a measured boot and attested to their PCR values before the controller would provision them. That would give a very strong degree of security, even on untrusted networks and even if a device had previously been connected and provisioned by someone else. (Yes, there's a window when you first connect a device during which someone else can provision it first.)
But instead companies seem to obsess about protecting their IP even when there is almost no commercial harm to them when someone inevitably recovers the decrypted firmware image.
https://arxiv.org/abs/2304.14717
It’s not hard to protect an FDE key in a way that one must compromise both the TPM and the OS to recover it [0]. What is very awkward is protecting it such that a random user in the system who recovers the sealed secret (via a side channel or simply booting into a different OS and reading it) cannot ask the TPM to decrypt it. Or protecting one user’s TPM-wrapped SSH key from another user.
I have some kludgey ideas for how to do this, and maybe I’ll write them up some day.
[0] Seal a random secret to the TPM and wrap the actual key, in software, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
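The scheme in [0] is easy to sketch. Below is a toy Python model of it: the "TPM seal" step is simulated with `os.urandom` (a real system would seal via TPM2_Create and recover via TPM2_Unseal), and the software wrap uses a SHAKE256 XOR keystream as a stand-in for a proper AEAD like AES-GCM. All names here are illustrative, not a real API.

```python
import os
import hashlib

def wrap(wrapping_key: bytes, secret: bytes) -> bytes:
    """Toy wrap: XOR with a SHAKE256 keystream (stand-in for real AEAD)."""
    stream = hashlib.shake_256(wrapping_key).digest(len(secret))
    return bytes(a ^ b for a, b in zip(secret, stream))

unwrap = wrap  # XOR is its own inverse

# 1. Generate a random wrapping key and seal it to the TPM's PCR state
#    (simulated here -- the TPM never sees the actual FDE key).
wrapping_key = os.urandom(32)

# 2. Wrap the real FDE key in software with the sealed secret.
fde_key = os.urandom(32)
wrapped = wrap(wrapping_key, fde_key)

# An attacker who only compromises the TPM learns wrapping_key but not
# fde_key; one who only reads the wrapped blob learns neither.
assert unwrap(wrapping_key, wrapped) == fde_key
```

Recovering the FDE key requires both the unsealed TPM secret and the wrapped blob, which is the whole point of the construction.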
https://www.usenix.org/system/files/conference/usenixsecurit...
What a TPM provides is a chip holding some root key material (seeds), plus registers (PCRs) that can be extended with external data in a black-box way; that black-box state can then be used to perform cryptographic operations. So essentially, it is useful only for sealing data against the PCR state or attesting that the state matches.
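The "extend" part is simple to model: a PCR can never be set directly, only updated by hashing the old value together with a new measurement. A minimal sketch, assuming a SHA-256 PCR bank:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """New PCR value = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start at all zeros; each boot stage extends its measurement.
pcr0 = bytes(32)
pcr0 = pcr_extend(pcr0, b"firmware image")
pcr0 = pcr_extend(pcr0, b"bootloader")

# Order matters: swapping the measurements yields a different final value,
# so sealing against the final PCR pins the exact boot sequence.
swapped = pcr_extend(pcr_extend(bytes(32), b"bootloader"), b"firmware image")
assert swapped != pcr0
```

This one-way chaining is why the state is a "black box": you can reproduce a PCR value only by replaying the same measurements in the same order.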
This becomes an issue once you realize what's sending the PCR values; firmware which needs its own root of trust.
This takes you to Intel Boot Guard and AMD PSB/PSP, which implement a traditional secure-boot root of trust starting from a public key hash fused into the platform SoC. Without these systems, there's not really much point in using a TPM, because an attacker could simply send the "correct" hashes for each PCR and reproduce the internal black-box TPM state for a "good" system.
TPMs work great when you have a mountain of supporting libraries to abstract them from you. Unfortunately, that's often not the case in the embedded world.
I am using the TPM for this on x86 machines that I want to boot headless. If I need to replace the disk I can just do a regular wipe and feel pretty comfortable.
I'd use a Yubikey or other security token with the Pi, but the device needs to boot without user intervention and the decryption code I'm aware of forces user presence whether or not the Yubikey requires that.
Nothing prevents all the parties (the one you are attesting to and the central authority you use for indirection) from saving everything and cross-referencing it at any point in the future.
The same problem, and often worse, is present in DRM systems.
In the case of Widevine DRM you are actually leaking a static HWID to every license server, no collusion required. This is because there is no indirection involved: you give the license server the public key corresponding to the private key fused into the secure enclave for this purpose. The only safeguard is that every license server needs a certificate from Google to function (the secure enclave will refuse to form a request against an invalid cert).
There are a lot of license servers.
As a side note, this is how they impose a cost on pirates: forensic watermarks on the content streamed to subscribers. At the CDN level this can be done cheaply with A/B watermarking; the cost is storing double the size of every file. When that content shows up in p2p piracy, they trace it to the account and the device's DRM public key, revoke that device's ability to view content (at the license-server level), and ban the account.
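The tracing side of A/B watermarking can be sketched in a few lines. This is a hypothetical model, not Widevine's actual scheme: each subscriber is assigned a per-segment A/B pattern (here derived from a hash of the account ID), and the pattern recovered from a leaked stream is matched against subscribers.

```python
import hashlib

def ab_pattern(account_id: str, n_segments: int) -> list:
    """Derive a subscriber's per-segment A/B choices (0=A, 1=B) from their ID."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_segments)]

def trace(leaked: list, accounts: list) -> list:
    """Find accounts whose watermark pattern matches the leaked stream."""
    return [a for a in accounts if ab_pattern(a, len(leaked)) == leaked]

accounts = ["alice", "bob", "carol"]
leaked = ab_pattern("bob", 32)  # a pirated copy carries bob's segment pattern
assert "bob" in trace(leaked, accounts)
```

With n segments there are 2^n possible patterns, so even a modest number of segments uniquely identifies an account among millions of subscribers.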
ARM may have the market now… but RISC-V is the fastest growing, and it may be poised to eat ARM's lunch.
Essentially, TPM is a standardized API for implementing a few primitives over the state of PCRs. Fundamentally, TPM is just the ability to say "encrypt and store this blob in a way that it can only be recovered if all of these values were sent in the right order," or "sign this challenge with an attestation that can only be provided if these values match." You can use a TEE to implement a TPM and on most modern x86 systems (fTPM) this is how it is done anyway.
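That "recovered only if all of these values were sent in the right order" primitive can be modeled directly: fold the values into a running digest and derive the blob's encryption key from the final state. A toy sketch (XOR keystream stands in for the TPM's real sealing machinery):

```python
import hashlib
import hmac

def state_after(values) -> bytes:
    """Fold values into a running digest, order-sensitive (PCR-like)."""
    state = bytes(32)
    for v in values:
        state = hashlib.sha256(state + v).digest()
    return state

def seal(values, blob: bytes) -> bytes:
    """Encrypt blob under a key derived from the chained values."""
    key = hmac.new(state_after(values), b"seal", hashlib.sha256).digest()
    stream = hashlib.shake_256(key).digest(len(blob))
    return bytes(a ^ b for a, b in zip(blob, stream))

unseal = seal  # XOR keystream: replaying the same values recovers the blob

blob = b"disk encryption key"
sealed = seal([b"fw", b"loader", b"kernel"], blob)
assert unseal([b"fw", b"loader", b"kernel"], sealed) == blob
assert unseal([b"loader", b"fw", b"kernel"], sealed) != blob
```

Same values in a different order produce a different final state, so unsealing fails, which is exactly the property measured boot relies on.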
You don't really need an fTPM either, in some sense; one could use TEE primitives to write a trusted application that performs similar tasks. However, TPM is the API through which most early-boot systems (UEFI) report their measurements, so it's the easiest way to do system attestation on commodity hardware.
https://github.com/OP-TEE/optee_ftpm
Or you mean dedicated TPM?
If you don't need the TPM checkbox, most vendors have simple signing fuses that are a lot easier than going fTPM.
TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and have access to their secrets.
That's only theory, though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage via a swap partition (I don't think those are common in embedded systems, though).
So yes, incorporating a separate secure element/TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.