"Only on the user's device", right.
Does the decryption only occur on the user's device?
Is this master password not also reused for account authentication, or has account authentication been changed to use a cryptographic proof produced on-device?
If the key is ever decrypted on the vendor's servers, everything else is theater.
And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
That also means if you dislike the idea of some big company holding all your keys in a cloud backed-up vault, you can just use a hardware FIDO key from one of the dozens of manufacturers.
On iOS, the keys are stored in iCloud Keychain, which is also the password auto-fill vault.
These keys are protected with two layers: iCloud encryption and what is effectively an HSM cluster built from Apple Secure Enclaves.
There is no master passphrase/secret exposed to the user; it is synchronized by the phones on the account. You must join the 'ring' of personal devices, in addition to using the iCloud login, to decrypt iCloud information.
This means that, unlike basic iCloud encryption (which has a recovery HSM used to help people regain access to their accounts, and which legal processes may grant access to read), you need to perform additional actions to get access to this vault.
Each 'passkey' (Web Authentication Credential) is a locally generated secp256r1 key pair in that keychain, with associated site information and storage for additional information such as the site-specified user identifier and friendly display name.
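Concretely, a stored passkey is just a record pairing a locally generated key pair with relying-party metadata. A minimal sketch of what such a record holds (field names are illustrative, not Apple's actual keychain schema):

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class PasskeyRecord:
    """Illustrative shape of a stored WebAuthn credential; not Apple's real schema."""
    rp_id: str            # site the credential is scoped to, e.g. "example.com"
    user_handle: bytes    # site-specified opaque user identifier
    display_name: str     # human-friendly account label
    credential_id: bytes = field(default_factory=lambda: secrets.token_bytes(16))
    # Stand-in for the secp256r1 (P-256) private scalar; real implementations
    # generate and hold this inside the keychain/secure hardware.
    private_key: bytes = field(default_factory=lambda: secrets.token_bytes(32))

rec = PasskeyRecord(rp_id="example.com", user_handle=b"user-1234",
                    display_name="alice@example.com")
```

The point is that the private key never needs to leave this record in cleartext: signing happens locally, and only the signature crosses the network.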
There are basically three levels of protection for the data:
1. whatever the cloud hosting provider has for data at rest
2. the per-account iCloud encryption key, which is never shared with the hosting provider but exists on an Apple recovery HSM
3. the per-account device ring key, which is not visible to Apple.
So no, the credential's private key itself is never visible to Apple.
Apple does have a mechanism (if you go into Passwords) to share a passkey with another person's Apple device. You need to be mutually known (e.g. need to have one another as contacts, with the contact record containing a value associated with their Apple ID) and it needs to be done over Airdrop for a proximity check. Presumably, this uses the public key on their account to do an ECDH key agreement and send a snapshot of the private information over the encrypted channel.
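The speculated ECDH step can be illustrated on a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), generator (5, 1), group order 19). This is purely a toy to show how both sides arrive at the same shared secret; a real passkey-sharing flow would use P-256 or similar:

```python
import secrets

# Toy curve y^2 = x^3 + 2x + 2 mod 17 (textbook example; NOT for real use)
P, A = 17, 2
G = (5, 1)      # generator point
ORDER = 19      # order of the group generated by G

def point_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication."""
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Each side picks a secret scalar and publishes its public point.
a = secrets.randbelow(ORDER - 1) + 1
b = secrets.randbelow(ORDER - 1) + 1
pub_a, pub_b = scalar_mult(a, G), scalar_mult(b, G)

# Both sides derive the same shared point, which can seed the encrypted channel.
shared_a = scalar_mult(a, pub_b)
shared_b = scalar_mult(b, pub_a)
assert shared_a == shared_b
```

Once both devices hold the shared point, they can derive a symmetric key from it and send the private passkey material over that channel, with AirDrop supplying the proximity check.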
Auto-updating vendor-supplied authentication code for iPhones is complex because of the split between the operating system code and the Secure Enclave firmware, the potential misuse of that API by a compromised operating system, and the potential to get malicious changes into the Secure Enclave firmware itself.
No, because password-based key derivation functions (PBKDFs) are not a good mechanism for creating encryption keys. Instead you have an actual random key, and your devices gate access to that key with your device password.
> Does the decryption only occur on the user's device?
Yes, because only the user's devices have access to the key material needed to decrypt. Apple cannot decrypt them.
> Is this master password not reused for the account or has account authentication been changed to use a cryptographic proof produced on-device?
Not sure what you're asking here?
> If the key is ever decrypted on vendor's servers, everything else is theater.
As above, the vendor/Apple cannot decrypt anything[1] because they do not have the key material.
> And this is all of course also excluding auto-updating vendor-supplied authentication code from the threat model because the industry is not ready for that conversation yet.
Don't really agree. The malicious vendor update is something that is discussed reasonably actively; it's just that there isn't a solution to the problem. Even the extreme "publish all source code" idea doesn't work: auditing these codebases for something malicious is simply not feasible, and even if it were, ensuring that the source code you get exactly matches the code in the update isn't feasible (because if you assume a malicious vendor, you have to assume they're willing to make the OS lie).
Anyway, here's a basic description of how to make a secure system for synchronizing anything, including key material (secure means "no one other than the end user can ever access the key material, in any circumstance without having broken the core cryptographic algorithms that are used to secure everything").
Apple has some documentation on this scattered around, but essentially it works like this:
* There is a primary key (presumably AES, but I can't recall off the top of my head). This key is used to encrypt a bunch of additional keys for various services. This is fairly standard; the basic idea is that a compromise of one service doesn't compromise the others. To me this is "technically good", but I would assume the most likely path to compromise is getting an individual device's keys, in which case you get everything anyway.
* The first device you use to create an iCloud account or to enable syncing generates these keys
* That device also generates a bunch of asymmetric keys and pushes public keys to anywhere they need to go (e.g. iMessage keys)
* When you add a new device to your account, it messages your other devices asking for access to your synced key material. When you approve the addition on one of your existing devices, that existing device encrypts the master key with the public key provided by the new device and sends it back. At that point the new device can decrypt the response and use that key to decrypt the other key material for your account.
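The device-join flow above can be sketched end-to-end. This toy uses classic finite-field Diffie-Hellman over the Mersenne prime 2^127 − 1 and a SHA-256 keystream as the wrap cipher, purely to show the shape of the messages; a real implementation would use vetted curve and AEAD primitives:

```python
import hashlib
import secrets

P = 2**127 - 1   # Mersenne prime (toy group; real systems use vetted curves)
G = 3

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """SHA-256 counter-mode keystream XOR (stand-in for a real AEAD)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(x ^ y for x, y in zip(data[i:i + 32], block))
    return bytes(out)

# New device: generates a key pair and sends the public half with its request.
new_priv = secrets.randbelow(P - 2) + 1
new_pub = pow(G, new_priv, P)

# Existing (approving) device: holds the account master key and wraps it to
# the new device's public key using an ephemeral DH share.
master_key = secrets.token_bytes(32)
eph_priv = secrets.randbelow(P - 2) + 1
eph_pub = pow(G, eph_priv, P)
wrap_key = hashlib.sha256(pow(new_pub, eph_priv, P).to_bytes(16, "big")).digest()
wrapped = keystream_xor(wrap_key, master_key)
# Response sent back over the sync channel: (eph_pub, wrapped)

# New device: derives the same wrap key and recovers the master key.
unwrap_key = hashlib.sha256(pow(eph_pub, new_priv, P).to_bytes(16, "big")).digest()
recovered = keystream_xor(unwrap_key, wrapped)
assert recovered == master_key
```

The server only ever relays `eph_pub` and `wrapped`; without one of the device private keys, neither is useful to it.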
All this is why in the Apple ecosystem if you lose all your devices, you historically lost pretty much everything in your account.
A few years ago Apple introduced "iCloud Key Vault" or some such marketing name for what are essentially very large sets of HSMs. When you set up a new device, that device pushes its key material to the HSMs, in what is functionally plaintext from the HSMs' point of view, gated by some combination of your account password and device passcode. You might now say "that means Apple has my key material", but Apple has designed these so that it cannot. Ivan Krstic gave a talk about this at Black Hat a few years back, but essentially it works as follows:
* Apple buys a giant HSM
* Apple installs software on this HSM that is essentially a password+passcode protected account->key material database
* Installing software on an HSM requires what are called "admin cards", they're essentially just sharded hardware tokens. Once Apple has installed the software and configured the HSM, the admin cards are put through what Krstic called a "secure one way physical hashing function" (aka a blender)
* Once this has happened the HSM rolls its internal encryption keys. At this point it is no longer possible for Apple (or anyone else) to update the software, or in any way decrypt any data on the HSM.
* The recovery path, though, requires you to provide your account, account password, and device passcode, and the HSM will only release the key material if all of those match. Once your new device gets that material, it can start to recover all the other material needed. As with your phone, the HSM enforces increasing delays between attempts. Unlike your phone, once a certain attempt count is reached the key material is destroyed, and the only "recovery path" is an account reset; at least you get to keep your existing purchases, email address, etc.
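The attempt-limit behavior described above can be modeled as a small policy gate. This is a simulation of the described behavior, not Apple's actual HSM logic; the constants and method names are invented for illustration:

```python
import hashlib
import hmac
import secrets

class EscrowRecord:
    """Toy model of an HSM escrow entry: a strong random key released only
    when the passcode matches, with escalating delays and destruction after
    too many failures. Illustrative only."""
    MAX_ATTEMPTS = 10

    def __init__(self, passcode: str):
        self._salt = secrets.token_bytes(16)
        self._passcode_tag = hashlib.pbkdf2_hmac(
            "sha256", passcode.encode(), self._salt, 100_000)
        self._key_material = secrets.token_bytes(32)  # the escrowed strong key
        self._failures = 0

    def delay_seconds(self) -> int:
        return 2 ** self._failures                    # escalating retry delay

    def recover(self, passcode: str) -> bytes:
        if self._key_material is None:
            raise RuntimeError("key material destroyed; account reset required")
        tag = hashlib.pbkdf2_hmac(
            "sha256", passcode.encode(), self._salt, 100_000)
        if hmac.compare_digest(tag, self._passcode_tag):
            self._failures = 0
            return self._key_material
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._key_material = None   # destroyed: no offline copy exists
        raise ValueError("wrong passcode")

rec = EscrowRecord("1234")
key = rec.recover("1234")   # correct passcode releases the 32-byte key
```

The crucial property is that the strong key only ever exists behind the `recover` policy: there is no exported blob for an attacker to take away and grind on.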
You might think it would be better to protect the data with a password-derived key, but that is strictly worse, especially as the majority of passwords and passcodes are neither strong nor long. In general, having a secure piece of hardware gate access to a strong key is better than having the data encrypted with a weak key. The reason is that if the material is protected by that weak key rather than by enforced policy, an attacker can copy the key material and brute force it offline, whereas a policy-based guard can directly enforce time and attempt restrictions.
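The offline-attack asymmetry is easy to demonstrate: given a copy of passcode-derived ciphertext, a 4-digit passcode falls to a simple loop, while a policy gate never hands the attacker anything to iterate over. A sketch, with a deliberately tiny PBKDF2 work factor and a toy XOR cipher so it runs fast (all values hypothetical):

```python
import hashlib
import secrets

# Victim side: a secret wrapped under a key derived from a 4-digit passcode.
salt = secrets.token_bytes(16)
kdf = lambda pin: hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100)
secret = b"KEY!" + secrets.token_bytes(28)     # recognizable plaintext structure
ciphertext = bytes(a ^ b for a, b in zip(secret, kdf("4821")))  # toy XOR cipher

# Attacker side: with a copy of (salt, ciphertext) there is no rate limit,
# no escalating delay, and no key destruction -- just a loop over 10,000 pins.
found = next(
    pin for pin in (f"{i:04d}" for i in range(10_000))
    if bytes(a ^ b for a, b in zip(ciphertext, kdf(pin)))[:4] == b"KEY!"
)
```

A hardware-gated design gives the attacker no (salt, ciphertext) pair to copy in the first place; every guess has to go through the device's enforced delays and attempt counter.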
[1] Excepting things that aren't end-to-end encrypted; most providers still have a few services that aren't E2E, though mostly for historical reasons.
From a post linked in the article:
> Passkeys in the Google Password Manager are always end-to-end encrypted: When a passkey is backed up, its private key is uploaded only in its encrypted form using an encryption key that is only accessible on the user's own devices. This protects passkeys against Google itself, or e.g. a malicious attacker inside Google. Without access to the private key, such an attacker cannot use the passkey to sign in to its corresponding online account.
> Additionally, passkey private keys are encrypted at rest on the user's devices, with a hardware-protected encryption key.
> Creating or using passkeys stored in the Google Password Manager requires a screen lock to be set up. This prevents others from using a passkey even if they have access to the user's device, but is also necessary to facilitate the end-to-end encryption and safe recovery in the case of device loss.