It's so common now to let CDNs (primarily Cloudflare) run your TLS frontend that this article apparently doesn't even consider the idea of hosting an app entirely on servers the app author controls.
That said, it's true that a TLS cert is necessarily more exposed than an app signing cert can be. If you're serious about security, your app signing key will be on an airgapped machine. The TLS private key, however, has to be available on a networked machine in order to sign handshake messages.
https://tools.ietf.org/html/draft-ietf-tls-subcerts-07
The certificate is public, it's fine for copies of that to be in all edge devices, the problem today is that the associated private key has to be on those edge devices too, and that's what Delegated Credentials solves.
While Signed HTTP Exchanges were originally developed for a more nefarious purpose (allowing a trusted proxy to change the URL), I think the idea, or one like it, could apply to serving trusted web content. Think of it as: instead of your current TLS cert verifying only your host, it would also verify the full URL and content, including headers. It's a bit untenable for regular use, but some apps could leverage it for extra trust.
> When designing E2EE protocols for persistent vs ephemeral applications, we need to figure out where we need long-term identity in terms of cryptographic keys, and where we don’t.
I would hope that web apps always lean towards ephemeral keys whenever possible (i.e. generate a key pair in the browser upon authentication, POST the public key, and keep the private key only in that page's local JS memory). If this means the web app has to be built to handle 20 different keys for a user because they opened 20 tabs, so be it. I know people are afraid of doing anything like key generation in the browser, but we can't write off the possibility of E2EE web apps altogether. I fear browsers allowing access to the OS's key management or the system's TPM for key storage, because it may lead to overuse of/over-reliance on long-term keys, but I'm sure it'll happen if it hasn't already.
There is already a little trick[0] that can be done with bookmarklets (or locally saved files) which allows you to bootstrap a page with a known set of JavaScript code running on it, but it has the disadvantage that the URL bar doesn't contain a familiar domain. If the <portal> spec[1] ends up supporting SRI[2] integrity hashes in a sensible way, this little bootstrapping technique could actually become practical.
[0] https://news.ycombinator.com/item?id=17776456
Has anyone compiled a list of sites which offer Signed HTTP Exchanges/Real URL AMP feeds? Is there a straightforward way to make one?
Another thing is non-exportable WebCrypto keys, which can limit the damage even if the page is compromised.
I'm not a crypto expert--so forgive my ignorance.
https://doc.libsodium.org/key_exchange
Knowing only each other's public keys, two parties can derive session keys for bidirectional encryption.
Do you even need a "protocol" if the clients trust each other?
Client A generates a random key, maybe a nonce, and a session ID, then encrypts that with B's public key, signs it with A's private key, and sends it to B. Only B can decrypt the message, and A and B now share a key.
Or maybe that is the protocol.
Anyway, if you know someone's public key and they know yours - you're already bootstrapped for a secure channel?
Ed: now seeing the page, I see this is a link to the libsodium API, and that obviously makes sense: have a standard implementation (and I guess it does some tricks for deriving public/private session keys from long-lived public keys?).
I think I would call this TFSU (Trust For Single Use). Trust On Any Use sounds like complete and total trust.
Google Duo does NOT support E2EE group calls on the web... They actually don't support ANY group calls in the web app.
Lack of good browser support for E2EE multiparty calls is probably why; the hope is that adoption of insertable streams will change that.
You can't have E2E on mobile devices, and you can't have E2E on any other OS either. (And you'll probably have a hard time finding the right combination of hardware and Linux distro to have it on Linux.)
What if the monitor is backdoored and sends copies of the display buffer to The Secret World Government? What if the keyboard has a hardware keylogger? What if we’re all living in an elaborate computer simulation of a global pandemic?
As an alternate comparison: it’s still end-to-end encrypted communication if I take the securely received message, print out a copy, and tape it to a bulletin board at the town square.
The “end-to-end” refers to the transmission path. It’s a defense against MITM, and can be accomplished by plenty of systems that aren’t Linux.
But people attribute security properties to it that it doesn't have!
What good is protection against MITM if I can just read it off your device while you type it?
You have no security with mobile devices. It is foolish to think you do.
The moment the information is unencrypted and made available via a user interface, you've lost all control.
You don't control the iOS rendering loop. You don't control the Android rendering system. (You might think you do, though, since much of Android is open source.)
You don't control the OS core libraries, you don't control the microcode of the CPU. You don't control the blitting to a screen device or the recording of photons on a camera. And I'm not even talking about external manipulation to exfiltrate data.
You might control the content of the IP packets you send. You don't control any other IP packets.
You don't control anything.
The times where we had complete control over our hardware seem to be over.
Would also like to know about the current state of Open Hardware.