[0]: https://www.daemonology.net/blog/2009-06-11-cryptographic-ri...
(Unrelated) see also the more recent https://www.latacora.com/blog/2018/04/03/cryptographic-right...
If you're using a "good" hash algorithm, then MAC-ing is simple: hash over your key and message.
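For example, with a length-extension-resistant hash like SHA-3, the simple prefix construction really is a sound MAC (this is essentially what KMAC standardizes). A minimal sketch in Python; the key and message are made up:

```python
import hashlib
import hmac  # used only for the constant-time comparison

key = b"shared secret key"          # known only to sender and receiver
message = b"amount=100&to=alice"

# With SHA-3, hashing key || message directly is a sound MAC.
tag = hashlib.sha3_256(key + message).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    expected = hashlib.sha3_256(key + message).hexdigest()
    return hmac.compare_digest(expected, tag)
```

With plain SHA-256 this exact construction would be vulnerable to length extension, which is why HMAC exists in the first place.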
It's pretty weird that SHA-256 has been king for so long, when SHA-512/256 (which, as I've noticed, people often misread: it means SHA-512 truncated to 256 bits) was there from the beginning and is immune to this attack.
Anyway, in general it's a pet peeve of mine that many people so often say "HMAC" when really they just mean MAC.
Now that your cookie looks like this (probably also base64 encoded):
{"id": 42, "display_name": "John", "is_admin": false, "session_end_at":1726819411}
You don't have to hit the DB to display "Hi John" to the user and hide the juicy "Admin" panel. Without HMAC, an attacker could flip the "is_admin" boolean in the cookie. You could also create a cookie that is just random bytes
F2x8V0hExbWNMhYMCUqtMrdpSNQb9dwiSiUBId6T3jg
and then store it in a DB table with similar info, but now you would have to query that table for each request. For small sites it doesn't matter much, and if it becomes a problem you can quite easily move that info into a faster key-value store like Redis. And when Redis also becomes too slow, you are forced to move to JSON Web Tokens (JWT), which are just more standardized base64-encoded JSON wrapped with an HMAC to avoid querying a database for each request. But even if you are using random bytes as your session identifier, you should still wrap them in an HMAC so that you can drop invalid sessions early, just to make it harder for someone to DDoS your DB.
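That last idea, random bytes wrapped in an HMAC so garbage can be rejected before it ever touches the DB, might look like this (a sketch in Python; the key, sizes, and function names are invented):

```python
import base64
import hashlib
import hmac
import secrets

SERVER_KEY = b"keep-this-out-of-the-cookie"  # hypothetical server-side secret

def new_session_token() -> str:
    """Random 32-byte session id, plus an HMAC tag over it."""
    sid = secrets.token_bytes(32)
    tag = hmac.new(SERVER_KEY, sid, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sid + tag).rstrip(b"=").decode()

def session_id_or_none(token: str):
    """Return the session id only if the tag checks out; no DB query needed."""
    try:
        raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    except ValueError:
        return None
    sid, tag = raw[:32], raw[32:]
    expected = hmac.new(SERVER_KEY, sid, hashlib.sha256).digest()
    return sid if hmac.compare_digest(tag, expected) else None
```

A request bearing a token that fails `session_id_or_none` can be rejected without ever hitting the session table.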
I understand it's nice to never have to worry about it regardless of scale, but generating sessions with an established CSPRNG and being able to invalidate them at will is an order of magnitude simpler. It's also standard and already abstracted away for you if you use any framework.
The JWT and similar cookies exist for when you want to do scaling and such. You don't need much more than a user ID and a user name for many pages of a web application, and your database may be on another continent, so you may as well store some variables on the client side. This has the added benefit of letting you stand up as many frontends as you need, integrating nicely with technologies like Kubernetes that can spawn more workers when the existing ones get overloaded.
By also encrypting the cookie, you can get rid of most of the backend state management, even for variables that should be hidden from the user, and simply decrypt+mutate+encrypt the cookie passed back and forth with every request, stuffing as many encrypted variables in there as you can make fit.
They're also useful for signing in to other websites without the backend needing to do a bunch of callbacks. If a user of website A wants to authenticate with website B, and website B trusts website A, simply verifying the cookie with the public key (and a timestamp, maybe a nonce, etc.) of website A can be enough to prove that the user is logged into website A. You can stuff that cookie into a GET request through a simple redirect, saving you the trouble of setting up security headers on both ends to permit cross-website cookie exchanges.
In most cases, signed cookies are kind of overkill. If all your application has is a single backend, a single database, and a single frontend, just use session cookies. This also helps protect against pitfalls in many common signed cookie variants and their frameworks.
The problem is that many people are using web frameworks that automatically turn the body and query into some kind of hash-map data structure. So when you tell them "use the request body as the HMAC message", they go "OK, message = JSON.stringify(request.body)", and then it's up to fate whether their runtime produces the exact same JSON as yours. Adding "YOU MUST USE THE RAW REQUEST BODY" to the docs doesn't seem to work. We've even had customers outright refuse to do so after we asked them to in the "why are my verifications failing" ticket. And good luck if it's a large/enterprise customer. Get ready to have 2 different serialization routines: one for the general populace, and one for the very large customer that wrote their integration years ago, where you only now found out that their runtime preserves "&" inside JSON strings but yours escapes it.
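To make the failure mode concrete, here's a sketch in Python (the secret and payload are invented) of what happens when a client signs a re-serialized body instead of the raw bytes:

```python
import hashlib
import hmac
import json

SECRET = b"webhook-signing-secret"  # hypothetical shared secret

# The bytes exactly as they arrived on the wire:
raw_body = b'{"amount": 10.50, "pending": true}'
good = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()

# Parsing and re-serializing silently changes the bytes
# (here, "10.50" becomes "10.5"), so the signature no longer matches:
reserialized = json.dumps(json.loads(raw_body)).encode()
bad = hmac.new(SECRET, reserialized, hashlib.sha256).hexdigest()
```

The only robust fix is to compute the HMAC over the raw bytes before any parsing happens.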
Rant over...
What escaping of "&" inside JSON are you talking about? Some unholy mix of JSON and urlencode?
Before post-quantum cryptography concerns, KEMs were indeed mostly built on top of Diffie-Hellman key agreement, but you could also build one on top of RSA, or on top of some lattice constructs. But you wouldn't build one yourself; there are good constructions to choose from! The OP actually has a 3-part series on KEMs, although I don't think it addresses post-quantum issues [2].
[1]: https://www.latacora.com/blog/2024/07/29/crypto-right-answer... [2]: https://neilmadden.blog/2021/01/22/hybrid-encryption-and-the...
It's also super simple: it's almost literally just concatenating the secret and the message you want to authenticate, and taking an ordinary hash (like SHA-256) of that; the rest is just dealing with padding.
It's super intuitive how HMAC works: if you mash the secret and the message together on your side, and get the same answer as what the other side told you, then you know that the other side had the secret key (and exactly this message), because there's obviously no way to go from a SHA-256 output back to the input.
HMAC is also useful if you want to derive new secret keys from other secret keys. Take an HMAC with the secret key and an arbitrary string, you get a new secret key. The other side can do the same thing. Here's the kicker, the arbitrary string does not have to be secret to anyone, it can be completely public!
Why would you do that? Well, maybe you want the derived key to have a different lifetime and scope. A "less trusted" component could be given this derived key to do its job without having to know the super-secret key it was derived from (which could be used to derive other keys for other components, or directly HMAC or decrypt other stuff).
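A sketch of that derivation pattern in Python (the master key and labels are invented; real protocols typically use HKDF, RFC 5869, which is built out of exactly this HMAC step):

```python
import hashlib
import hmac

MASTER_KEY = b"the super-secret master key"  # hypothetical root secret

def derive_key(master: bytes, label: str) -> bytes:
    # The label can be completely public: without the master key,
    # nobody can compute the derived key or work backwards to the master.
    return hmac.new(master, label.encode(), hashlib.sha256).digest()

billing_key = derive_key(MASTER_KEY, "billing-service/2024")
email_key = derive_key(MASTER_KEY, "email-service/2024")
```

Each component gets its own independent key, and compromising one derived key reveals nothing about the master or its siblings.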
It's not quite as simple as that. The output of the first hash is hashed a second time (to prevent length extension attacks).
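For reference, the full nested construction looks like this (a minimal sketch in Python, which can be checked against the stdlib's `hmac` module):

```python
import hashlib

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    block_size = 64                       # SHA-256 block size in bytes
    if len(key) > block_size:             # overlong keys are hashed first
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")  # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + message).digest()
    # The second, outer hash is what blocks length-extension attacks.
    return hashlib.sha256(opad + inner).digest()
```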
We may accidentally end up with non-repudiation of attribute presentation, thinking that this increases assurance for the parties involved in a transaction. The legal framework is not designed for this and insufficiently protects, for example, the credential subject.
Instead, the high assurance use cases should complement digital credentials (with plausible deniability of past presentations) with qualified e-signatures and e-seals. For these, the EU for example does provide a legal framework that protects both the relying party and the signer.
The opponent may still claim that the car rental place is showing a copy that was obtained illegally, and not in holder presentation. To avoid such a claim, the car rental company should ask for a qualified e-signature before providing the car key. The signed data can include any relevant claims that both parties confirm as part of the transaction. To provide similar assurance to the customer, the company should counter-sign that document, or provide it pre-sealed if it is an automated process.
Note that with the EU Digital Identity, creating qualified e-signatures is just as easy as presenting digital credentials.
This reminds me of a specific number that Americans have to give in plain text as proof of digital identity that they only get one of and can't change it ever. Lol.
You can get up to ten replacements of your card in your lifetime. They do all have the same number though.
Attribute presentation is not designed for this feature. When attribute presentation becomes non-repudiable, it creates legal uncertainty:
1. In court, the verifier may now present the proof of possession as evidence. But this is, at least in the EU, not recognised by default as an e-signature, and it is as yet unknown whether a court would interpret it as such. So the verifier keeps a risk that will be difficult for them to assess.
2. Even if it would be recognised as evidence, the holder may argue that it is a replay of a presentation made in another transaction. Presentation protocols are not designed for timestamp assurance towards third parties, and generally do not include verifiable transaction information.
3. The verifier may protect itself by audit-logging attribute presentation input and output along with publicly verifiable timestamps and verifiable transaction information, and by editing its terms and conditions to claim a priori non-repudiation of any presentation. Typically such a solution would not create the same evidence files on the holder's side, so the holder would not be able to present evidence in court as strong as the verifier's. (This asymmetry aspect needs some more elaboration.)
Non-repudiation is well arranged in EU law for e-signatures. If anyone wants the same for attribute presentation, this should involve changes in law. As far as I can see, non-repudiation is currently being considered in mDL/EUDI opportunistically, from an isolated technical perspective.
1. plausible deniability of the document’s issuer seal
2. plausible deniability of having presented the document
The second is great for legal certainty for the user. The first has problems. It would be incompatible with qualified e-sealing; stakeholders have no evidence if issuer integrity was compromised.
Also, it would mean that issuance happens under user control, during presentation to a relying party. In a fully decentralised wallet architecture, this means including the trusted issuer KEM key pair on the user’s phone. Compromising the issuance process, for example by extracting the trusted issuer KEM key pair, could enable the attacker to impersonate all German natural persons online.
The advantage would have been that the authenticity of the content of stolen documents could be denied. This potentially makes it less attractive to steal a pile of issued documents and sell them illegally. But how much would illegal buyers really value qualified authenticity e-seals on leaked personal data?
I'd avoid trusting FAANGs in courts when the fate of political leaders is at stake.
The entire purpose of DKIM is not to prove that the individual behind john.smith@gmail.com sent the message, but that a legitimate server owned and operated by the entity behind gmail.com sent the message. It's mostly there to reduce spam and phishing, not to ensure end-to-end communication integrity.
This has nothing to do with the particular companies involved nor their particular trustworthiness.
If Google were evil (but in reality it's not), it could have forged and signed an email from john.smith@gmail.com with valid DKIM, sent on other mail servers or not (since we talk about leaked emails, we just need a file), when in reality the Google user john.smith@gmail.com never sent that email. To me, John Smith could have plausible deniability in court, depending on whether everyone trusts Google to be 100% reliable. If the stakes are higher than what the company would risk losing if found to have forged the email, what's stopping them?
I think digital signatures and third party verification are an incredibly useful feature. The ability to prove you received some data from some third party lets you prove things about yourself, and enables better data privacy long-term, especially when you have selective disclosure when combined with zero knowledge proofs. See: https://www.andrewclu.com/sign-everything -- the ability to make all your data self-sovereign and selectively prove data to the outside world (i.e. prove I'm over 18 without showing my whole passport) can be extremely beneficial, especially as we move towards a world of AI generated content where provenant proofs can prove content origin to third parties. You're right that post quantum signature research is still in progress, but I suspect that until post-quantum supremacy, it's still useful (and by then I hope we'll have fast and small post quantum signature schemes).
The EU's digital signatures let you do this for your IDs, and https://www.openpassport.app/ lets you do this for any country's passport, but imagine you could do this for all your social media data, personal info, and login details. We could have full selective privacy online, but only if everyone uses digital signatures instead of HMACs.
If anything, the hardest part of making an anti-AI proof system is ensuring people don't lie and abuse it.
This talk from Real World Cryptography 2024 is probably a good place to start.
In school I only took one cryptography class (it was bundled with networking, at that), and to this day I still think it contained some of the most amazing concepts I've ever learned. Public-key cryptography is on the short list along with cryptographic hash functions. Maybe it's my particular bias, or maybe cryptography has just attracted some of the most creative geniuses of the 20th century.
Didn't get this in school, unfortunately. They made us implement DES S-boxes, IIRC, and Caesar cipher breaking... all very relevant and foundational knowledge for non-mathematicians who will never design a secure cipher.