> Enter X
> How It Works (Without the PhD)
> Why Y Should Care
...and an incredibly handwavy, shallow explanation of why it actually works ("Through a clever sequence of oblivious transfers and what’s called multiplicative-to-additive share conversion, they each compute a partial signature.")
I don't get it. If you want a blog, write a blog. If you don't want a blog, don't write a blog. But why use an LLM to create a slopblog? It just wastes EVERYONE's time and energy. How disappointing.
The other (maybe more interesting) question is how this tech would be deployed. So okay, we have a system where something can only be signed/decrypted/encrypted/etc. if several parties are in agreement. Who should the parties be? How is the threshold itself actually managed?
OP also seems to drift between different usage scenarios here:
- some sort of collectively owned good (like the DAO, or resources in a cooperative?) - seems straightforward on a technical level (every owner holds a partial key) but also a niche use case, and quite inflexible: What happens if an owner drops out or you want to introduce a new one? What happens if you want to change the quorum?
- traditional authentication of individual users against a server, in a federated setup like the fediverse: Seems like the most practical use case: One party is the user, the other is the server, and the verifying party would be other servers in the network. But then you have to pick your poison when setting the quorum: Either the quorum is "any party can decrypt the data", at which point you're no better off than with normal password auth; or "both parties are needed", which would protect against the user or the server accidentally leaking the key - but then you're back to a single point of failure if either party accidentally loses the key.
- the last scenario would be server-side keys that could cause massive problems if they leaked. But I don't understand at all who the other parties should be here. Also, how would this be better than HSMs?
Party B picks one and uses that to compute future values that are sent back to party A _but without telling party A which of the two values they picked_.
In this example I'm hand-wavy because the production math is complicated and confusing; I took a vastly simplified approach that still works functionally for the demonstration without fully implementing the OT protocol.
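For anyone curious what a less hand-wavy version looks like: here's a toy sketch of 1-of-2 oblivious transfer in the style of the Chou-Orlandi DH-based construction, with demo-sized parameters and a fixed generator that are assumptions for illustration (absolutely not secure or production code). B learns exactly one of A's two values, and A never learns which one B chose:

```python
import hashlib

# Toy 1-of-2 oblivious transfer over a small prime group (Chou-Orlandi style).
# Demo-sized, deterministic parameters -- NOT secure, purely illustrative.
P = 2**127 - 1          # a Mersenne prime; fine for a toy group
G = 3                   # generator (demo assumption)

def H(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

# --- Sender (party A) holds two secrets ---
m0, m1 = 1111, 2222
a = 123456789                     # A's ephemeral secret
A_pub = pow(G, a, P)

# --- Receiver (party B) picks a choice bit c, hidden from A ---
c = 1
b = 987654321                     # B's ephemeral secret
# If c == 0 send g^b; if c == 1 send A_pub * g^b. A can't tell which case it is.
B_pub = pow(G, b, P) if c == 0 else (A_pub * pow(G, b, P)) % P

# --- A derives two keys; exactly one will match what B can compute ---
k0 = H(pow(B_pub, a, P))
k1 = H(pow((B_pub * pow(A_pub, -1, P)) % P, a, P))
e0, e1 = m0 ^ k0, m1 ^ k1         # A sends both masked values

# --- B can only derive the key for its chosen branch ---
k_c = H(pow(A_pub, b, P))
recovered = (e0 if c == 0 else e1) ^ k_c
print(recovered)                  # 2222 -- the chosen secret m1
```

The trick is in `B_pub`: for c = 1, dividing out `A_pub` on A's side recovers g^b, so k1 equals B's key; the other mask stays opaque to B.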
> what happens if an owner drops out or you want to introduce a new one? what happens if you want to change the quorum?
In either of those scenarios, assuming you still have quorum, you can regenerate key shares for the new group for the same public key (and the underlying, still-unknown private key) by re-running the ceremony with the new participants. Production implementations of the protocol fully flesh this out.
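The resharing idea can be sketched with plain Shamir sharing. This is a toy with demo-sized field and hardcoded "random" coefficients (assumptions for readability, not how production TSS does randomness): a quorum of the old group re-shares *their shares* to an entirely new group, and the secret itself is never reassembled anywhere.

```python
# Toy sketch: reshare a Shamir-shared secret to a new participant set
# without ever reconstructing the secret. Demo parameters only.
P = 2**127 - 1   # prime field

def poly_eval(coeffs, x):
    # coeffs[0] is the constant term (the shared value)
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(value, threshold, xs, coeffs_tail):
    coeffs = [value] + coeffs_tail[: threshold - 1]
    return {x: poly_eval(coeffs, x) for x in xs}

def lagrange_at_zero(xs):
    # Lagrange coefficients for interpolating at x = 0
    out = {}
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        out[xi] = num * pow(den, -1, P) % P
    return out

def reconstruct(shares):
    lam = lagrange_at_zero(list(shares))
    return sum(lam[x] * y for x, y in shares.items()) % P

secret = 424242
# Original 2-of-3 sharing among parties at x = 1, 2, 3
old = share(secret, 2, [1, 2, 3], [777])

# Resharing ceremony: a quorum (parties 1 and 2) each re-share THEIR share
# to a brand-new 2-of-3 group at x = 4, 5, 6.
quorum = [1, 2]
lam = lagrange_at_zero(quorum)
sub = {i: share(old[i], 2, [4, 5, 6], [1000 + i]) for i in quorum}

# Each new party combines its sub-shares using the old quorum's weights.
new = {j: sum(lam[i] * sub[i][j] for i in quorum) % P for j in [4, 5, 6]}

# Any 2 of the new parties can still open the same secret.
print(reconstruct({4: new[4], 6: new[6]}))   # 424242
```

The new shares lie on a fresh polynomial whose constant term is still the original secret, so the old shares can be discarded and the public key stays valid.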
> traditional authentication ...
I wouldn't use TSS in that setup. Traditional auth + MFA is more than adequate. The better use case would be where you have a group that needs to demonstrate consensus (like governance for a programming language, multiple parties involved in signing an application release, or even an HOA that needs to vote on policies). In all of these, you'd take an M of N approach (rather than the simplified 2 of 2) for achieving quorum.
What I want to read is well-researched and deeply considered pieces that do a good job explaining concepts in a fresh way and help me learn something new. Sure, use AI to help get there, but if you haven’t done much research or haven’t thought about it yourself beyond the prompts… I don’t want to read it
If one person holds the signing key to do something critical in your system, they're both a single point of failure and a huge security risk all in one. If you distribute that key to, say, 5 different people, you've mitigated the single point of failure. But now you have 5 folks who can each potentially act unilaterally.
Using a 3 of 5 TSS setup, you've still mitigated the single point of failure (any one or even two folks can go offline and you can still operate) while also protecting against unilateral action. It's a mathematically-enforced version of the "two-man rule." Similar to the way Cloudflare's Red October tool used to work by splitting things between parties: https://blog.cloudflare.com/red-october-cloudflares-open-sou...
Or one that's checked into your version control (representing that it is your company's code that's running) and one that lives on the server (representing that it is a server your company controls).
Or to take your example - a key in the repo, a key from the dev, and a key from the build server.
> A compromised server no longer means a compromised key
Proper use of an HSM means that even the owner of the private key is not allowed to access it. You sign your messages within the secure context of the HSM. The key never leaves. It cannot become compromised if the system is configured correctly.
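That boundary is an API property, not just a hardware one. Here's a minimal sketch of the interface shape, using a class as a stand-in for the hardware boundary and HMAC as a stand-in for real asymmetric signing (both assumptions for brevity): callers can request signatures and verifications, but there is no call that returns the key.

```python
import hashlib
import hmac
import os

# Minimal sketch of the "key never leaves" property. A class is a weak
# stand-in for a real hardware boundary, and HMAC stands in for real
# asymmetric signing -- illustrative only.
class ToyHSM:
    def __init__(self):
        self.__key = os.urandom(32)   # generated inside; never exported

    def sign(self, message: bytes) -> bytes:
        # Signing happens "inside" the device; only the signature leaves.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), sig)

hsm = ToyHSM()
sig = hsm.sign(b"deploy v1.2.3")
print(hsm.verify(b"deploy v1.2.3", sig))   # True
print(hsm.verify(b"deploy v1.2.4", sig))   # False
```

A real HSM enforces this in tamper-resistant hardware rather than by convention, which is the whole point: even a fully compromised host can only ask for signatures, not exfiltrate the key.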