As far as I'm aware, homomorphic encryption can keep even a single bit safe, but maybe I missed something.
More like, "move a computation into a progressed, but still unknown, state".
This is one topic I have a very hard time with, I just don’t know enough math to really grok it.
It just seems crazy that a system could operate on encrypted data (which is effectively random noise from the server’s point of view) and return a result that is correctly calculated and encrypted for the client, despite never understanding the data at any point.
I sort of understand the theory (at a very simple level) but my brain doesn’t want to agree.
You have a 7-bit character representation (e.g. ASCII) and your encryption is to add 1 mod 128. E.g. 0 -> 1, 1 -> 2, ... 126 -> 127, 127 -> 0.
As it turns out, all your operations can be represented as adding or subtracting constants. You can now encrypt your data (+1), send it to a remote server, send all the adding and subtracting operations, pull back the processed data, decrypt the data (-1).
Of course, this example is neither useful encryption nor generally useful operation, but can be useful for grokking why it might be possible.
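The toy scheme above fits in a few lines of Python; this is just a sketch, with +1 mod 128 standing in for real encryption:

```python
# Toy "homomorphic" scheme: encryption is +1 mod 128, and any
# pipeline of add/subtract-a-constant steps commutes with it.
enc = lambda c: (c + 1) % 128
dec = lambda c: (c - 1) % 128

data = [ord(ch) for ch in "HAL"]
ciphertext = [enc(c) for c in data]              # client encrypts

processed = [(c + 1) % 128 for c in ciphertext]  # server applies +1, blind to the plaintext

result = "".join(chr(dec(c)) for c in processed)
print(result)  # "IBM" -- the +1 operation correctly applied to "HAL"
```

The server only ever saw shifted bytes, yet the client decrypts the correctly processed result.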
What we've done here is this:
(a * key) + (b * key) = (c * key)
The rules of elementary algebra allow us to divide out the key on both sides because of a few useful symmetries that addition and multiplication have. Namely, these two equations are always the same number:
(a + b) * key = (a * key) + (b * key)
This is known as the distributive property. Normally we talk about it applying to numbers being added and multiplied, but there are plenty of other mathematical structures and pairs of operations that do this too. In the language of abstract algebra, a number system whose two operations distribute like this (and where both operations can be inverted) is called a "field", and addition and multiplication over the real[0] numbers form just one example.
A simple example of a field that isn't the normal number system you're used to is a 'finite field'. To visualize one, imagine a number circle instead of a number line: we get a finite field by wrapping the line around at some prime[1] number p, so that counting past p - 1 loops back to 0. But this is still a field: addition and multiplication keep distributing.
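A quick brute-force check of the distributive law in a tiny finite field (p = 7 here), using nothing beyond Python's built-in arithmetic:

```python
# Distributivity still holds in a finite field: arithmetic mod a
# prime p behaves like ordinary algebra, just wrapped into a loop.
p = 7
for a in range(p):
    for b in range(p):
        for key in range(p):
            assert (a + b) * key % p == ((a * key) + (b * key)) % p
print("distributive law holds mod", p)
```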
It turns out cryptography loves finite fields, so a lot of these identities hold in various cryptosystems. If I encrypt some data with RSA, which is just a pair of modular exponentiations, multiplying that encrypted data will multiply the result when I decrypt it later on. In normal crypto, this is an attack we have to defend against, but in homomorphic crypto we deliberately design systems that allow manipulation of encrypted data like this, in ways we approve of.
[0] Also complex numbers.
[1] Yes, it has to be prime: with a composite modulus you get zero divisors (e.g. 2 * 3 ≡ 0 mod 6), so some nonzero numbers have no multiplicative inverse and the usual symmetries of algebra stop working.
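The RSA malleability described above can be shown with the classic textbook toy key (n = 61 * 53). This is unpadded RSA with absurdly small numbers, insecure by design, purely to illustrate the homomorphic property:

```python
# Toy RSA showing multiplicative homomorphism:
# Enc(a) * Enc(b) decrypts to a * b. Real RSA uses padding
# precisely to destroy this property; FHE schemes embrace it.
n, e, d = 3233, 17, 2753  # textbook toy key (61 * 53), not secure

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 6, 7
c = enc(a) * enc(b) % n   # "server" multiplies ciphertexts only
print(dec(c))             # 42 == a * b, computed without ever decrypting a or b
```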
Now, in practice, HaveIBeenPwned does the exact same thing with a k-anonymity scheme based on SHA-1 hash prefixes, which is way easier in practice and what most people actually deploy, but the Google thing is cool too.
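For comparison, here's a sketch of that k-anonymity idea with the server side simulated locally (the real Pwned Passwords API serves SHA-1 suffixes keyed by a 5-character hex prefix):

```python
import hashlib

def k_anon_query(password):
    """Split a SHA-1 digest into a short prefix (sent to the server)
    and the remaining suffix (kept on the client)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

# Client side: only the 5-character prefix ever leaves the machine.
prefix, suffix = k_anon_query("password")

# Server side (simulated): return every known-breached suffix
# sharing that prefix; the server never learns which one we wanted.
breached = {hashlib.sha1(p.encode()).hexdigest().upper()
            for p in ["password", "letmein", "hunter2"]}
candidates = [d[5:] for d in breached if d[:5] == prefix]

print(suffix in candidates)  # True -- "password" is in the breach set
```

The anonymity set is every password whose hash shares the prefix, which is much weaker than FHE's guarantee but vastly cheaper.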
This is a massive announcement for AI and use cases related to PII.
Zama uses TFHE, which allows any operation (e.g. comparisons) with unlimited depth.
So if you only need add/mul, BFV, BGV, and CKKS are good options; for anything else, you're better off with TFHE.
I think the real fix is secure enclaves, and those have proven to be difficult as well.
“Cheddar: A Swift Fully Homomorphic Encryption Library for CUDA GPUs” - https://arxiv.org/pdf/2407.13055
We were a little worried, but quickly discovered that they used Swift as an adjective not as a programming language.
[Disclosure: I work on the team responsible for the feature]
FTA: “Live Caller ID Lookup uses homomorphic encryption to send an encrypted query to a server that can provide information about a phone number without the server knowing the specific phone number in the request”
So, this would require either a distributed Secure Enclave or one on Apple’s server communicating with one on an Apple device (likely, certainly over time, with lots of different Apple devices for lots of different iCloud accounts).
That makes HE anything but Swift :(
But to return information on whether some number is spam, doesn't the number have to exist in plaintext or hashed form somewhere outside of the phone?
[Disclosure: I work on the team responsible for the feature]
You're on the right track with the idea of hashing -- I find it helpful to explain any fancy encryption scheme beginning with "if it were just hashing", then extend to "well this is a very fancy kind of hash", and <poof> now I kind of understand what's going on. Or at least it's no longer magic.
Also from an engineering point of view, using FHE requires a refactoring of flows and an inflexible commitment to all processing downstream. Without laws mandating it, do organizations have enough motivation to do that?
FHE is plausibly most useful when you trust the source of the client code but want to use the compute resource of an organisation you don't want to have to trust.
Uh... demonstrably yes? No "secure execution environment" is secure against a government wiretap order. FHE is.
[Disclosure: I work on the team responsible for the feature]
Basically, the server does not know; it just computes with every possible value, and the result turns out to be what the client was interested in.
What is the failure mode of FHE, and how does it recover?
It's important to understand this failure mode, imo.
The client one-hot-encodes the query: Enc(0), Enc(1), Enc(0). The server has 3 values: x, y, z. Now the server computes: Enc(0) * x + Enc(1) * y + Enc(0) * z == Enc(y). The client can decrypt Enc(y) and get the value y. The server received three ciphertexts, but does not know which of them was an encryption of zero or one, because the multiplications and additions that the server did never leak the underlying value.
This gives some intuition for how PIR works; actual schemes are more efficient.
[Disclosure: I work on the team responsible for the feature]
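One way to make that one-hot trick concrete is with Paillier encryption, which is additively homomorphic and lets the server multiply a ciphertext by a plaintext constant. Real PIR deployments use more efficient lattice-based schemes; the primes here are toy-sized, for illustration only:

```python
import math, random

# Toy Paillier cryptosystem (additively homomorphic). Tiny primes
# for illustration only -- real deployments use 2048-bit moduli.
p, q = 17, 19
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Client one-hot-encodes "give me index 1": Enc(0), Enc(1), Enc(0).
query = [enc(0), enc(1), enc(0)]
db = [42, 99, 7]  # server's plaintext values x, y, z

# Server: homomorphically compute sum(query[i] * db[i]).
# In Paillier, c^k = Enc(k*m) and c1*c2 = Enc(m1+m2).
resp = 1
for c, v in zip(query, db):
    resp = resp * pow(c, v, n2) % n2

print(dec(resp))  # 99 -- the value at index 1
```

The server does one uniform pass over the whole database and cannot tell which ciphertext hid the 1.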
I did actually get that job, but I found out that that interviewer actually said "no", I believe because he thought I was wrong about that.
[1] My usual disclaimer: It's not hard to find my work history, I don't hide it, but I politely ask that you do not post it here directly.
It didn't hold me back from the job either. I like to believe the interviewer looked it up later, but I never poked into my hiring packet.
[0] It was useful at the time to have a prefix sum primitive. Ignoring annotations, something like this:
def scan(f, items, x):
    return [x := f(x, item) for item in items]  # e.g. scan(operator.add, [1, 2, 3], 0) -> [1, 3, 6]

Our reviewers told us that machine learning on encrypted data was impossible. We had the citations and the working model to refute them. Very frustrating.
Then again I didn't test very much because they also wanted it to be the proof of work for a blockchain, a possibility that I didn't discount but also figured it'd be extremely hard and I wasn't the guy to do it.
There was ~3 minutes left in the interview, and they asked me a difficult l33t code concurrency question that was trivially answerable if you knew a specific, but lesser known, function in Apple's concurrency library. [1]
I said as much, TL;DR: "hmm I could do full leetcode that requires X, Y, and Z, and I might not have enough time to finish it, but there is a one-liner via a new API y'all got that I could do quick"
They said go ahead and write it, I did, then they insisted I was making up the function -- slapping the table and getting loud the second time they said it. Paired interviewer put a hand on their arm.
Looking back, that was not only a stark warning about the arbitrariness of interviews, but also that going from dropout waiter => founder => sold, then to Google, wasn't going to be all sunshine and moonbeams just because people were smart and worked in tech too. People are people, everywhere. (fwiw, Apple rejected w/"not a college grad, no bigco experience, come back in 3 years if you can hack it somewhere else". Took Google, stayed 7 years)
[1] https://developer.apple.com/documentation/dispatch/3191903-d...
The moment you have to explain yourself you've already lost.
No argument you make will change their mind.
They are just stupid and that will never change.
And never forget, these people have power over you.
You were literate in that domain. The interviewer wasn't. In a conversation among equals you'd just continue talking until the interviewer yielded (or revealed their narcissism). The other interviewers would then stand educated. You see this process happen all the time on (healthy) FOSS mailing lists.
Instead, you had to weigh the benefit of sharing your knowledge against the risk of getting in a pissing contest with someone who had some unspecified (but real!) amount of power over your hiring.
That's the problem with a power imbalance, and it generally makes humans feel shitty. It's also insidious-- in this case you still don't know if the interviewer said "no" because they misunderstood homomorphic encryption.
Plus it's a BigTechCo, so we know they understand why freely sharing knowledge is important-- hell, if we didn't do it, nearly none of them would have a business model!
I enjoy exposing myself to new-to-me opinions. Do you know a decent anarchist blog/vlog to dip my toes into this area?
Yeah, what actually happens is that both parties think they are right and keep yapping until someone "yields" by being so fed up that they don't want to argue anymore. Everyone else watching learns nothing.
In a FOSS mailing list, someone would hopefully just link to wikipedia.
No amount of arguing is going to resolve a dispute about definitions of terms.