I’ve built LLMTor, a tool that lets you access public LLMs like ChatGPT such that even the server operator cannot tell which prompt came from which user.
I was motivated by an old Sam Altman interview (https://x.com/rohanpaul_ai/status/1949502746492535282), where he mentioned that LLM providers are required to break user privacy when compelled (and the same access could probably be used for selling ads as well).
Local LLMs are the gold standard for privacy, but you lose access to frontier models and take on the overhead of self-hosting. With public LLMs like ChatGPT, the provider always sees the plaintext prompts and responses.
So I settled on a middle ground, where I break the link between the user identity and the prompt contents.
LLMTor sits as a proxy between users and upstream LLM providers. It uses blind RSA signatures (RFC 9474) to issue tokens that can later be redeemed for LLM access anonymously over Tor.
Here’s an interactive demo of the protocol: https://api.llmtor.com/demo

TL;DR:

1. User buys credits and obtains tokens signed via blind RSA (identity known to server)
2. The server cannot link a signed token back to the user (blind-signature unlinkability)
3. User redeems a token + prompt over Tor (identity unknown to server)
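The blind-signing step can be sketched with textbook RSA blinding. This is illustration only: RFC 9474 (RSABSSA) additionally applies PSS message encoding, real key sizes, and careful serialization, and all names below are hypothetical rather than taken from the LLMTor codebase.

```python
# Textbook RSA blind signature: the server signs a token it cannot read,
# and later cannot link the unblinded signature back to the signing request.
import hashlib
import math
import secrets

# --- Toy RSA keypair (server-side; far too small for real use) ---
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                    # private exponent

# --- Client: hash the token and blind it ---
token = b"credit-token"                # hypothetical token value
m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n

while True:
    r = secrets.randbelow(n - 2) + 2   # random blinding factor
    if math.gcd(r, n) == 1:            # must be invertible mod n
        break
blinded = (m * pow(r, e, n)) % n       # the server only ever sees this

# --- Server: sign blindly (user identity known, token contents hidden) ---
blind_sig = pow(blinded, d, n)         # (m * r^e)^d = m^d * r  (mod n)

# --- Client: unblind; the result is a valid signature on m ---
sig = (blind_sig * pow(r, -1, n)) % n  # m^d * r * r^-1 = m^d  (mod n)

# --- Anyone: verify with the public key at redemption time ---
assert pow(sig, e, n) == m
```

Unlinkability comes from `r`: the server saw only `blinded` and `blind_sig`, and without knowing `r` it cannot match the redeemed `sig` to any particular signing request.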
Links

Website: https://llmtor.com
GitHub: https://github.com/prince776/LLM-Tor
Whitepaper: https://llmtor.com/whitepaper.pdf
Would love feedback on the protocol, implementation, or anything else.