We've been building a middleware layer that acts like a firewall for LLMs: it sits between the user and the model (OpenAI, Claude, Gemini, etc.) and intercepts prompts and responses in real time.
It blocks prompt injection, flags hallucinations, masks PII, and adds logging + metadata tagging for compliance and audit.
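To make the shape of it concrete, here's a minimal sketch of the intercept flow: regex-based PII masking on both sides of the model call, plus an audit record per exchange. The patterns and the `call_model` callback are illustrative stand-ins, not our production implementation.

```python
import re
import time
import uuid

# Illustrative patterns only -- real PII detection needs much more than two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text):
    """Replace each detected PII span with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

def firewall(prompt, call_model):
    """Mask the prompt, forward it, mask the response, and build an audit record."""
    record = {"id": str(uuid.uuid4()), "ts": time.time()}
    safe_prompt = mask_pii(prompt)
    record["prompt"] = safe_prompt
    response = call_model(safe_prompt)           # call_model is any LLM client callable
    record["response"] = mask_pii(response)
    # In production this record would go to an append-only audit store.
    return record["response"], record
```

Usage looks like `response, audit = firewall(user_input, my_llm_client)` -- the point being that the model only ever sees the masked prompt, and the audit record captures exactly what was sent and returned.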
But we’re hitting the classic startup blind spot: we don’t want to build in a vacuum.
What do you feel is still broken or missing when it comes to:
- Securing LLM prompts/responses?
- Making GenAI safe for enterprise use?
- Auditing what the AI actually said or saw?
We’d love your feedback — especially if you’re working on or thinking about GenAI in production settings.
Thanks!