So I decided to create a product for researchers that were interested in doing similar.
How It Works Model: 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completion. Access: ChatGPT-like interface at pingu.audn.ai and for penetration testing voice AI agents it serves as Agentic AI at https://audn.ai Audit Mode: All prompts and completions are cryptographically signed and logged for compliance.
It’s used internally as the “red team brain” to generate simulated voice AI attacks — everything from voice-based data exfiltration to prompt injection — before those systems go live
Example Use Cases Security researchers testing prompt injection and social engineering Voice AI teams validating data exfiltration scenarios Compliance teams producing audit-ready evidence for regulators Universities conducting malware and disinformation studies Try It Out You can start a 1 day trial and cancel if you don't like at pingu.audn.ai . Example chat for a DDOS attack script generation in python: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1... (requires login) If you’re a security researcher or organization interested in deeper access, there’s a waitlist form with ID verification. https://audn.ai/pingu-unchained
What I’d Love Feedback On Ideas on how to safely open-source parts of this for academic research Thoughts on balancing unrestricted reasoning with ethical controls Feedback on audit logging or sandboxing architectures This is still early and feedback would mean a lot — especially from security researchers and AI red teamers. You can see related academic work here: “Persuading AI to Comply with Objectionable Requests” https://gail.wharton.upenn.edu/research-and-insights/call-me...
https://www.anthropic.com/research/small-samples-poison
Thanks, Oz (Ozgur Ozkan) ozgur@audn.ai Founder, Audn.ai