FailCore is a small execution-time safety runtime for AI agents.
Instead of relying on better prompts or planning, it enforces security at the Python execution boundary: blocking SSRF, private-network access, and unsafe filesystem writes before a tool call can take effect.
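To give a feel for the idea (this is a minimal sketch of an execution-boundary check, not FailCore's actual API — the function names here are hypothetical), a guard can resolve a tool's target URL and refuse the call before any network side-effect happens:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_private_target(url: str) -> bool:
    """Resolve the URL's host and check whether any address it maps to
    is private, loopback, or link-local (common SSRF targets)."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL: fail closed
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable host: fail closed
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False

def guarded_fetch(url: str, fetch):
    """Run `fetch` only after the target passes the check, so the
    policy decision happens before the side-effect, not after."""
    if is_private_target(url):
        raise PermissionError(f"blocked: {url} resolves to a private address")
    return fetch(url)
```

The key property is that the check runs at call time on the resolved address, so a prompt-level trick that smuggles `http://127.0.0.1/...` or an internal hostname into a tool argument is caught regardless of how the plan was generated.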
I added a short live demo GIF in the README showing it blocking a real tool-use attack, along with a DESIGN.md that explains the architecture and trade-offs.
GitHub: https://github.com/zi-ling/failcore
Design notes: https://github.com/zi-ling/failcore/blob/main/DESIGN.md
Feedback welcome — especially thoughts on runtime hooking vs. kernel-level approaches like eBPF.