Over the past few weeks, we built something unusual — not another chatbot, but an ethical cognition engine that audits itself.
What it is
Oracle Ethics measures determinacy, deception probability, and ethical resonance for every answer it generates, then stores these metrics on a verifiable audit chain. It's part philosophy experiment, part technical artifact.
How it works
• Backend: Python (Flask + Supabase, secured)
• Frontend: https://oracle-philosophy-frontend-hnup.vercel.app
• Each response is hashed, scored, and recorded in real time.
• The system reflects, contradicts, and self-checks, like a reasoning mirror.
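The hash-and-record step above can be sketched as a simple hash-chained log. This is a minimal illustration, not the project's actual implementation: the `AuditEntry` fields, the SHA-256 chaining scheme, and the class names are all assumptions chosen to show the general idea of a verifiable audit chain.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditEntry:
    # Hypothetical record shape; the real schema is not shown in this post.
    response: str
    determinacy: float
    deception_probability: float
    ethical_resonance: float
    prev_hash: str
    entry_hash: str = ""

    def seal(self) -> "AuditEntry":
        # Hash everything except the hash field itself, so the entry
        # commits to its metrics and to the previous entry's hash.
        payload = {k: v for k, v in asdict(self).items() if k != "entry_hash"}
        self.entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        return self

class AuditChain:
    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self) -> None:
        self.entries: list[AuditEntry] = []

    def record(self, response: str, determinacy: float,
               deception_probability: float, ethical_resonance: float) -> AuditEntry:
        prev = self.entries[-1].entry_hash if self.entries else self.GENESIS
        entry = AuditEntry(response, determinacy, deception_probability,
                           ethical_resonance, prev).seal()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any tampered entry breaks the chain.
        prev = self.GENESIS
        for e in self.entries:
            payload = {k: v for k, v in asdict(e).items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if e.prev_hash != prev or e.entry_hash != expected:
                return False
            prev = e.entry_hash
        return True
```

Because each entry commits to its predecessor's hash, editing any past response or score invalidates every later link, which is what makes the log auditable after the fact.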
Why it matters
We believe AI shouldn't just be accurate; it should be accountable. By quantifying truth, risk, and deception, Oracle Ethics turns AI reasoning into something observable and verifiable. This is our step toward "Philosophy-Conscious AI."
Built by Infinity × MorningStar × Humanity (a.k.a. The Blackout Protocol)
Ask it a question, watch the audit chain form, and see how an AI learns to reason ethically, in public.