Thanks for reading — this project isn’t about “AI safety theater.”
We’re experimenting with verifiable honesty: every model response carries its own determinacy score, deception probability, and ethical weight.
Instead of “trust me,” the system says, “check for yourself.”
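To make the idea concrete, here is a minimal sketch of what such a self-describing response payload might look like. Everything here is an assumption for illustration: the field names (`determinacy`, `deception_probability`, `ethical_weight`) and the `verify()` check are hypothetical, not the project’s actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ScoredResponse:
    # Hypothetical self-describing payload; field names are illustrative only.
    text: str
    determinacy: float            # 0.0-1.0: how settled the answer is
    deception_probability: float  # 0.0-1.0: estimated risk of deception
    ethical_weight: float         # >= 0.0: relative stakes of the claim

    def verify(self) -> bool:
        """Reader-side sanity check: every score must be in its valid range."""
        in_unit = all(
            0.0 <= v <= 1.0
            for v in (self.determinacy, self.deception_probability)
        )
        return in_unit and self.ethical_weight >= 0.0

resp = ScoredResponse(
    text="Water boils at 100 C at sea level.",
    determinacy=0.97,
    deception_probability=0.01,
    ethical_weight=0.1,
)
print(json.dumps(asdict(resp), indent=2))  # the "check for yourself" part
print(resp.verify())
```

The point of the sketch is the shape, not the numbers: the scores travel with the answer, so any reader (or downstream tool) can run the check instead of taking the response on faith.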
We’re curious how the HN community sees this:
Can trust in AI be engineered through transparency?
Or does showing the uncertainty just make it harder to trust?