Promise theory isn't really about computation. It is about voluntary cooperation among agents. An agent, for the purposes of this semi-formal language, is defined as something that can make its own promises -- that is, communicate intent to a set of observers, some of whom might also be agents.
Promises are not obligations, and as such, the intent to do something is not the obligation to do it. There is no guarantee that the intent will be carried out at all. Agents act on a best-effort basis. Sometimes external circumstances trigger a failure. Sometimes the agent is simply unable to keep the promise well. And sometimes an agent may deceive, deliberately communicating one intent while intending to do something else.
How well an agent keeps its promises is an assessment -- crucially, an assessment is not a global determination of an agent's ability to keep a promise. Each agent makes its own subjective assessment of how well another agent keeps its promises. Understood in this way, promise theory can model both centralized and decentralized systems of voluntary cooperation.
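The vocabulary above -- agents communicating intent to observers, and each observer forming its own local judgement -- can be sketched in code. This is a minimal illustration under assumed names (`Agent`, `promise`, `assess` are all hypothetical, not a standard API), not a definitive formalization:

```python
# Hypothetical sketch of promise-theory vocabulary; class and method
# names are illustrative assumptions, not an established library.

class Agent:
    """Something that can make its own promises: communicate intent."""

    def __init__(self, name):
        self.name = name
        self.promises = []     # intents this agent has communicated
        self.assessments = {}  # (promiser, intent) -> this agent's own score

    def promise(self, intent, observers):
        """Communicate an intent to a set of observers. No obligation
        or guarantee of execution is created by doing so."""
        self.promises.append(intent)
        for obs in observers:
            obs.observe(self, intent)

    def observe(self, promiser, intent):
        # The observer merely records the promise so it can later
        # assess, for itself, how well the promise was kept.
        self.assessments[(promiser.name, intent)] = None

    def assess(self, promiser, intent, score):
        """Record a purely local, subjective judgement of the promiser."""
        self.assessments[(promiser.name, intent)] = score


alice, bob, carol = Agent("alice"), Agent("bob"), Agent("carol")
alice.promise("respond to requests", observers=[bob, carol])

# Each observer assesses the same promise independently; the scores
# need not agree, because assessment is subjective, not global.
bob.assess(alice, "respond to requests", 0.9)
carol.assess(alice, "respond to requests", 0.4)
```

The key design point is that `assessments` lives inside each agent: there is no shared ledger of trustworthiness, which is what lets the same model describe both centralized and decentralized cooperation.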