In turn, you have a concrete, model-checked implementation you can discuss or use as a basis for understanding where their proposed idea, or yours, holds or fails.
Note that a model can specify the wrong correctness criterion in practice, so you may have a "proof" that works, yet the proposition proven is not the one you actually care about.
In general, when working on distributed systems, we usually want some formal criterion of correctness. The failure modes of such systems are hard to get right, and retractions of claims are common. Sadly, we have too little literature on how to onboard people onto model checkers such as TLA+, proof assistants such as Coq, Isabelle/HOL, or NuPRL, and QuickCheck-style systems such as Haskell or Erlang QuickCheck, Clojure's core.spec (I believe), and so on.
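To make the QuickCheck idea concrete for readers who have not seen it: a property-based test states a proposition over all inputs and then searches for a counterexample by generating random cases. The sketch below is a minimal hand-rolled version in Python (not the real QuickCheck library, and the property and generator are illustrative assumptions); real tools add shrinking of counterexamples and far richer generators.

```python
import random

def reverse_twice_is_identity(xs):
    # A true property: reversing a list twice yields the original list.
    return list(reversed(list(reversed(xs)))) == xs

def every_list_is_sorted(xs):
    # A deliberately false property, to show counterexample detection.
    return sorted(xs) == xs

def quickcheck(prop, trials=1000, seed=0):
    """Generate random integer lists and test `prop` on each.

    Returns None if the property held on every generated input,
    or the first counterexample found.  (A real QuickCheck would
    also shrink the counterexample to a minimal failing case.)
    """
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if not prop(xs):
            return xs  # counterexample found
    return None  # property survived all trials

print(quickcheck(reverse_twice_is_identity))  # expected: None
print(quickcheck(every_list_is_sorted))       # some unsorted list
```

Note the epistemic status matches the text above: a passing run is evidence, not proof, and the property itself may encode the wrong correctness criterion, exactly as with a model checker's specification.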