Anyway, I found this paper...
http://www.ieee-security.org/TC/SP2015/papers-archived/6949a...
...that reminds me of older, high-assurance designs. The classic way to do it is the so-called abstract or interacting state machine models. Each component is a state machine where every success or failure state that can occur is known, along with an argument that security is maintained. Then you compose these in a semi-functional way to describe the overall system. Seems the miTLS people did something similar for theirs, which they call "composite state machines." The result was a clean implementation and verification of what gets really messy in other protocol engines. Plus new techniques for handling that, of course.
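To make the idea concrete, here's a minimal sketch of one component as an explicit state machine, not the actual miTLS design (theirs is verified F#/F*), just the general shape: a transition table where anything off the table is an explicit failure rather than a silently ignored edge case. All the state and message names are made up for illustration.

```python
class StateMachine:
    """A toy protocol component with an explicit transition table."""

    def __init__(self, name, initial, transitions):
        # transitions maps (current_state, message) -> next_state;
        # any pair not listed here is an illegal transition.
        self.name = name
        self.state = initial
        self.transitions = transitions

    def step(self, message):
        key = (self.state, message)
        if key not in self.transitions:
            # Every unexpected message is a known failure state,
            # not an ignored edge case.
            raise ValueError(
                f"{self.name}: illegal {message!r} in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state


# A toy handshake component: hello -> key exchange -> finished.
handshake = StateMachine(
    "handshake",
    "start",
    {
        ("start", "client_hello"): "negotiated",
        ("negotiated", "key_exchange"): "keyed",
        ("keyed", "finished"): "complete",
    },
)

handshake.step("client_hello")
handshake.step("key_exchange")
print(handshake.step("finished"))  # -> complete
```

Composing the overall system then amounts to wiring several of these together so that one machine's output messages drive another's inputs, with the same "off the table means reject" discipline at every boundary. The messy bugs the paper found were largely implementations that accepted messages their state didn't allow.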
Really good stuff. Worth extending and improving in new projects.
France, especially INRIA, seems to be in the lead on verified software with a real-world focus. It will be great when more groups elsewhere follow suit.
(I'm not a crypto guy, obviously...)
Edit: ah, but only to customers paying for support.
Not sure how applicable it is now, outside of being a checklist item for a situation that could repeat in a new client. Still, the paper said they used it on real systems. Was there something specific you were thinking of that's not covered by that?
I don't think the paper was trying to speak to truncation in general. Truncation weakens the crypto by definition (fewer bits => less security). It is easier (cheaper) to find a collision against a truncated hash. That doesn't make it Dangerous or Safe; it's just quantitatively reduced.
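You can see the "fewer bits => cheaper collisions" point directly with a birthday search. This sketch (my own illustration, not from the paper) truncates SHA-256 to 2 bytes; by the birthday bound you expect a collision after roughly 2^8 tries, versus the 2^128 work a full-width collision would take.

```python
import hashlib


def truncated_sha256(data, nbytes):
    """First nbytes of SHA-256 -- a deliberately weakened hash."""
    return hashlib.sha256(data).digest()[:nbytes]


def find_collision(nbytes):
    """Birthday search: store seen digests until two distinct
    messages map to the same truncated value."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = truncated_sha256(msg, nbytes)
        if h in seen:
            return seen[h], msg  # two messages, one truncated digest
        seen[h] = msg
        i += 1


a, b = find_collision(2)  # 16-bit truncation: collides almost instantly
assert a != b
assert truncated_sha256(a, 2) == truncated_sha256(b, 2)
```

Bump `nbytes` up and the search time grows as 2^(4*nbytes): same hash, same code, strictly more security per bit kept.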
If the authors are trying to establish anything, it seems to be, "Yes, collisions do matter!"