Safety-critical code for aviation co-evolved with the use of digital systems; the first few generations were directly inspired by the analog computers they replaced, and many early systems kept analog computers as fallbacks for when the digital systems failed. These systems were of low enough complexity that teams were small and quality was maintained mostly through discipline. As complexity, team sizes, and criticality all went up (losing those analog fallbacks), people died; so regulations and guidelines were written to try to capture best practices learned both within the domain and from the developing fields of software and systems engineering. Every once in a while a bunch more people would die, we'd learn a bit more, and we'd add more processes to control a new class of defect.

The big philosophical question is how much of a washout filter you apply to process accumulation. If you only ever add, you end up with mitigations for almost every class of defect we've discovered so far, but you also end up fossilized; if you allow processes to age out, you open yourself up to making the same mistakes again. What makes the decision less trivial is that the rest of software engineering has evolved (slowly, and with crazy priorities) over the same period, so some of the classes of defect that certain processes were put in place to eliminate are now prevented in practice by more modern tooling and approaches. We now have lockstep processors, and MPUs, and verified compilers, and static analysis tools, and formal verification (within limited areas)... all of which add more process and time, but offer the potential to retire older processes that relied on humans, rather than tooling, to provide equivalent assurances.
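To make that last point concrete, here's a minimal sketch of the kind of invariant that once had to be enforced by review checklists but can now be enforced by a compiler. The names (`Actuator`, `arm`, `command`) are entirely hypothetical, and the typestate pattern shown is just one illustration: commanding an actuator that hasn't been armed is a compile error rather than a runtime defect a reviewer has to catch.

```rust
use std::marker::PhantomData;

// Hypothetical example: marker types encoding the actuator's mode.
// These exist only at compile time; PhantomData keeps them zero-sized.
struct Disarmed;
struct Armed;

struct Actuator<State> {
    channel: u8,
    _state: PhantomData<State>,
}

impl Actuator<Disarmed> {
    fn new(channel: u8) -> Self {
        Actuator { channel, _state: PhantomData }
    }

    // Arming consumes the disarmed handle, so no code path can keep
    // using an actuator the type system still considers disarmed.
    fn arm(self) -> Actuator<Armed> {
        Actuator { channel: self.channel, _state: PhantomData }
    }
}

impl Actuator<Armed> {
    // `command` only exists on the Armed type, so calling it on a
    // disarmed actuator fails at compile time, not in flight.
    fn command(&self, position: f32) {
        println!("channel {}: commanding {}", self.channel, position);
    }

    fn disarm(self) -> Actuator<Disarmed> {
        Actuator { channel: self.channel, _state: PhantomData }
    }
}

fn main() {
    let actuator = Actuator::new(3);
    // actuator.command(0.5); // compile error: no `command` on Actuator<Disarmed>
    let actuator = actuator.arm();
    actuator.command(0.5);
    let _actuator = actuator.disarm();
}
```

The point isn't this particular pattern; it's that the assurance moves from a human process (review checklists, sign-offs) into tooling, which is exactly the trade described above.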