It seems to be a helpful framework to work from.
I thought formal methods, or at least advanced static analysis, would be mentioned before I clicked on the URL.
Source: I worked on anesthesia machines, cath lab equipment, and patient monitoring devices at big companies in the US.
We tried using static verification with CodeContracts at the beginning of the project (2011). But it led us to litter the code with attributes that existed solely for the static verifier, and it didn't spot many issues. The tool (a binary rewriter) was very slow (it tripled compile time) and killed developer productivity and morale. It just wasn't worth it. We abandoned static verification, and later we abandoned CodeContracts altogether. Maybe it works better with other tools or languages than C#, but my experience with it was bad and I don't recommend it. With a statically typed language, maybe the best static verification tool is just the compiler.
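That last point, leaning on the compiler rather than a separate verifier, can be sketched in any statically typed language. A minimal Java sketch (the `Pressure` type and its names are hypothetical, not from the original project): instead of a contract-style precondition checked by an external analyzer, the invariant is enforced at the only construction point, so the type system guarantees it everywhere else.

```java
// Instead of a contract annotation like "requires mmHg >= 0" checked by a
// separate verifier, make invalid values unrepresentable: the private
// constructor forces all construction through the validating factory.
final class Pressure {
    private final double mmHg;

    private Pressure(double mmHg) {
        this.mmHg = mmHg;
    }

    // The only way to obtain a Pressure, so every Pressure instance
    // in the program is known to be non-negative.
    static Pressure ofMmHg(double mmHg) {
        if (mmHg < 0) {
            throw new IllegalArgumentException("pressure must be non-negative: " + mmHg);
        }
        return new Pressure(mmHg);
    }

    double mmHg() {
        return mmHg;
    }
}
```

Any function taking a `Pressure` parameter then gets the precondition checked by the compiler's type checker for free, with no rewriting pass or extra attributes.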
Is that any object or software that comes in contact with patients?
I worked producing software that segmented images produced by a medical imaging device. The software would be built in, but it was the kind of thing one could also have done by exporting the images to Photoshop.
So the question of where the boundary between normal software and medical software lies is going to be rather important if medical software winds up constrained by NASA-level procedures.
(Obviously, implanted devices need very particular standards but other devices raise questions).
http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidanc...
Therac-25 is an example of what can happen when medical devices are not held to a high standard.
You talk a lot about testing for specified requirements, but not so much about looking for undesired behavior outside the requirements. That's where the fallacy bites you. It's especially pernicious when you start to firm up a working prototype or are inheriting a legacy production release with a history of good behavior, but which is about to change execution context in some way. Changing the context means hitting unexplored territory and there be dragons.
Of course, you're a professional tester (as am I), and I know this is obvious for you. But for the average PHB or engineer-driven organization trying to prioritize, it's important to be pretty clear about this:
Untested code works great in the 95% case--this is one of the big reasons QA has a credibility problem sometimes, because they're spending $$$ to confirm it already works. It's the 5% case you need to worry about. It can kill your product or, in your case, the patient.
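One cheap way to probe that 5% case, the territory outside the specified requirements, is randomized-input testing against a general invariant instead of only the specified examples. A sketch in Java (the `clampPercent` function is hypothetical, standing in for any routine whose spec only covers the "normal" input range):

```java
import java.util.Random;

class OutsideSpecProbe {
    // Hypothetical function under test: the requirements may only
    // discuss inputs 0..100, but callers can pass anything.
    static int clampPercent(int value) {
        return Math.max(0, Math.min(100, value));
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so failures are reproducible
        for (int i = 0; i < 10_000; i++) {
            int input = rng.nextInt(); // deliberately far outside the specified range
            int out = clampPercent(input);
            // A general invariant, not a specified example:
            if (out < 0 || out > 100) {
                throw new AssertionError("invariant violated for input " + input);
            }
        }
    }
}
```

Requirements-based test cases confirm the 95% that already works; this style of test is aimed squarely at the unexplored inputs where the dragons live.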
Yes, it was a throwaway :-) But I mean "untested" as not tested at all, by any means; code that has gone through automated testing (unit tests and the like) has been tested quite a bit, so when QA receives it, the defect rate is not so overwhelming (5% or so, as you say). I maintain that the code I write almost never works if I don't write automated tests for it (which is why I always write tests now :-))
You're right about the importance of "looking for undesired behavior outside the requirements". I'll think about an update of the article about manual tests. Thanks for the input!
Those spreadsheets meant that either the code was good when you got it or you did a crappy job testing it; either way, it was already fine before being tested. Regression testing in particular is an exercise in generating "PASS" results, and historically QA practice has been dominated by regression testing.
Luckily, SW practices have changed enough that (as you say) most code is at least minimally tested by the developer and, maybe more importantly, some minimal level of unit testing is expected now as a basic professional standard. Hopefully we never move backwards again.
We applied agile methods to developing safety systems for aircraft (something of a pilot project within the company, looking for a "better way"). It was quite effective, focusing on iterative development, small changes integrated frequently, frequent delivery to the customer for verification/validation, etc.