I'm not trying to excuse the company in the article or the company that I work for. And I do not work for the company in the article. I just wanted to point out that I do see how this can happen very easily and repeatedly.
One such system is Disti (http://www.disti.com/)
In the automotive field, our software is MISRA compliant, we run static analysis (Klocwork - http://www.klocwork.com/), and we follow a very strict set of guidelines outlined in ISO 25119 and ISO 26262 for the construction and agricultural markets. Think self-driving tractors and combines. For example: a tractor traveling down a field with a combine following a few rows over, separating and chopping the crop into a catcher, all done with one person driving.
This shit can't happen where I work. Every component on our circuit boards has an MTTFd (mean time to dangerous failure) of 40 years. Hardware watchdogs can kill the system if software goes awry.
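For anyone who hasn't worked with them, the idea is simple: the software must periodically "kick" an external watchdog timer, and if the kick stops arriving (hang, overrun, failed self-check), the watchdog hardware forces a reset into a safe state. Here's a minimal sketch of the idiom in C - the register address, the magic key, and the self-checks are all hypothetical placeholders, not from any real product:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical memory-mapped watchdog kick register and key. */
    #define WDT_KICK_REG (*(volatile uint32_t *)0x40001000u)
    #define WDT_KICK_KEY 0xA5A5A5A5u

    static void do_control_step(void) { /* one iteration of the control loop */ }

    static bool self_checks_pass(void) {
        /* RAM tests, sensor plausibility checks, task liveness flags, etc. */
        return true; /* placeholder */
    }

    int main(void) {
        for (;;) {
            do_control_step();
            if (self_checks_pass()) {
                WDT_KICK_REG = WDT_KICK_KEY; /* kick only while healthy */
            }
            /* If we hang, run too slowly, or a self-check fails, the kick
             * stops arriving and the watchdog hardware resets us into a
             * safe state. */
        }
    }

The key property is that the watchdog is hardware: no software fault, however creative, can stop it from firing once the kicks stop.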
Software is written to a readiness level (SRL-1, SRL-2, etc.), with unit tests, peer reviews, and so on. Functional safety in medical devices is covered under 510(k) (http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedur...)
I find it amazingly short-sighted that antivirus software is even allowed on a medical device to begin with. I can't even imagine how this system passed even the easiest audit for software readiness.
How is it you "go hoarse having the same conversations?" Do you not have to meet FDA compliance criteria? Are you in the US?
I didn't read the whole article, so I'm assuming this happened in the US. For me, we sell autonomous vehicles in the European markets, where functional safety requirements for vehicles seem to be a bit more aggressive right now. Not sure about medical devices.
Well... Then you should consider yourself blessed to have never had to deal with the bureaucracy of a hospital IT department and administrative staff.
Who owns the medical device? Who paid for it? If it's a glorified Windows machine and it's attaching itself to a hospital's WiFi network... Who has to use this machine? Physicians, surgeons, anesthesiologists, radiologists, other specialists, nurses, staff? All of them need to be trained on its usage, no doubt. They don't get that training in schooling. Who provides it? This and a million other things stack up. So, well, I mean it can start to make sense how these things end up with random AV software installed on them, right?
> How is it you "go hoarse having the same conversations?" Do you not have to meet FDA compliance criteria? Are you in the US?
Yes, we are. Yes, we do "have to meet FDA compliance." I couldn't tell you what "have to meet" actually means in practice, and I work here. Of course, I'm just an engineer. We have legal, executive, and other staff for those matters. I'm sorry, I'm not trying to be an asshole... I'm just trying to be honest about where I find myself in this situation.
If you are only making these warnings verbally, you might want to consider emailing your immediate manager with a list of concerns. Make it as neutral as possible and ask for guidance on how they want to address the issues. But if it's on the mail server, it will be good for discovery if the worst happens, and frankly given lives are at stake you probably need to show, in writing, that you were attempting to have the issues addressed.
Who knows? That might actually get traction. Might even save someone's life!
You are not an engineer. This is a protected term in the US and other countries. If you were a professional engineer, you would be bound by a legal and moral framework preventing you from doing work on unsafe medical equipment.
There is a good argument that there should be a software equivalent of protected engineer status for this kind of work. This kind of story should be a wake-up call. I personally had no idea that critical medical equipment would be running on MS Windows...
You have to deal with the FDA pretty much regardless of where you're based, if you want any kind of market for your medical device. A lot of countries define compliance as whatever is good enough for the FDA.
As for FDA rules for software... what the FDA wants is a paper trail.
Unfortunately, the only thing that can solve the problem of an apathetic board and executive management (who only see dollar signs) is the actuality, or realistic possibility, of significant financial loss, or loss of their personal freedom (prison), due to the negligence of the system. And a $10M fine for a fault in something that you make $100M off of is not significant. That's $90M profit in their eyes. And they probably get to write it off.
Even more unfortunate is that, when this happens, the "engineers responsible" will be fired, while the executives resign with a nice golden parachute and go on to do the same thing somewhere else.
But then you have the company that does do it right, spends the time and the money to make a truly redundant, fault-tolerant system. But they come in at a price point 20% higher than their competitor, who doesn't. Which company survives and which doesn't?
Sad, but unfortunately that's the way it is. I don't know a practical solution either.
We've been close to undergoing "major" scrutiny from the FDA before (as it was sold to me, it was A Big Deal). I, personally, just a lowly and underpaid engineer, have saved executive staff from having to sign their names on that noose. I had a manager once who seemed to want to push it that far, to stand idly by while the walls fell down around us. I, unknowingly at the time, prevented it from happening because I was trying to help our customers. I don't regret that decision; actual patients shouldn't have to suffer because of a management team's ineptitude. I do think about it often, though. I understand this is nebulous, and I'm sorry for that. This is the reality, though.
I guess that's the thing that really gets me: the FDA. We sell FDA-approved devices. Where the fuck is the FDA? We send them paperwork and they are happy. I can only form the opinion that the FDA is ill-prepared to handle this situation (the actual situation, the "the medical devices industry is a fucking train wreck waiting to happen" situation), and especially ill-prepared to handle it at scale. Audits are cursory and, almost as a rule, non-technical. I suppose it'll take a Toyota-level incident to bring about change.
I have seen SSAE 16-audited companies that haven't patched anything in years, and FDIC-examined institutions with ATMs still running OS/2 Warp (which is actually probably more secure than the ones running XP with no updates installed. Ever.)
I once found the management interface of a SAN with a public IP address directly on the device, no firewall rules of any sort, and the device still had the default username/password. It hadn't been patched or rebooted in over 2 years.
More shocking is that a review of the logs didn't show any successful unauthorized logins. Of course, an attacker could have cleaned up after themselves, but further investigation was outside the scope of my engagement. (They didn't want to know. They were happy to report that, despite the oversight, there was no indication that PHI had been accessed by unauthorized people. Their conclusion, not mine.)
But as I've said before - I really hope you have written down your concerns to someone in management. If it gets to the point where negligence takes out the company, there's going to be an attempt to make someone a scapegoat. Depending on your role in the company, you don't want to be held personally liable for the incompetence and ruthlessness of management...
When regulation becomes more about permission than proficiency, you'll get corruption instead of competence.
Or developers refuse to build software without safety built in.
If they can't hire anyone to build their unsafe systems, they'll have to start building safe software.
Let the market work for you.
If every developer on the planet suddenly had a pang of conscience, then something like this would work.
Fortunately I have never found myself in such a position, but I have seen it many many times.
A fine being tax deductible does not mean zero cost to the company; it means the profit is reduced before taxes are computed, i.e. the actual cost is reduced by the marginal tax rate. A tax credit would mean zero cost.
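To put rough numbers on the grandparent's example (a 21% marginal rate assumed purely for illustration):

    after-tax cost = fine x (1 - marginal tax rate)
                   = $10M x (1 - 0.21)
                   = $7.9M

So the deduction softens the blow, but a deductible fine is nowhere near free.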
Presumably developers are the ones estimating how long things take. (If they're not, you have even bigger problems and I'm sorry.) The time to make it safe should automatically be included in those estimates.
Moreover, making it safe shouldn't be a separate part of the process. It should just be part of how you write software. It's either safe or it doesn't exist at all. (Compare this to how organizations like Google deal with concurrency: it's built in from the start.)
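As a toy illustration of "it's either safe or it doesn't exist at all" (everything here - names, limits, the infusion-pump framing - is hypothetical, not any real device's API): make the checked path the only path, so there is no unchecked variant for a schedule crunch to fall back on.

    #include <stdio.h>

    /* Hypothetical safety envelope for an infusion pump's flow rate. */
    #define RATE_MIN_ML_PER_H   0.1
    #define RATE_MAX_ML_PER_H 999.0

    typedef enum { SET_OK, SET_OUT_OF_RANGE } set_result;

    static double current_rate = RATE_MIN_ML_PER_H;

    /* The only setter that exists; invalid requests fail closed,
     * leaving the last known-good rate in place. */
    static set_result pump_set_rate(double ml_per_h) {
        if (ml_per_h < RATE_MIN_ML_PER_H || ml_per_h > RATE_MAX_ML_PER_H)
            return SET_OUT_OF_RANGE;
        current_rate = ml_per_h;
        return SET_OK;
    }

    int main(void) {
        pump_set_rate(50.0);    /* accepted */
        pump_set_rate(5000.0);  /* rejected; rate stays at 50.0 */
        printf("rate = %.1f ml/h\n", current_rate);
        return 0;
    }

There's no "fast" unchecked setter to cut; the range check isn't a line item on the estimate, it's the function.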
A reputable engineer wouldn't design and build a bridge which might collapse. A developer shouldn't build software which puts lives at risk, regardless of management pressure.
If they refuse to relent, there are plenty of jobs where safety isn't critical.
This is not meant as a slight: I think you're grossly unfamiliar with software development outside of engineering-driven companies.
It's pretty much a guarantee that product managers are deciding these estimates. They might confirm with the developers, but the conversation probably went something like this:
"Does 3 weeks sound about right for this?"
"No, we'll need 6"
"Why?"
"Safety checks"
"Ok, we don't have 6 weeks. I can give you 4, but we're just gonna have to make do."
Is it scary that conversation happened about a piece of medical software? Absolutely. Would I bet $1k that it happens frequently? Absolutely.
> A reputable engineer wouldn't design and build a bridge which might collapse
Rarely does a single engineer design a bridge nowadays, so corporate liability and reputation (good luck landing more contracts if your bridge collapses) are huge factors in much of that, beyond simple ethics.
I would be shocked if anything happened to Merge as a result of this, whereas a company who designed a faulty bridge would be sued into oblivion.
Further, professional engineering in the US is a whole different game that involves licensing and regulations specifically to avoid that situation. Software "engineering" has no such equivalent currently.
Pinning the blame on the peons is a sure-fire way to make sure this situation never changes.
The difference is that they should never get to decide to cut safety checks. Cutting safety checks should be as ludicrous/impossible as writing only half the code of each function to cut time.
The conversation should go like this:
PM: "Does 3 weeks sound about right for this?"
Dev: "No, we'll need 6"
PM: "Why?"
Dev: "That's how long it takes to build those 6 features."
PM: "Ok, we don't have 6 weeks. I can give you 4, but we're just gonna have to make do."
Dev: "Okay, which features would you like to cut?"
> Further, professional engineering in the US is a whole different game that involves licensing and regulations specifically to avoid that situation.
I'm aware. While I don't think the majority of software developers should be certified, we should require licensing for working on safety-critical applications.