From a pure financial standpoint, it is almost certainly cheaper to just measure real emissions than to attempt some kind of software analysis for every version of every vehicle on the market.
Furthermore, an agency inspecting source code has absolutely no way to tell whether the source they've been given is actually what's running on a car.
Similar restrictions would severely cripple innovation in cars. Just consider Tesla's autopilot software.
Is there some kind of formal engineering practice they require manufacturers to adhere to?
How are their staff qualified to read the vast variety of languages out there?
I cite these as immediate, obvious roadblocks to verification and regulation: they're easy to spot, and the sheer variety of programming languages in use is something the vast majority of the software industry isn't used to.
If the binaries don't match, then whatever certification the device needs automatically fails and it cannot be sold.
What that means is that later on, if "Something Bad" happens, you are in a position to be certain of what code was running. That makes investigation much easier, because the original source code can always be found when it's needed. This does get a bit more complicated with software updates, especially OTA updates.
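In its simplest form, the binary-matching check described above could look something like the sketch below. This is a hypothetical illustration (the version names, digests, and function names are invented); a real certification scheme would also need signed records, version metadata, and a trusted way to read the image off the device.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a firmware image on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical certification record: version -> digest of the approved build.
# (This example digest is SHA-256 of the bytes b"test".)
CERTIFIED_BUILDS = {
    "ecu-fw-2.4.1": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_firmware(version: str, image_path: str) -> bool:
    """If the deployed binary doesn't match the certified digest,
    certification automatically fails."""
    expected = CERTIFIED_BUILDS.get(version)
    return expected is not None and sha256_of(image_path) == expected
```

The hard part isn't the hash comparison; it's everything around it, including getting a trustworthy image of what the ECU is actually executing.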
- Are governments and other regulatory agents going to formally verify compilers?
- Are these agencies going to prevent software from being written that doesn't conform to their rigid standards?
- Many compilers and technologies in use today aren't perfectly deterministic. Optimizations, flags, etc. can all dramatically affect the emitted binary.
- What if I want to use a completely different architecture than a regulatory agency is used to? Am I just not allowed to?
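The reproducibility problem in the third bullet can be shown without even invoking a real compiler. Here's a toy "build" step (entirely made up for illustration) that embeds a timestamp, the way C's `__DATE__`/`__TIME__` macros do; byte-identical source then hashes to different "binaries" on every build:

```python
import hashlib
import time

def naive_build(source: str) -> bytes:
    """A toy 'compiler' that stamps the build time into its output,
    like __DATE__/__TIME__ in C. Identical source no longer yields
    identical binaries."""
    stamp = f"built at {time.time_ns()}".encode()
    return source.encode() + b"\x00" + stamp

src = "void main() {}"
first = hashlib.sha256(naive_build(src)).hexdigest()
time.sleep(0.01)
second = hashlib.sha256(naive_build(src)).hexdigest()
assert first != second  # same source, different binary: hash-based certification breaks
```

This is exactly the class of problem the reproducible-builds effort exists to solve, and it's nontrivial even for well-behaved open-source toolchains.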
And as you mentioned, updates.
With OTA or any other update mechanism, the software actually running on a vehicle becomes almost impossible to identify or deal with.
It's actually the same problem: an extremely complex object is being constructed, and a critical failure in it could leave many people nearby injured or dead.
The solution is actually somewhat ingenious: License a small group of people to go analyze such things, let them organize themselves independently, but require them to sign off on the design. It turns out that with their license and livelihood on the line, enough people aren't willing to sign off on terrible, shoddy crap that the system mostly works.
Perhaps it's time that software grew up and became something closer to a real engineering discipline?
Filled with red tape, inaccessibility, limitations?
No thanks. I think we've done a very decent job of self-regulation; licensure and review have fared well for most* life-threatening software systems.
When you have repeatable conditions, software in the tested product can detect them and act differently under those conditions.
That's exactly what happened in the VW case.
It's nontrivial to fix the test so that it is still repeatable yet hard to fool for a company determined to fool it.
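To make that concrete, here's a deliberately simplified sketch of how a defeat device can exploit a repeatable test. The signal names and thresholds are invented for illustration and are not VW's actual logic; the underlying idea is that a standardized dyno cycle is driven with the steering wheel centered and the undriven axle stationary, which is trivially distinguishable from real driving:

```python
def looks_like_dyno_test(vehicle_speed_kph: float,
                         steering_angle_deg: float,
                         undriven_wheel_speed_kph: float) -> bool:
    """Heuristic: moving at road speed with the wheel dead-centered and the
    undriven axle not turning is characteristic of a chassis dynamometer."""
    return (vehicle_speed_kph > 20
            and abs(steering_angle_deg) < 1.0
            and undriven_wheel_speed_kph < 0.5)

def emissions_mode(vehicle_speed_kph: float,
                   steering_angle_deg: float,
                   undriven_wheel_speed_kph: float) -> str:
    # Run clean only when the software believes it is being tested.
    if looks_like_dyno_test(vehicle_speed_kph, steering_angle_deg,
                            undriven_wheel_speed_kph):
        return "full-aftertreatment"    # pass the test
    return "reduced-aftertreatment"     # better performance on the road
```

Any fixed, published test procedure hands the manufacturer a recognizable fingerprint like this; making the test repeatable for fairness while unrecognizable to the device under test is the hard part.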
I agree, it's not trivial. But it's not that hard either.
It's like weighing someone: you don't need to see their feet to read the scale, but if you can't see them, you can't tell whether both feet are actually on it.