>The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.
Unsurprisingly, the NSA often chooses to keep zero-days for its own use.
If the government only used open-source software, the NSA could create patches that only the government would use, while keeping zero-days that could be used against everyone else.
If the government started requiring all or most software to be open source, it would create a market. There's no way big government vendors would refuse to create open-source software. They would just shift to monetizing more heavily through consulting services, or support, or something similar.
The downside of weaknesses becoming generally known seems to have been largely overstated.
Rather than "security by obscurity", the operational status has been "insecurity by obscurity". Unknown to users, systems are for the most part wholly insecure, and it's only ignorance that gives the illusion that they are secure.
I wrote on this recently: https://joindiaspora.com/posts/b596219086b1013991d8002590d8e...
In practice, the "everyone anywhere can attack any online system" status of the Internet, and the porosity of most LANs and even nominally air-gapped or detached systems (see the Stuxnet attack on Iran's centrifuge systems), means that virtually all systems are vulnerable.
I suspect that the debate is quite live within government, particularly as the US itself is repeatedly the victim of such attacks.
https://www.lawfareblog.com/lawfare-podcast-nicole-perlroth-...
Hasn't this been the debate since encryption came around? I thought we'd been having this debate for at least 50 years.
In the defensive world, success is abstract, failure is concrete, and there are always going to be bugs, accidents, lapses, etc. In the offensive world, you demonstrate success by providing actual intel; you can demonstrate value. I've worked on security products for most of my career, and there is a point in the lifecycle, before your product is simply a requirement, where customers will ask, "How do I know I need this, or that it's working?" That can be more challenging to answer than if your product had failed and they got popped; at least then you can help and provide information.
I know who I think would climb the ranks. In terms of long-term strategy, if they split it up and aggressively worked with industry to patch holes, fix things, and encourage best practices, it would probably save the nation trillions, but we would have to use other techniques to get some of our intel.
:)
Interesting: no mention of any requirements on software manufacturers themselves.
If you think about it, this will further incentivize poor-quality software, since the responsibility for vulnerability response is now laid on the product owner.
I would suggest people look at a very foundational essay on this [2]. Key quote: "Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products. "
How many times do we have to learn this?
[1] In quotes 'cause "secure software" does not exist, in two different ways: software always has bugs, and using a piece of software incorrectly makes a secure system insecure.
[2] https://www.schneier.com/essays/archives/2000/04/the_process...
Perfectly secure software does not exist. More secure and less secure software certainly does exist.
Which is why software should be expected to provide some basic level of protection.
In any org there will be a proportion of idiots who cannot be trusted to do the right thing, and software should be designed to minimise the damage their idiocy can generate.
This isn't enough to make an org secure, but it's a good start on whack-a-mole with the most obvious attack vectors.
The real problem is that too much of the industry - both management and devs - lacks maturity and professionalism. There's too much casual hobby tinkering, too much "But that's too expensive", and too much "Get it out the door and worry about it later".
There isn't enough conscientious attention to detail and far, far too little understanding of the disastrous - literally potentially explosive - consequences of serious security failures.
And CS courses teach far too little of all of the above. Academic algo noodling is one thing. But the reality is that computers can literally be as dangerous as weapons - and should be treated as such.
Firstly, there's the information asymmetry for non-technical users: they don't think of themselves as buying security, they think of themselves as buying a remote access solution. They therefore don't see this as a process, but instead as a product or solution. That means they're surprised and caught unaware when something goes wrong.
The second issue is that people creating the software aren't themselves thinking about security, because the customer isn't buying security, or comparing security. And how do you measure or quantify or observe security? There's no commercial incentive to invest a month in hardening a product against attack, unless that month of engineering effort sees more sales and revenues. And since the people who buy are satisfied by slideware and specification sheets for security, nothing changes.
I think we need a wholesale change in how we buy software, hardware, and solutions in general to see this happen. The underlying economics don't incentivise secure products; in fact, they actively discourage them.
Obviously it's difficult to draw the line, and that's why we have courts. The company will argue that they did all that was possible, but as sometimes happens, something got through; the plaintiff will argue that the company's software had serious flaws because they were negligent or cut corners or had poor development processes or whatever. However imperfect the process is, the court can render judgment case-by-case.
(Before someone suggests this, I'm not trying to say that a random open source developer who works on OpenSSL should be held liable here. But if you're selling a product, you should hold some liability for when that product fails.)
Imposing this requirement on its own agencies is enforceable because there's software that can generate an SBOM, at least from container images.
Then the agencies will have to choose software that meets compliance requirements, so they're the ones putting pressure on their chosen vendors. It follows logically that a vendor who wants a better chance of being chosen for more government contracts will make it easy to obtain SBOMs for their software.
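For what it's worth, here is a toy Python sketch of what "software that can generate an SBOM" does at its core: it just inventories the packages in the current Python environment and emits a simplified CycloneDX-style component list. Real tools (Syft, for example) do the same kind of inventory against container images and OS package databases; the output shape below is a simplification, not the full CycloneDX schema.

    # Toy SBOM generator: inventories installed Python packages and emits a
    # simplified CycloneDX-style component list. Real SBOM tools scan container
    # images and OS package databases too; this covers only the current
    # Python environment.
    import json
    from importlib import metadata

    def generate_sbom():
        components = []
        for dist in metadata.distributions():
            components.append({
                "type": "library",
                "name": dist.metadata["Name"],
                "version": dist.version,
            })
        return {
            "bomFormat": "CycloneDX",   # simplified; not the full schema
            "specVersion": "1.4",
            "components": components,
        }

    if __name__ == "__main__":
        print(json.dumps(generate_sbom(), indent=2))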
Speaking from a US Gov perspective: if the company is part of a contract (and ~40% of the Gov workforce are contractors), the Gov certainly can.
They can put nearly anything (legal) into the RFP/Q. Even if they do not say "give us your BoM", they can wrap it in requirements that in essence deliver the exact same result.
That said, it is a Gov mistake to ask for the BoM. They will do little with it in a timely fashion, lack the expertise to identify risks, and lack the resources to go after it. The best contracts are the ones where the rules and parameters are set for the contractor (i.e., no untested software, no foreign influence, no this, no that, must have this and that), along with auditing of compliance.
Not really; this is more about transparency into all components, so that people downstream are aware there is an issue and can either fix it, mitigate it, or raise the issue upstream. My guess is that this is related to Allan Friedman's SBOM work at NTIA (sorry - this is not the most up to date link: https://www.csiac.org/podcast/software-bill-of-materials-sbo... )
The problem that comes up time and time again is that both end users and product manufacturers do not know everything that is in their systems. Consider the case of, say, an MRI machine. What OS is it running, and how up to date is it? If the end user has an SBOM, they can better evaluate that and demand fixes if there are known issues. Likewise, if the MRI manufacturer is good at making MRIs but not so good at knowing whether its version of Windows on the MRI is out of date, the SBOM for the MRI can be analyzed to automatically flag problems.
You can regulate all you want about "There must be no open issues", and plenty of certifications for the Fed government do have that language. The problem this addresses is forcing a listing of every component, so that "Sorry, I didn't know OpenSSH v.1.2.3 is out of date" or "I had no idea we were running Windows 95 on this hardware" are no longer valid excuses.
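As a rough illustration of that last point, here is a minimal Python sketch that reads a CycloneDX-style JSON SBOM and flags components on a known-outdated list. The file name and the KNOWN_BAD table are hypothetical, picked to echo the excuses above; a real checker would query a vulnerability database (NVD, OSV, etc.) rather than a hard-coded list.

    # Minimal SBOM checker: reads a CycloneDX-style JSON SBOM and flags
    # components whose versions appear on a known-outdated list.
    # KNOWN_BAD is hypothetical; a real checker would query NVD/OSV instead.
    import json

    KNOWN_BAD = {
        "openssh": {"1.2.3"},
        "windows": {"95"},
    }

    def flag_outdated(sbom_path):
        with open(sbom_path) as f:
            sbom = json.load(f)
        findings = []
        for comp in sbom.get("components", []):
            name = comp.get("name", "").lower()
            version = comp.get("version", "")
            if version in KNOWN_BAD.get(name, set()):
                findings.append(f"{name} {version} is known to be out of date")
        return findings

    if __name__ == "__main__":
        # "mri-device-sbom.json" is a hypothetical file name for the example.
        for finding in flag_outdated("mri-device-sbom.json"):
            print(finding)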
That would be an insane waste of resources.
Though Ken Xie is also a Stanford graduate and has had US citizenship for decades.