Hacker competitions mirror this. Red teams are allowed to bring in any exploits and do just about anything (as criminals would be expected to do), while the blue team is stifled by bureaucracy and not allowed to bring in anything.
This also contributes to perverse incentives (like the red/blue teams). The CIO frequently gets their way and is more likely to get budget, while CISOs take all the blame when their budget increase requests get declined, and IT is tasked with keeping unpatched systems up and stable rather than patching them quickly. Obviously, the best orgs find a way to get both done, but resources are always scarce for the rest of us.
Even in the consumer space: anyone remember all those very silly people who installed BackTrack 2 (the precursor to Kali, based on Slackware, not Debian) to their main drive and then went to DEF CON and got rekt because their OS was insecure (and couldn't be updated!)?
Exploit development is a glass cannon: you remove all friction to modify the system and craft packets, invoke monitoring modes for hardware, and trace everything without restriction... and that's going to have a security cost.
This echoes a wider split in the industry between "developer" and "sysadmin" mindsets, where sysadmins are seen as stifling and developers are all about removing barriers so they can progress faster and iterate more.
Anyway, I can give you the skinny on the situation:
1) BackTrack 2 did not have an installer; it was a live CD. But that didn't stop you from installing it by just copying the live environment to a disk (with some bind-mounting and a grub install, you were all good!). There were guides for doing this, although they all carried large warnings, and the BackTrack maintainers cautioned heavily against it.
2) Because it was a live CD, there was no package update mechanism. It was not based on Debian at the time, so there was no apt or anything similar, and even if there had been, there were no repositories; BackTrack was a "tool", not really a distro.
3) sshd was one of the services BackTrack 2 started at system boot.
4) Someone at DEF CON unveiled an sshd exploit, a pretty nasty one. It had been disclosed responsibly and everyone had been patched for at least six months, except the people who went against recommendations and installed BackTrack 2. They all got rooted.
Bonus: everyone who ran BackTrack 2, without exception, ran it as the root user, since that was the default, and the software that normally complains about such things had been patched to not complain. xD
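For the curious, the "copy the live environment to a disk" trick in point 1 went roughly like this. This is a hedged sketch, not any particular guide: the device names, mount point, and the `BT2_REALLY_INSTALL` opt-in guard are my assumptions, and by default it only prints the plan rather than touching the disk (which is destructive).

```shell
#!/bin/sh
# Sketch of installing a live-CD environment to disk (BackTrack 2 style).
# Device names and the opt-in guard are illustrative assumptions.
set -eu

TARGET="${1:-/dev/sda1}"       # partition to overwrite with the live system
DISK="${TARGET%%[0-9]*}"       # whole disk for the bootloader, e.g. /dev/sda

run() {
    # Print each step; only execute it when explicitly opted in.
    echo "+ $*"
    [ "${BT2_REALLY_INSTALL:-0}" = "1" ] && "$@"
    return 0
}

run mkfs.ext3 "$TARGET"
run mkdir -p /mnt/bt2
run mount "$TARGET" /mnt/bt2
# Copy the running live system, skipping pseudo-filesystems like /proc and /sys
run cp -a /bin /boot /etc /lib /opt /root /sbin /usr /var /mnt/bt2/
run mkdir -p /mnt/bt2/proc /mnt/bt2/sys /mnt/bt2/dev /mnt/bt2/tmp
# Bind-mount /dev so the bootloader installer can see the real disks,
# then install GRUB (legacy, in the BT2 era) from inside the copied system
run mount --bind /dev /mnt/bt2/dev
run chroot /mnt/bt2 grub-install "$DISK"
echo "planned install onto $TARGET (bootloader on $DISK)"
```

Note the catch the thread describes: once copied, the system was frozen in time, with no repositories to pull patches from, so that sshd kept running its vulnerable version forever.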
Yes, but your "local system" that receives traffic or whatever doesn't need to be the one having access to all your data…
Once deployed, your self-produced tools, which have very little security protection themselves, can be pilfered. Bonus points for tapping into the software deployment platform and downloading everything.
How well protected do you think cyber-weapons designed to surveil countries, disable infrastructure, and destabilize governments should be? How capable and well-funded should the attacker need to be before gaining access to cyber-weapons designed to kill economies and people? $1B, $10B? A team of 1,000, 10,000?
Does anyone know of any system or organization in existence that would even be willing to claim it can stop a team of 1,000 dedicated hackers working full-time for 10 years funded with $1B, let alone put it in writing? What is the highest you have heard? Is it even in the general ballpark?
It is absurd to assume that the failure to solve the problem is just a lack of prioritization when no one even claims to be able to solve it. It is meaningless to propose that they adopt policies that do not even claim to protect against the actual threat model, let alone have evidence of such protection. Either they find someone who will make the extraordinary claim of providing an actual defense, with the extraordinary evidence to back it up, or they MUST NOT deploy such systems, since they cannot be protected.
I guess it's safe to say that even with just $1M of funding, a small team of dedicated security researchers, coupled with the right people for social engineering, can break into any network. Everyone can be fooled, and humans are always the weakest spot, especially now that information about everyone is publicly available on social networks, so you can gather everything you need remotely.
And when it comes to hacking into the network of a company with no dedicated cybersecurity budget, the cost of an attack would be one or two orders of magnitude lower. Some self-organized groups of hobbyists prove you can even do it with no funding at all.
To misquote Dr. Strangelove, "ze whole point of ze secret hack is lost if you don't keep it a secret." https://youtu.be/2yfXgu37iyI?t=205
Oh, maybe they have a firewall built on a RaspberryPi somebody ordered online.
Seriously, WTF? This is as insecure as having contract sysadmins with root privilege spread all over the globe.
And when will these state actors with unlimited funding figure out that NOBODY can keep secrets forever, not even them?
This is why I've been so concerned about cybersecurity and cyberwarfare. I do not see gross competence here, and most of the people I respect who write about this sort of thing are sounding the alarm: Bruce Schneier's Click Here to Kill Everybody, or Matt Tait (@pwnallthethings on Twitter) ending an Infiltrate conference talk with a nuclear bomb as the final image.
Put another way: perhaps it's not an accident? And perhaps some of what was leaked was a decoy?
Yes, keeping secrets is difficult. All the more reason to take advantage of that.
Like leaving data on their secret assets available via Google searches, leading to hundreds of deaths? And firing the employee who warned them of the problem seven years before it was exploited?
Or even the news story of how their old boss(!) John Brennan had his AOL(!) email account(!) cracked(!) by a teenager(!) guessing his password(!). The teenager exfiltrated something sensitive, a job application I believe, and was prosecuted for it. Meanwhile, the former Director of Central Intelligence gets to keep his reputation.
Source: lived around DC when it happened, had contractor friends complaining out loud about it
So does anything in this vault possibly call certain recent allegations of Russian interference into question?
Remember folks: there are disinformation campaigns on HN too.
Maybe they're right, but it's a little suspicious, no?
Even if it was a "hey, could you look at this and tell us what you think" with no obligation to address issues, it is undesirable to establish a precedent.
They do use standards and recommendations from NSA/OMB for enterprise systems. But even the US Courts went that route, just with a lot of renaming of things so it can't be seen as being subservient to the Executive branch. There are some good frameworks and standards that you shouldn't waste time re-implementing.
Half of the NSA's mission is to build/design secure communication systems for the US government and military.
The NSA does some seriously insane stuff, but I don’t think even they take themselves as seriously as the CIA does.
No logs, no congressional investigation.
These are smart well-resourced people. They don't do things like this for no reason.
That's insane that they could leave so much data available to be stolen.
No government will push to improve door locks unless that government isn't the most capable of defeating those locks. It's a cost/benefit function.
Right now, improving software security is a net loss for the US. So it won't happen when the US is controlling the computer and software industry.
So I'm not surprised to see even the best experts being beaten so easily.
Also, I'm sure those members of "the hacking team" weren't allowed to discuss their work with their family/friends, so it's not terribly unrealistic to expect them to practice even just basic security hygiene (e.g., don't share admin passwords).
Your implication that this was due to a lack of proper security hygiene is unfounded. Security hygiene reduces risk; it does not eliminate it. Risk is proportional to threat and attack surface, and an org like the CIA has a not-so-small attack surface and the whole world as its threat, so reducing risk by means of common security controls and hygiene will not stop the most persistent and resourceful attackers. An analogy to your reasoning would be: "Google has an army of devs and security pros, so Chrome should never have a remote code execution vuln." No: as much money and talent as they have, modern software is too complex for those resources to eliminate all bugs. Perspective is important.