https://hn.algolia.com/?q=six+dumbest+ideas+in+computer+secu...
You can pick this apart, but the thing I always want to call out is the subtext here about vulnerability research, which Ranum opposed. At the time (the late 90s and early aughts) Marcus Ranum and Bruce Schneier were the intellectual champions of the idea that disclosure of vulnerabilities did more harm than good, and that vendors, not outside researchers, should do all of that work.
Needless to say, that perspective didn't prove out.
It's interesting that you could bundle up external full-disclosure vulnerability research under the aegis of "hacking" in 2002, but you couldn't do that at all today: all of the Big Four academic conferences on security (and, obviously, all the cryptography literature, though that was true at the time too) host offensive research today.
The network exploded in size, and the number of participants in the field exploded too.
In the real world, if you saw someone going around a car park trying the door of every car, you'd call the cops, not praise them as a security researcher investigating insecure car doors.
And in the imagination of idealists, the idea of a company covering up a security vulnerability or just not bothering to fix it was inconceivable. The problems were instead things like how to distribute the security patches when your customers brought boxed floppy disks from retail stores.
It just turns out that in practice vendors are less diligent and professional than was hoped; the car door handles get jiggled a hundred times a day, the people doing it are untraceable, and the cops can't do anything.
Which ones are those?
As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately vendors are often too unskilled or obstinate to properly respond to disclosure from academics.
For their part academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the majors will eventually have more involved expectations for reducing harm from disclosures, such as by expanding the scope of the “vendor” to other possible mitigating parties, like OS or Firewall vendors.
That assumes that without these disclosures we wouldn't see active exploits. I'm not sure I agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.
Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.
The first two have obvious consequences (people writing passwords down, choosing the same passwords, adding a 1), leading to the third, which has horrible, confusing UX (no, I don't want to have my phone/random token generator on me any time I try to do something) and defaults to "passwords" anyway.
Please just let me choose a password of greater than X length, containing or not containing any characters I choose. That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.
I suspect that rotating passwords was a good idea at the time. There were some pretty poor security practices several decades ago, like sending passwords as clear text, which took decades to resolve. There are also people who like to share passwords like candy. I'm not talking about sharing the password to a streaming service you subscribe to; I'm talking about sharing access to critical resources with colleagues within an organization. I mean, it's still pretty bad, which is why I disagree with them dismissing educating end users. Sure, some stuff can be resolved via technical means, and they gave examples of that. Yet the social problems are rarely solvable via technical means (e.g. password sharing).
Rules like "don't write passwords down," "don't show them on the screen", and "change them every N days" all make a lot more sense if you're managing a bank branch's open-plan office with hardwired terminals.
>I suspect that rotating passwords was a good idea at the time.
Yes, back when all password hashes were available to all users, and therefore had an expected brute-force/expiration date. It is just another evolutionary artifact of a developing technology combined with messy humans.
Repeated truisms, especially in compsci, can be dangerous.
NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space; they are attacking the post-it note/notepad text file instead.
This is actually a good example of an opposite case of Chesterton’s Fence
Both NIST in the US (https://pages.nist.gov/800-63-FAQ/) and NCSC in the UK (https://www.ncsc.gov.uk/collection/passwords/updating-your-a...) have quite decent guidance that doesn't have that kind of requirement.
I think the number of orgs that follow best practices from NIST etc is pretty low.
The n-tries lockout rule is much more effective anyway, as it breaks the brute-force attack vector in most cases. I am not a cybersecurity expert, so perhaps there are cases where high-complexity, long passwords may make a difference.
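A minimal sketch of that lockout idea in Python (the thresholds and names like MAX_ATTEMPTS are made up for illustration; a real system would persist state durably and also rate-limit per source IP):

```python
import time

MAX_ATTEMPTS = 5             # illustrative threshold
LOCKOUT_SECONDS = 15 * 60    # illustrative lockout window

failed = {}  # username -> (failure_count, window_start_timestamp)

def may_attempt_login(username: str, password_ok: bool) -> bool:
    """Return True if the login succeeds; False if it fails or is locked out."""
    now = time.time()
    count, since = failed.get(username, (0, now))
    if now - since > LOCKOUT_SECONDS:
        count, since = 0, now             # old window expired, start fresh
    if count >= MAX_ATTEMPTS:
        return False                      # locked out: brute force is capped
    if password_ok:
        failed.pop(username, None)        # success resets the counter
        return True
    failed[username] = (count + 1, since)
    return False
```

With a cap like this, even a large botnet gets only a handful of guesses per account, which is exactly why raw entropy matters less than the policy wonks assume.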
Not to mention MFA makes most of this moot anyway.
I understand the conceptual risk of storing everything behind a single “door”. That’s not ideal. But in practice, circumstances force you to create passwords, expose passwords, reset passwords, so you cannot remember them all. You either write them down (where? how secure?) or resort to having only a few “that you usually use”.
Password managers solve the “where? how secure?” part. They don’t solve security, they help you to not do stupid things under pressure.
Most managers have 2FA, or an offline key, to prevent this issue, and encrypt your passwords at rest so that without that key (and the password) the database is useless.
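To make "encrypted at rest" concrete, here's a minimal sketch using the Python `cryptography` package (the KDF parameters and vault format are illustrative, not a vetted design):

```python
import os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def vault_key(master_password: bytes, salt: bytes) -> bytes:
    # Derive the encryption key from the master password; without the
    # password (or the derived key), the vault file is just noise.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)  # random per vault, stored next to the ciphertext
vault = Fernet(vault_key(b"correct horse battery staple", salt))

blob = vault.encrypt(b'{"example.com": "hunter2"}')  # what hits the disk
print(vault.decrypt(blob))                           # needs the master key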
I love Proton Pass.
1. Banks etc. will not allow special characters, because that's a "hacking attempt", so Firefox's password generator, for example, won't work. The user works around this by typing in suckmyDICK123!! and his password still never gets cracked, because there usually isn't enough brute-force throughput even with 1000 proxies, or the account just gets locked forever once someone attempts to log into it 5 times, and those 1000 IPs only get between 0.5 and 3 tries each with today's snake-oil appliances on the network. There's also the fact that most people already know by now that bots will try your passwords at a superhuman rate. Then there's the fact that not one of these password policies stops users from choosing bad passwords. This is simply a case of "responsible" people wasting tons of time trying to solve reality. These people who claim to know better than you have not thought this through.
2. For everything that isn't one of your one or two sensitive things, like the bank, you want to use the same password. For example, the 80 games you played for one minute that obnoxiously require making an account (for the non-game aspects of the game, such as in-game item trading). Most have custom GUIs, too, and you can't paste into them. You could use a password manager for these, but why bother? You just use the same password for all of them.
Honestly, this comment shows that user education does not work.
It's very frustrating when you've got a secure system and you spend a few minutes thinking up a great, memorable, secure password; then realize that it's too few (or worse, too many!) characters.
Even worse when the length requirements are incompatible with your password generation tool.
I'd be very wary of logging into accounts on any computer/phone other than my own.
(Yes, I'm commenting on the weird idea of not allowing processes to run without asking. We're now learning from mobile OSes that this isn't practically feasible for the kind of universally useful OS that drove most computing growth in the last 30 years.)
Which is better: a strong password written down, or better yet stored in a secure password manager, or a weak password committed to memory?
As usual, XKCD has something to say about it: https://xkcd.com/936/
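The arithmetic behind that comic, as a back-of-envelope sketch (the word-list size and character set are my assumptions):

```python
import math

# Assume the attacker knows the generation scheme; only the random
# choices contribute entropy.
DICEWARE_WORDS = 7776                  # standard Diceware list size
passphrase_bits = 4 * math.log2(DICEWARE_WORDS)
print(f"4 random words: {passphrase_bits:.1f} bits")   # ~51.7 bits

CHARSET = 26 + 26 + 10 + 10            # upper, lower, digits, ~10 symbols
password_bits = 8 * math.log2(CHARSET)
print(f"8 random chars: {password_bits:.1f} bits")     # ~49.4 bits
```

Both figures assume truly random choices; a human-picked "Tr0ub4dor&3" style password does far worse than either.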
That exploration of the edges of possibility is what moves the world ahead. I doubt there's ever been a successful human society that praised staying inside the box.
It effectively turns you into a kind of wizard, unconstrained by the rules everyone else believes are there.
When my grandma was moving across the country to move in with my mom, she got one of those portable on-demand storage things, but she put the key in a box that got loaded inside and didn't realize it until the POD got delivered to my mom's place.
I came over with my lock picks and had it open in a couple minutes.
Disagree in general. Laws != morals. Often enough laws are unjust and ignoring them is the cool thing to do.
It could be technically illegal, and would fall under vigilante justice. But we're not talking about legality here, we're talking about "cool": vigilantes are usually seen as "cool" especially when done from a sense of personal justice. Again, not talking about legal or societal justice.
> I know other networks that it is, literally, pointless to "penetration test" because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software.
"I don't need to test, because I designed, implemented, and configured my system carefully" might be the actual worst security take I've ever heard.
> […] hacking is a social problem. It's not a technology problem, at all.
This is security by obscurity. Also, it's not always social. Take corporate espionage and nation states, for example.
For example, default allow is terrible for security, and the cause of many issues in Windows... but many users don't like the idea of having to explicitly permit every new program they install. Heck, when Microsoft added that confirmation, many considered it terrible design that made the software way more annoying to use.
'Default Permit', 'Enumerating Badness', and 'Penetrate and Patch' are unfortunately all defaults because of this. Because people would rather make it easier/more convenient to use their computer/write software than do what would be best for security.
Personally I'd say that passwords in general are probably one of the dumbest ideas in security though. Like, the very definition of a good password likely means something that's hard to remember, hard to enter on devices without a proper keyboard, and generally inconvenient for the user in almost every way. Is it any wonder that most people pick extremely weak passwords, reuse them for most sites and apps, etc?
But there's no real alternative sadly. Sending links to email means that anyone with access to that compromises everything, though password resets usually mean the same thing anyway. Physical devices for authentication mean the user can't log in from places outside of home that they might want to login from, or they have to carry another trinket around everywhere. And virtually everything requires good opsec, which 99.9% of the population don't really give a toss about...
For the first 10 (20?) years there were no devices without a good keyboard.
The big problem imho was the idea that passwords had to be complicated and long, e.g. random, alphanumeric, with some special characters, and at least 12 characters long, while a better solution would have been a few words.
Edit: To be clear, I agree with most of your points about passwords; I just wanted to point out that we often don't appreciate how much tech changed after the smartphone's introduction, and that for the environment before that (computers/laptops), passwords were a good choice.
The rise of not just smartphones, but tablets, online games consoles, smart TVs, smart appliances, etc had a pretty big impact on their usefulness.
No, it means that you learn practical aspects alongside theory, and that's very useful.
You must learn how known exploits work to be able to discover unknown exploits. When the known exploits are patched, your knowledge of how they occurred has not diminished. You may not be able to use them anymore, but surely that was not the goal in learning them.
There are a lot of script kiddies that don't know a damn thing about what TCP is or what an HTTP request looks like, but know how to use LOIC to take down a site.
I've seen an increase in attempts to trust the client lately, from mobile apps demanding proof the OS is unmodified to Google's recent attempt to add similar DRM to the web. If your network security model relies on trusting client software, it is broken.
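A sketch of what "don't trust the client" means in practice (the names PRICES and place_order are invented for illustration): the server treats everything from the client as a claim, not a fact, and recomputes anything that matters.

```python
PRICES = {"widget": 999}  # cents; the authoritative, server-side catalog

def place_order(request: dict) -> dict:
    item = request["item"]
    if item not in PRICES:
        raise ValueError("unknown item")
    # Any client-supplied price, role, or "attested OS" flag is ignored;
    # the server recomputes the facts it cares about.
    return {"item": item, "charged_cents": PRICES[item]}

# A tampered client can claim whatever it likes...
print(place_order({"item": "widget", "price": 1}))  # ...still charged 999
```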
Great that you, the IT security person, sleeps much better at night. Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department. And, btw., the more annoyed people are, the more likely they are to use workarounds that undermine your IT security concept (e.g., think of the typical 'password1', 'password2', 'password3' passwords when you force users to change their password every month).
So no, good IT security does not just mean unplugging the network cable. Good IT security is invisible and unobtrusive for your users, like magic :)
I wish more IT administrators would use seat belts and airbags as models of security: they impose a tiny, minor annoyance in everyday use of your car, but their presence is gold when an accident happens.
Instead, most of them consider it normal to prevent you from working in order to hide their ignorance and lack of professionalism.
I also note that seatbelts and airbags have undergone decades of engineering refinement; give that time to your admins, and your experience will be equally frictionless. Don't expect it to be done as soon as the download finishes.
There is an inherent and unremovable tension between usability and security.
If you are "totally safe" then you are also "utterly useless". Period.
I really, really wish most security folks understood and respected the following idea:
"A ship in harbor is safe, but that is not what ships are built for".
Good security is a trade. Always. You must understand when and where you settle based on what you're trying to do.
Mostly, it's there to identify and mitigate risks for the business. Have you considered that all your applications are considered a liability and new ones that deviate from the norm need to be dealt with on a case by case basis?
As a simplified example:
- You have a client database that has confidential information
- You have some employees that _must_ be able to interact with the data in that database
- You don't want random programs installed on a computer <that has access to that database> to leak the information
You could lock down every computer in the company to not allow application installation. This would likely cause all kinds of problems getting work done.
You could lock down access to the database so nobody has access to it. This also causes all kinds of problems.
You could lock down access to the database to a very specific set of computers and lock down _those_ computers so additional applications cannot be installed on them. This provides something close to a complete lockdown, but with far less impact on the rest of the work.
Sure, it's a stupidly simple example, but it demonstrates the idea that compromises are necessary (for all participants).
That being said, in my opinion, application servers and other public facing infrastructure should definitely be working under a "default deny" policy. I'm having trouble thinking of situations where this wouldn't be the case.
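One way to picture it, as a toy sketch (the rule set is invented): under default deny, the absence of a rule is a drop, not an accept, so nothing reaches the host unless it was explicitly enumerated as good.

```python
ALLOW = {
    ("tcp", 443),  # HTTPS to the app server
    ("tcp", 22),   # SSH (source restrictions omitted in this sketch)
}

def admit(proto: str, dst_port: int) -> bool:
    # No matching rule means deny, by default.
    return (proto, dst_port) in ALLOW

assert admit("tcp", 443)
assert not admit("udp", 53)   # never enumerated as good, so denied
```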
Many years ago we had, in our company's billing system, a "Waiting for IT" status. They weren't happy.
Some things took _days_ to get fixed.
There’s a balancing act. On the one hand, you don’t want a one-week turnaround to open a port; on the other you don’t want people running webservers on their company desktops with proprietary plans coincidentally sitting on them.
If IT security's KPIs are only things like "number of breaches" without any KPIs like "employee satisfaction", security will deteriorate.
So every time, if there is a choice, security will be prioritized at the cost of personnel performance / effectiveness. And this is how big corporations become less and less effective to the point where the average employee rarely has a productive day.
This is such an uninformed and ignorant opinion.
1. Permission concepts don't always involve IT. In fact, they can be designed by IT without ever involving IT again - such is the case in our company.
2. The privacy department sleeps much better knowing that GDPR violations require an extra, careful action rather than being a default. Management sleeps better knowing that confidential projects need to be explicitly shared, instead of relying on someone remembering to deny access to everybody first. Compliance sleeps better because of all of the above. And users know that data they create is private until explicitly shared.
3. Good IT security is not invisible. Entering a password is a visible step. Approving MFA requests is a visible step. Granting access to resources is a visible step. Teaching users how to identify spam and phishing is a visible step. Or teaching them about good passwords.
If the security concept is based on educating and teaching people how to behave, it's prone to fail anyway, as there will always be that one uninformed and ignorant person like me who doesn't get the message. As soon as there is one big gaping hole in the wall, the whole fortress becomes useless (case in point: haveibeenpwned.com). Also, good luck teaching everyone in the company how to identify a personalized phishing message crafted by ChatGPT.
For the other two arguments: I don't see how "But we solved it in my company" and "Some other departments also have safety/security-related primary KPIs" justifies that IT security should be allowed to just air-gap the company if it serves these goals.
Who even cares if they're annoyed? IT security gets to sleep at night, but the entire corporation might be operating illegally because they can't file the important compliance report because somebody fiddled with the firewall rules again.
There is so much more to enterprise security than IT security. Sometimes you don't open a port because "it's the right thing to do" as identified by some process. Sometimes you do it because the alternative RIGHT NOW is failing an audit.
Why is this a standard for "good" IT security but not any other security domain? Would you say good airport security must be invisible and magic? Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?
Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.
Very possibly. IMO a lot of the intrusive airport security is security theatre. Things like intelligence do a lot more. Other things we do not notice too, I suspect.
The thing about intrusive security is that attackers know about it and can plan around it.
> Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?
No, but they are simple and easy to use, and have rarely stopped me from doing anything I needed to.
> Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.
Agree entirely.
I would reword it to say that security should work for the oblivious user, and we should not depend on good user behavior (or fail to defend against malicious or negligent behavior).
I would still say the ideal is for the security interface to prevent problems - like having doors so we don't fall out of cars, or ABS to correct brake inputs.
That said, this is a good read for the most part. I heavily, heavily disagree with the notion that trying to write exploits or learn to exploit certain systems as a security professional is dumb (under "Hacking is C00L"). I learned more about security by studying vulnerabilities and exploits and trying to implement my own (white hat!) than I ever did by "studying secure design". As they say, "It takes one to know one." or something.
Your scale analogy is probably more approachable, but I'm a little more combative. I usually start out my arguments with security weenies with something along the lines of "the most secure thing would be to close up shop tomorrow, but we're probably not going to get sign-off on that." After we've had a little chuckle at that, we can discuss the compromise we're going to make.
I've also had some luck with changing the default position on them by asserting that if they tell me no, then I'll just do it without them, and it'll have whatever security I happen to give it. I can always find a way, but they're welcome to guide me to something secure. I try to avoid that though, because it tends to create animosity.
Here is an example:
"Your software and systems should be secure by design and should have been designed with flaw-handling in mind"
Translation: If we lived in a perfect world, everything would be secure from the start.
This will never happen, so we need to utilize the find and patch technique, which has worked well for the companies that actually patch the vulnerabilities that were found and learn from their mistakes for future coding practices.
The other problem is that most systems are not static. It's not release a secure system and never update it again. Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.
In the same breath he talked about how he wanted to build this “pristine” system with safety and fault tolerance as priority and how he wanted to use raw pointers to shared memory to communicate between processes which both use multiple threads to read/write to this block of shared memory because he didn’t like how chatty message queues are.
He also didn’t want to use a ring buffer since he saw it as a kind of lock
> "Timid people could become criminals."
This fully misunderstands hacking, criminality, and human nature, in that criminals go where the money is, you don't need to be a Big Burly Wrestler to point a gun at someone and get all of their money at the nearest ATM, and you don't need to be Snerd The Nerd to Know Computers. It's a mix of idiocy straight out of the stupidest 1980s comedy films.
Also:
> "Remote computing freed criminals from the historic requirement of proximity to their crimes."
This is so blatantly stupid it barely bears refutation. What does this idiot think mail enables? We have Spanish Prisoner scams going back centuries, and that's the same scam as the one the 419 mugus are running.
Plus:
> Anonymity and freedom from personal victim confrontation increased the emotional ease of crime, i.e., the victim was only an inanimate computer, not a real person or enterprise.
Yeah, criminals will defraud you (or, you know, beat the shit out of you and threaten to kill you if you don't empty your bank accounts) just as easily if they can see your great, big round face. It doesn't matter. They're criminals.
Finally, this:
> Your software and systems should be secure by design and should have been designed with flaw-handling in mind.
"Just do it completely right the first time, idiot!" fails to be an actionable plan.
The world is not static, but most things have patterns that need to be identified and handled, which takes time you don't have if you sprint from quick-fix to quick-fix of your MVP.
Come on. The piss-poor security situation might have something to do with the fact that the vast majority of software is built upon dependencies the authors didn't even look at...
Making quality software seems to be a lost art now.
No, it is not. Lost, that is.
Not utilised enough...
This is an excellent list that is two decades overdue, for some.
> software and systems should be secure by design
That should be obvious. But nobody gets rich except by adding features, so this needs to be said over and over again
> This will never happen, so we need to utilize the find and patch technique,
Oh my giddy GAD! It is up to *us* to make this happen. Us. The find-and-patch technique does not work. Secure by design does work. The article had some good examples.
> Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.
That is only true when we are not allowed to do our jobs. When we are able to act like responsible professionals we can build secure software.
The flaw in the professional approach is how to get over the fact that features sell now, for cash, and building securely adds (a small amount of) cost for no visual benefit
I do not have a magic wand for that one. But we could look to the practices of civil engineers. Bridges do collapse, but they are not as unreliable as software
Because Capitalism means management and shareholders only care about stuff that does sell now, for cash.
> But we could look to the practices of civil engineers
If bridge-building projects were expected to produce profit, and indeed increasing profit over time, with civil engineers making new additions to the bridges to make them more exciting and profitable, they'd be in the same boat we are.
For a modern version and systematic treatment of the subject, check out this book by Spafford:
Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail
https://www.pearson.com/en-us/subject-catalog/p/cybersecurit...
The best way I found to change their minds is to make a car analogy. Who'd want to steal your car? Any criminal with a use for it. Why? Because any car is valuable in itself. It can be sold for money. It can be used as a getaway vehicle. It can be used to crash into a jewelry shop. It can be used for a joyride. It can be used to transport drugs. It can be used to kill somebody.
A criminal stealing a car isn't hoping that there are Pentagon secrets in the glove box. They have a use for the car itself. In the same way, somebody breaking into your computer has uses for the computer itself. They won't say no to finding something valuable, but it's by no means a requirement.
I know it's rather easy to break through a glass window, but I still prefer to see outside. I know I could faff with multiple locks for my bike, but I'd rather accept some risk of it being stolen for the convenience.
If there's something I really don't want to risk stolen, I can take it into a bank's vault. But I don't want to live in a vault.
Bad analogy. It is not that easy to break modern multi-layer glazing, and it is also a lot easier to get away with breaking into a computer or account undetected, until it is time to let the user know (for a ransom attempt or the like), than with breaking a window. Locking your doors is a much better analogy. You don't leave them unlocked in case you forget your keys, do you? That would be a much better analogy for choosing convenience over security in computing.
> I know I could faff with multiple locks for my bike, but I'd rather accept some risk of it being stolen for the convenience.
Someone breaking into a computer or account isn't the same as them taking a single object. It is more akin to them getting into your home or office, or on a smaller scale a briefcase. They don't take an object, but they can collect information that will help in future phishing attacks against you and people you care about.
The intruder could also operate from the hacked resource to continue their attack on the wider Internet.
> A major dumb is that security people think breaking in is the end of the world.
The major dumb of thinking like this is that breaking in is often not the end of anything, it can be the start or continuation of a larger problem. Security people know this and state it all the time, but others often don't listen.
End of the world? No. But it's really, really bad.
When you get your stolen car back, problem over.
But your broken-into system should in most cases be considered forever tainted until fully reinstalled. You can't enumerate badness. That the antivirus got rid of one thing doesn't mean they didn't sneak in something it didn't find. You could still be a DoS node, a CSAM distributor, or a spam sender.
People can use HTTPS now instead of HTTP, without degrading usability. This has taken a lot of people a lot of work, but everyone gets to enjoy better security. No need to lock and unlock every REST call as if it were a bicycle.
Also, a hacker will replace the broken glass within milliseconds, and you won't find out it was ever broken.
This is one of the reasons why I feel my job in security is so unfulfilling.
Almost nobody I work with really cares about getting it right to begin with, designing comprehensive test suites to fuzz or outright prove that things are secure, using designs which rule out the possibility of error.
You get asked: please look at this gigantic piece of software, maybe you get the source code, maybe it's written in Java or C#. Either way, you look at <1% of it, you either find something seriously wrong or you don't[0], you report your findings, maybe the vendor fixes it. Or the vendor doesn't care and the business soliciting the test after purchasing the software from the vendor just accepts the risk, maybe puts in a tissue paper mitigation.
This approach seems so pointless that it's difficult to bother sometimes.
edit:
> #4) Hacking is Cool
I think it's good to split unlawful access from security consultancy.
You don't learn nearly as much about how to secure a system if you work solely from the point of view of an engineer designing a system to be secure. You can get much better insight into how to design a secure system if you try to break in. Thinking like a bad actor, learning how exploitation works, etc. These are all things which strictly help.
[0]: It's crazy how often I find bare PKCS#7 padded AES in CBC mode. Bonus points if you either use a "passphrase" directly, or hash it with some bare hash algorithm before using various lengths of the hash for both the key and IV. Extra bonus points if you hard code a "default" password/key and then never override this in the codebase.
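For contrast with that footnote, a sketch of the shape such code could take instead, using Python's `cryptography` package: a real KDF with a random salt, a fresh nonce per message, and an authenticated mode (parameters are illustrative, not a vetted design):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt(passphrase: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)           # never a bare hash of the passphrase
    key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
    nonce = os.urandom(12)          # fresh per message, never hard-coded
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # authenticated
    return salt + nonce + ciphertext  # parameters travel with the blob
```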
All it does is to say, very expensively, "there are no obvious vulnerabilities". If it even manages that. What you want to say is "there are obviously no vulnerabilities" but if you're having to strap that onto a pre-existing bucket of bugs then it's a complete rebuild. And nobody has time for that when there's an angry exec breathing down your neck asking why the product keeps getting hacked.
The fundamental problem is the feature factory model of software development. Treating software design and engineering as a cost to be minimised means that anything in the way of getting New Shiny Feature out of the door is Bad. And that approach, where you separate product design from software implementation, where you control what happens in the organisation with budgetary controls and the software delivery organisation is treated as subordinate because it is framed as pure cost, drives the behaviour you see.
lol'd at the irony of the fact that this was posted here, Hacker News...
The world might have been a better place if we used the terms "cracker" and "tinkerer" instead.
Hacking is cool. Why the security theater industry has appropriated "hacking" to mean accessing other people's systems without authorization, I don't know.
> Indeed, the first recorded use of the word hacker in print appeared in a 1963 article in MIT’s The Tech detailing how hackers managed to illegally access the university’s telephone network.
I get what you’re saying, but I think we’re tilting at windmills. If “hacker” has a connotation of “breaking in” for 61 years now, then the descriptivist answer is to let it be.
A lot of early remote access was done by hackers. Same with exploiting vulnerabilities.
One of my favorites is the Robin Hood worm: https://users.cs.utah.edu/~elb/folklore/xerox.txt
TL;DR: Engineers from Motorola exploited a vulnerability to illustrate it, and they did so in a humorous way. Within the tribe of hackers this is pretty normal; the only difference between that and stealing everything once the vulnerability has been exploited is intent.
Normies only hear about the ones where people steal things. They don't care about the funny kind.
A web security researcher might simultaneously discover vulnerabilities in “PHP, JSP, Servlet, ASP.NET, IIS, and Tomcat.” A binary researcher might have knowledge of “Windows, Android, and iOS.” A network protocol researcher might be well-versed in “TCP/IP, HTTP, and FTP” protocols. More likely, a hacker often masters all these basic knowledge areas.
So, what I want to say is that the key is not the technology itself, nor the nitty-gritty of offense and defense in some vulnerabilities, but rather the ability to use the wide range of knowledge and the hacker’s reverse thinking to challenge seemingly sound systems. This is what our society needs and it is immensely enjoyable.
"Educating users" isn't dumb, because users are proven to be the weakest link in the whole security chain. Users must be aware of workplace security, just like they should be trained in workplace safety.
Also, there's no security to deal with if the system is unusable by its users. The trade-off between usability and security is simply unsolvable, and education is a patch for that problem.
Then if you've got the software you need to do your job you're stuck in endless cycles of "pause and think" trying to create the mythical "secure by design" software which does not exist. And then you get hacked anyway because someone got an email (with no attachments) telling them to call the CISO right away, who then helpfully walks them through a security "upgrade" on their machine.
Caveats: yes, there is a balance, and log anomaly detection followed by actual human inspection is a good idea!
That's not a bad instinct by itself, but when it's your only approach, it leads to a snowball of problems. Sometimes you have to question the assumptions, to take a step back and try to redesign things, or the new addition just won't fit and the system becomes wonkier and wonkier.
2015 (114 points, 56 comments) https://news.ycombinator.com/item?id=8827985
2023 (265 points, 202 comments) https://news.ycombinator.com/item?id=34513806
> Perhaps another 10 years from now, rogue AI will be the primary opponent, making the pro hackers of today look like the script kiddies.
Step on it OpenAI, you've only got 1 year left ;)
Least privilege never talks about how the security people in charge of granting back permissions do their jobs at an absolute sloth pace.
#2) Ok, what happens when goodness becomes badness, via exploits or internal attacks? How do you know when a good guy becomes corrupted without some enumeration of the behavior of infections?
#3) Is he arguing to NOT patch?
#4) Hacking will continue to be cool as long as modern corporations and governments are oppressive, controlling, invasive, and exploitative. It's why Hollywood loves the Mafia.
#5) OK, correct: people are hard to train at things they really don't care about. But "educating users", if you squint, is "organizational compliance". You know who LOVES compliance checklists? Security folks.
#6) Apparently, there are good solutions in corporate IT, and all new ones are bad.
I'll state it once again: my #1 recommendation to "security" people is PROVIDE SOLUTIONS. Parachuting in with compliance checklists is stupid. PROVIDE THE SOLUTION.
But security people don't want to provide solutions, because they are then REALLY screwed when inevitably the provided solution gets hacked. It's way better to have some endless checklist and shrug if the "other" engineers mess up the security aspects.
And by PROVIDE SOLUTIONS I don't mean "offer the one solution for problem x (keystore, password management), and say fuck you if someone has a legitimate issue with the system you picked". If you can't provide solutions to various needs of people in the org, you are failing.
Corporate Security people don't want to ENGINEER things, again they just want to make compliance powerpoints to C-suite execs and hang out in their offices.
For instance, the first bad idea, Default Permit, is clearly bad in the realm of networking. I might quibble a bit and suggest Default Permit isn't so much an idea as the natural state when one invents computer networking. But clearly Default Deny was a very, very good idea, critical to the internet's growth. It makes a lot of sense in the context of global networking, but it's not quite as powerful in other security contexts. For instance, SELinux has never really taken off, largely because it's a colossal pain in the ass and the threat models don't typically justify the overhead.
The other bad idea that stands out is "Action is Better Than Inaction". I think this one shows a very strong big company / enterprise bias more than anything else—of course when you are big you have more to lose and should value prudence. And yeah, good security in general is not based on shiny things, so I don't totally fault the author. That said though, there's a reason that modern software companies tout principles like "bias for action" or "move fast and break things"—because software is malleable and as the entire world population shifted to carrying a smartphone on their person at all times, there was a huge land grab opportunity that was won by those who could move quickly enough to capitalize on it. Granted, this created a lot of security risk and problems along the way, but in that type of environment, adopting a "wait-and-see" attitude can also be an existential threat to a company. At the end of the day though, I don't think there's any rule of thumb for whether action vs inaction is better, each decision must be made in context, and security is only one consideration of any given choice.
Most of us are not involved in a "land grab"; in that metaphorical world, most of us are past "homesteading" and are paving roads and filling in infrastructure.
Even small companies should take care when building infrastructure
"Go fast and break things" is, was, an irresponsible bad idea. It made Zuck rich, but that same hubris and arrogance is bringing down the things he created
Really? SELinux and AppArmor have existed since, I don't know, the late nineties? The problem is not that these controls don't exist; it's that they make using your system much, much harder. You will probably spend some time "teaching" them first, then actually enable them, and still fight with them every time you install something or make other changes in your system.
SELinux has worked well out of the box in RHEL and its derivatives for many years. Your comment shows you did not actually try it.
> fight with them every time you install something or make other changes in your system
If you install anything that does not take permissions into account, it will break. Try running nginx with nginx.conf permissions set to 000; you will not be surprised that it does not work.
> Somewhere in there, security became a suppurating chest wound and put us all on the treadmill of infinite patches and massive downloads. I fought in those trenches for 30 years – as often against my customers (“no you should not put a fork in a light socket. Oh, ow, that looks painful. Here, let me put some boo boo cream on it so you can do it again as soon as I leave.”) as for them. It was interesting and lucrative and I hope I helped a little bit, but I’m afraid I accomplished relatively nothing.
Smart guy, hope he enjoys his retirement.
If you aren't:
- heavily VLANing by load type/department/importance/whatever your org prefers
- default-denying everything between those VLANs except common infra like DNS/NTP/maybe ICMP/maybe proxy ARP/etc.
- making every proto/port hole poked through a security-reviewed request
then you are doing it wrong.
"ahh but these ACL request reviews take too long and slow down our devs" -- fix the review process, it can be done.
spend the time on speeding up the security review, not on increasing your infrastructure's attack surface.
I would strongly disagree with that.
You can't defend against something you don't understand.
You definitely shouldn't spend time learning some script-kiddie tool; that is pointless. You should understand how exploits work from first principles. The principles mostly won't change, or at least not very fast, and you need to understand how they work to make systems resistant to them.
One of the worst ideas in computer security, in my mind, is cargo culting: people mindlessly repeating practices in the belief that they will improve security. Sometimes the practices don't work because they have been taken out of their original context. Other times they never made sense in the first place. Understanding how exploits work stops this.
I'd be interested in seeing a source for this. It feels a bit like anecdotal hyperbole.
The first time the FTP server I ran got broken into was about then. It was a shock; why would some a-hole want to do that? I wasn't aware until one of my users tipped me off a couple of days after the breach. They were sharing warez rather than porn, at least; having the bandwidth to download even crappy postage-stamp 8-bit color videos back then would take you hours.
When this happened I built the company's first router a few days later and put everything behind it. Before that all the machines that needed Internet access would turn on TCP/IP and we'd give them a static IP from the public IP range we'd secured. Our pipe was only 56k so if you needed it you had to have a really good reason. No firewall on the machines. Crazy, right?
Very different times for sure.
I would be interested in whether he still thinks these are true, as if you are doing security today, you are doing exactly these things.
- educating users: absolutely the most effective and highest return tool available.
- default permit: there are almost no problems you can't grow your way out of. there are zero startups, or even companies, that have been killed by breaches.
- enumerating badness: when managing a dynamic system, you need measurements. there is never zero badness; that's what makes it valuable. the change in badness over time is a proxy for the performance of your organization.
- penetrate and patch: having that talent on your team yields irreplaceable experience. the only reason most programmers know about stacks and heaps today is from smashing them.
- hacking is cool: 30 years later, what is cooler, hacking or marcus?
I guess the idea is generally that if we go in production now we will make profits, and with those profits we can scale and hire real security folks (which may or may not happen).
It's NOT the UAC we all grew to hate with Windows 8, et al. It's NOT the horrible mode toggles present on our smartphones. It's NOT AppArmor.
I'm still hoping that Genode (or HURD) makes something I can run as my daily driver before I die.
But how the UI works with capability-based security is a separate issue (although I have some ideas).
(Furthermore, I also think that capability-based security with proxy capabilities can solve some other problems as well (if the system is designed well), including some that are not directly related to security features. It can be used when you want a program to do things it was not directly designed to do; e.g. if a program is designed to receive audio directly from a microphone, you can instead add special effects in between by using other programs before the program receives the audio data, or use a file on the computer instead (useful if you do not have a microphone), etc. It can also be used for testing; e.g. to test that a program works correctly on February 29 even if the current date is not February 29, or, if the program does have such a bug, to bypass it by telling that program (and only that program) that the date is not February 29; and you can run fault simulations, etc.)
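To make the proxy-capability idea concrete, a tiny sketch (the clock interface is invented for illustration): the program only ever sees the capability it is handed, so a testing proxy can feed it February 29.

```python
import datetime

class RealClock:
    def today(self) -> datetime.date:
        return datetime.date.today()

class FixedClock:
    """Proxy capability: same interface, substituted behavior."""
    def __init__(self, date: datetime.date):
        self._date = date
    def today(self) -> datetime.date:
        return self._date

def generate_report(clock) -> str:
    # The program has no ambient access to "the" system clock; it can
    # only use the capability it was given.
    return f"report generated on {clock.today()}"

print(generate_report(RealClock()))
print(generate_report(FixedClock(datetime.date(2024, 2, 29))))  # leap-day test
```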
"You can have meaningful security without skin-in-the-game."
This is literally the beginning and end of the problem.
I assume that in a not-so-distant future, we get AI powered summaries of the target page for free, similar to how Wikipedia shows a preview of the target page when hovering over a link.
This was an interesting title, but having seen the summary and discussion, I'm not particularly keen to read it. In fact, I would never have commented on this post except to reply to yours.
hahahahahaha
My guess is that this will extend to knowing not to open weird attachments from strangers.
I've got bad news for ya, 2005... :P 19 years later, hacking is still cool.
Hacking will always be cool now that there's an entire aesthetic around it: The Matrix, Mr. Robot, even The Social Network.
Speaking of snake oil.
This actually DOES work; it just doesn't make you immune to trouble, any more than a flu vaccine means nobody gets the flu. If you drill this stuff into people's heads and coach people who make mistakes, you can decrease the number of people who do dumb, company-destroying things by specifically enumerating the exact things they shouldn't do.
It just can't be your sole line of defense. For instance, if Fred gives his creds out for a candy bar, 2FA keeps those creds from working, and you educate and/or fire Fred, then not only did your second line of defense succeed, your first one is now stronger without Fred.
- Having an OS that treats users as "suspicious" but applications as "safe".
(I.e., all Unix-like systems)
Oh the joy of going through the bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller and immediately the game hangs because a pop-up appears asking me if I want to give my damn controller permission to control stuff. Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure"
I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does. If someone else is in the position of running software on my machine, they're already on the other side of the airtight hatchway. They can already give themselves the permissions they need. They can just click yes on any pop up that appears. Yes, the applications should be considered safe. Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.
To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs. How is that my problem?
There are many problems with it; I do not use it on my computer. A better sandbox system would be possible, but xdg-desktop-portal is not designed very well.
> Oh the joy of going through the bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller and immediately the game hangs
That is also a problem of bad design. If a permission is required, it should be possible to set it up ahead of time (and to configure automatic granting if you do not want the restriction; that could even be the default setting), instead of the system waiting to ask you and hanging like that.
> Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure"
I would think that the window manager should know where the windows are and automatically position them if you have configured it to remember their locations. (The windows themselves should not usually need to know where they are, since the window manager handles it; and applications should not need to know which window manager is in use, since different window managers work in different ways, and if an application program assumes it knows how the window manager works, that can be a problem.)
> I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does.
Yes, although sometimes you do not want it to do what it does, which is why it should be possible to configure the security, preferably with proxy capabilities.
> Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.
I agree, although that is why it must be possible for the operator to specify such things. I think that proxy capabilities would be the way to do it (which, in addition to improving security, also allows more control over the interaction between the programs and other parts of the system).
> To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs.
Yes, it seems like that, because of the bad design of some programs, protocols, etc.
IT is not static; there is no such thing as a problem that the entire field solves at once, and is forever afterward gone.
When you're training an athlete, you teach them about fitness and diet, which underpin their other skills. And you keep teaching and updating that training, even though "being fit" is ancillary to their actual job (e.g. playing football, gymnastics, etc.). Pro athletes have full-time trainers, even though a layman might think, "well, haven't they learned how to keep themselves fit by now?"
If only this were true... Injection has been on the OWASP Top 10 since its inception and is unlikely to go away anytime soon. Learning some techniques can be useful just to do quick assessments of basic attack vectors, and to really understand how you can protect yourself.
Pardon my French, but this is the dumbest thing I have read all week. You simply cannot work on defensive techniques without understanding offensive techniques; plainly put, good luck developing exploit mitigations without having ever written or understood an exploit yourself. That's how you get a slew of mitigations and security strategies with questionable, if not negative, value.
> teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole
Ah yes, I too remember when buffer overflows, xss and sql injections became stale when the world learned about them and they were removed from all code bases, never to be seen again.
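To spell out why those never went stale, a sketch of the evergreen bug using the standard library's sqlite3 (the schema is invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

name = "alice' OR 1=1 --"   # attacker-controlled input

# Vulnerable: string concatenation lets the input rewrite the query.
rows = db.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()
print(rows)   # returns every row in the table

# Fixed: a parameterized query treats the input as data, not SQL.
rows = db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)   # returns nothing
```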
> Remote computing freed criminals from the historic requirement of proximity to their crimes. Anonymity and freedom from personal victim confrontation increased the emotional ease of crime […] hacking is a social problem. It's not a technology problem, at all. "Timid people could become criminals."
Like any white collar crime then? Anyway, there’s some truth in this, but the analysis is completely off. Remote hacking has lower risk, is easier to conceal, and you can mount many automated attacks in a short period of time. Also, feelings of guilt are often tamed by the victim being an (often rich) organization. Nobody would glorify, justify or brag about deploying ransomware on some grandma. Those crimes happen, but you won’t find them on tech blogs.
It’s so much better to prevent them from doing unsafe things in the first place, education is a long and hard undertaking and I see little practical evidence that it works on the majority of people.
>But, but, but I really really need to do $unsafething
No in almost all cases you don’t - it’s just taking shortcuts and cutting corners that is the problem here
Think about it this way: do you think Theo de Raadt (of OpenBSD and OpenSSH fame) spends his time trying to see if Acme Corp is vulnerable to OpenSSH exploit x.y.z, which was patched 3 months ago?
I don't care about attacking systems: it is of very little interest to me. I've done it in the past: it's all too easy, because we live in a mediocre world full of insecure crap. However, I love spending some time making life harder for black-hat hackers.
We know what creates exploits and yet people everywhere are going to repeat the same mistakes over and over again.
My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure". That is the mindset we need. But it didn't stop people from using Unicode in places where we should never have used it, like domain names, for example. Then when you test a homoglyph attack on IDN, it's not "cool". It's lame. It's pathetic. Of course you can do homoglyph attacks and trick people: an actual security expert (not a pentester testing known exploits on broken configs) warned about that 30 years ago.
There's nothing to "understand" by abusing such an exploit yourself, besides "people who don't understand security have made stupid decisions".
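For what it's worth, the homoglyph point is demonstrable in a couple of lines of Python with the built-in idna codec (the Cyrillic-а spoof of apple.com is the classic example):

```python
latin = "apple.com"
spoof = "\u0430pple.com"       # first letter is Cyrillic а, not Latin a

print(latin == spoof)          # False, yet visually identical in many fonts
print(spoof.encode("idna"))    # b'xn--pple-43d.com' on the wire
```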
OpenBSD and OpenSSH are among the most secure software ever written (even if OpenSSH has had a few issues lately). I don't think Theo de Raadt spends his time pentesting so that he can then write secure software.
What strikes me the most is the mediocrity of most exploits. Exploits that, had the software been written with the mindset of the person who wrote TFA, would for the most part not have been possible.
He is spot on when he says that default permit and enumerate badness are dumb ideas. I think it's worth trying to understand what he means when he says "hacking is not cool".
The same is true of containers, VMs, sandboxes, etc.
The idea that we all willingly run applications that continuously download and execute code from all over the internet is quite remarkable.