As a developer, I know it is hard to implement something once, harder to implement consistently across multiple interfaces, and damn near impossible to keep correct years later after employee turnover and other twists.
The sad thing is that it costs a ton more money to do things really well, and companies can basically take advantage of the low price of doing things poorly until finally forced. And by then, they have tons of money so they can comply but any startup is screwed because now it costs more for everyone, even those entering the game.
Facebook surely must be heavily fined and regulated for their misbehavior, because to fail to keep Facebook data safe is to put lives at risk.
So would you like a fine for your bugs? And note that, unlike other professions, software development doesn't have generally agreed recipes for building bug-free software, so was it really negligence? Was it malpractice?
Being fined for a contribution to an OSS project would be terrible, wouldn’t it? And no, the size of the company doesn’t and shouldn’t matter in the eyes of the law, only the impact.
Also, people uploading stuff on the Internet should really expect best-effort privacy at most. If you expect secrecy, then uploading shit on a platform meant for sharing is pretty dumb.
Note that I will blame Facebook for willful privacy violations. And I hope to see them suffer under GDPR. But a bug doesn’t fall in the same category.
I would agree regarding small companies, but I wouldn't put OSS developers in the same boat; fining the entity that provides a service makes more sense. It doesn't matter whether that service relies on OSS or not.
It's the company providing the service to the consumer that is responsible for vetting the final product.
An OSS developer has no idea whether her/his code is going to be used by a gaming app or by NASA for mission-critical stuff, and shouldn't be held responsible if a bug in the OSS project caused a rocket failure.
Similarly for a company supplying wood (assuming that company isn't making any false claims about its quality): it should not be the company's fault if someone decides to use that wood for a bridge where concrete is needed. The bridge builder is responsible for picking a suitable material.
I'm not a fan of the overregulation of industries like aviation, but consumer software has gone too far in the other direction and is long overdue for an adjustment.
Absolutely nothing wrong with that. If a small trucking company has a driver that speeds, that driver gets fined the same way a driver for a large trucking company does.
> Of course the fines have to be proportional to the number of affected users.
Of course.
Not if your contribution causes harm. A fine would be a more than welcome addition to consumer protections.
That said, you and I can agree that FB sucks, and delete our accounts. It is up to other people whether they follow suit.
That about sums it up for all these privacy breaches these days. It's getting to the same level as "thoughts and prayers" for tragedies. No actual change or consequences for the problems happening, just empty "sorries" and "promises" that it won't happen again/they'll get it fixed. I don't know if this is a GDPR violation or not (as someone else asked), but if it is, I hope we start actually seeing action on these sorts of things.
Sounds like you're suggesting that we criminalize software bugs.
To me, if we can criminalize something like a major oil spill such as BP/Deepwater Horizon, how is this much different? It's not like they caused the oil spill on purpose, but there still had to be consequences for the risks they were taking. Software companies, especially larger ones like Facebook, should face the same kind of consequences for the risks of software bugs that cause these kinds of privacy breaches.
Also, as someone else below pointed out to a commenter with a tone similar to your "criminalize software bugs" phrasing: that framing is "intentionally obscuring the debate. Gross negligence is an entirely different standard than just software bugs."
Building Code
229. If a builder builds a house for a man and does not make its construction sound, and the house which he has built collapses and causes the death of the owner of the house, the builder shall be put to death.
233. If a builder builds a house for a man and does not make its construction sound, and a wall cracks, that builder shall strengthen that wall at his own expense.
Bugs in houses have been criminalized for a very long time. Online data may be less fundamental than safe housing, but housing our data safely becomes proportionally more important as more of modern life depends on it.
[0] http://www.wright.edu/~christopher.oldstone-moore/Hamm.htm
When there is irreparable damage I believe it should be criminalized. You cannot regain privacy after an incident such as this, it is irrevocably taken from you against your will.
When my dad went to college, a very old and bitter professor (this was Civil Engineering, communist Eastern Europe) told the students on the first day in class something along the lines of: "If you know you're stupid or don't give a shit about your work, just go home and save everyone the trouble of dealing with your future fuckups. Mistakes here can cause deaths or losses of huge amounts of money".
I believe we've reached the point in which negligence in the software world can cause loss of lives, even when the software is not operating a crane or an airplane (think Grindr leaking account data over http in Saudi Arabia).
So you're minimizing the issue by asking whether we should criminalize software bugs. We should, and currently do, criminalize negligence. If bugs are a result of negligence (you know, 'move fast and break things', 'better to ask for forgiveness than for permission') then fines, jail time, and criminal records should be a'coming. This is no longer child's play; this is the new world, which runs on software.
Anywhere else it's a plain case of malpractice, whether it's law or medicine, etc.
It happens with such regularity that I'm amazed anyone here would be kind enough to accept their fake apologies for clearly malicious actions.
There is risk in any human endeavour that touches upon someone else's life, in every domain. But, for example, only some of the deaths that occur in a hospital are the result of malpractice. That is the type of mistake for which we hold others accountable: not the mere act of providing insufficient care, but the act of providing insufficient care as a result of a dereliction of professional duty or a failure to exercise an ordinary degree of professional skill or learning.
IMHO:
1. If this was a novel, or very complicated breach, that Facebook did everything possible to avoid, but avoiding it was beyond the knowledge and skills of their security, engineering and QA teams, who otherwise did their absolute best, then it's at the very least defensible. One could argue that you shouldn't handle private data if you can't do it securely, but risk is inherent to anything, and perhaps worth it under the right circumstances.
2. If this was just "move fast and break things" policy, then a big fine is in order, and if no insurance is in place, whoever approved it should get to pay it out of their own pocket. This is the equivalent of a civil engineering company designing a collapsing bridge because everyone showed up at work hungover, or skipped safety calculations because they just take too damn long and time to market is critical.
If you think gee, this was just a bunch of photos, man, it's not like a bridge collapsed, how certain are you they didn't end up traded on the black market, or used for blackmail? Bet-your-company's-profits certain they weren't?
3. If this was deliberate policy -- not just accident, but a conscious business decision that was then reverted and declared a breach -- then whoever came up with it and/or approved it should be facing jail time.
Edit: also, it pisses me off that people are trying to decide how responsible we should be about what we do based on other fields. They don't fine companies that write crashing firmware for planes or cars, or they fine them X amount; clearly we're only doing computer stuff, so we should be fined less, no?
What the hell? First, they are fined (see, for instance, Toyota, who were fined $1.2B for their infamous acceleration firmware bug). And second, even if they weren't, we shouldn't be aspiring to do the worst thing that's still acceptable! We should be striving to do better than everyone else, not settling for "well, at least we're not worse than civil engineers"...
That said, the same changes won't likely occur with sites like FB unless it can be proven that the leaked data led to loss of life or physical harm. They create incentives for people to happily be the product. How do we prove that damage has occurred to the product? Have any forums popped up where people share stories of harm to their families as a result of data leaked from FB?
I can imagine GDPR being useful in the EU for corporate FB accounts. Wasn't FB working on a work-specific version of their site? If so, corporate legal teams would get involved in leaks, I would imagine.
I can imagine GDPR working very well for consumers as well, and it seems we are up for some real legal entertainment in the next few months/years :-)
Edit: It also wouldn't surprise me if it gets worse before it gets better. If I was a publisher right now I'd seriously consider blocking access from EU countries. (But that would of course be an invitation for a small, agile publisher who'd succeed either with a micropayments based approach or a context based ads approach.)
"Well, leave!" isn't an option. They can't leave. Quitting Facebook when you're an active user means you lose a huge amount of social contact. I can think of a dozen people I know who are there because it's how they send baby pics and the like to family. They're non-technical and don't care about federated mastodons, they just want to see their niece and go to their high school reunion.
So yeah, they get really mad at this stuff, but the network effect is so strong you can't simultaneously convince the entire graduating class of whatever to plan reunions via some new thing when 1) everyone's already on Facebook and 2) they've been using it for so long it's part of their workflow.
CRIPPLING fines to start. Or shut down the freaking company if you can't secure it. Not everything is forgiven with a "we're sorry" for the 27845th time. Private pictures can and do ruin lives. (We can question the wisdom of posting private pics on FB, but after all, it's a huge company and they said the pics were private.)
Kerching
Without regulation massive companies are entirely unchecked, there is virtually never market pressure to fix problems like this.
I could probably get away with murder, but for some reason I'm not out on the town strangling prostitutes.
Why do companies always need an "incentive" to not be anti-social? Why can't CEOs simply derive pleasure from delivering a quality service in exchange for some advertising eyeballs?
Sounds like a good time to reiterate the advice: Don’t upload things to the internet that you don’t want to be on the internet. That way there won’t be any of your things on the internet that you didn’t want to be there.
It's always hilarious how people try to pretend that it's easy to just drop out of society and the systems that people use to keep in touch. Sure, you can live like the Unabomber in a shed in Montana with no phone service, but having that be the only way to keep your personal data safe from leaks is a bit much to ask. People should be able to live their lives, take reasonable but not extraordinary precautions to safeguard their privacy, and have some expectation of privacy as a result.

Unfortunately, there is so much data being collected on everyone, so many intrusions into our private lives, and so little care being taken by the stewards of that private data that it is not, it turns out, a reasonable expectation. And the onus for solving that problem shouldn't be on individuals. We shouldn't be forced to live our lives in fear of digital representations of our appearance being leaked onto the internet, the way someone might once have feared an ordinary photograph could steal their soul. Rather, those who go to great efforts to destroy the boundaries of personal privacy should be heavily regulated to prevent them from doing so, and heavily incentivized to safeguard private data whenever they are in possession of it.
This means anyone in the world can upload an image, tag you in it, and it will show up in searches for you. It still won’t show up on your profile if you have confirmations for that enabled, but still.
This means that if you started to upload a photo to the uploader wizard and then thought better, the photo is still out there.
A bright, big popup right over main facebook.com (and peripheral sites/apps), dismissible only after you scrolled it all the way down and confirmed you'd read it, saying "private photos of millions of users were leaked" in big bold letters, would go a long way.
Like the saying goes, “One death is a tragedy; one million is a statistic” — Facebook has made all its privacy blunders and issues over many years a statistic...something people may nod their head at, feel bad for a moment and go back happily to the same company’s platforms.
Unless lawmakers around the world do something, nothing will materially affect Facebook (the company). Even if they do, I personally have no faith that the company is capable of changing unless people at the top, like Mark Zuckerberg and Sheryl Sandberg, are out.
From an industry perspective: we call ourselves "engineers", but real engineers are held accountable when they sign off on using an untested metal alloy in bridge joists and people die when the bridge collapses. Facebook's constant bad engineering may not kill people directly, but it does lead to a lot of really important information being stolen, people's financial futures being ruined, and who knows what other consequences for their users. If you still work for Facebook in this day and age, you should be ashamed of yourself; I know people can justify just about anything while claiming that they'll "make it better from the inside" or because they just need to collect a fat paycheck and are comfortable and don't want to look for something new, but we need to fight these impulses anywhere we work.
Especially if we know the baskets have goals not aligned with our own, despite it being oh-so-convenient, but also not centralized in the first place.
What about, for example, pictures sent in a private message?
I'm so very glad I deleted my account months ago.
I did as well. One thing that stood out to me in the article was that users who were impacted by the breach would be notified via a Facebook message. What about people who were impacted by the breach who no longer have an account?
The paywall advertises a "Premium EU Ad-Free Subscription" which is more expensive than the standard subscription and explicitly states "No on-site advertising or third-party ad tracking" as one of the perks.
Trying to buy it has the following:
> By subscribing, you agree to the above terms, the Terms of Service, Digital Products Terms of Sale & Privacy Policy.
On the privacy policy, we have this:
> When you use our Services, third parties may collect or receive certain information about you and/or your use of the Services (e.g., hashed data, click stream information, browser type, time and date, information about your interactions with advertisements and other content), including through the use of cookies, beacons, mobile ad identifiers, and similar technologies, in order to provide content, advertising, or functionality or to measure and analyze ad performance, on our Services or other websites or platforms. This information may be combined with information collected across different websites, online services, and other linked or associated devices. These third parties may use your information to improve their own services and consistent with their own privacy policies.
There is absolutely no mention of the "Premium" ad-free subscription in the privacy policy at all, so they are still granting themselves the right to stalk you all over the place even with the premium, more expensive subscription.
Not to mention, the privacy policy page itself loads a handful of different trackers before any kind of consent was even granted. I can see Google Analytics, something from "c.go-mpulse.net", something else from "bam.nr-data.net" explicitly sending my user-agent in the URL (why? They'd get it in the headers anyway), Google News JS, Google Pay and the New Relic JS agent.
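You can spot this kind of pre-consent tracking yourself by scanning a page's source for external script hosts. A minimal sketch (the HTML snippet and the `FIRST_PARTY` domain are hypothetical stand-ins, not the publisher's actual page, and real pages also inject trackers dynamically, which this misses):

```python
import re
from urllib.parse import urlparse

# Hypothetical page source; in practice you'd save the privacy-policy HTML yourself.
html = '''
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://c.go-mpulse.net/boomerang/XYZ"></script>
<script src="https://bam.nr-data.net/agent.js"></script>
<script src="/static/site.js"></script>
'''

FIRST_PARTY = "example-news-site.com"  # assumed publisher domain, for illustration only

# Collect the host of every <script src=...> with an absolute URL,
# then keep only hosts that don't belong to the publisher itself.
hosts = {urlparse(src).netloc
         for src in re.findall(r'<script[^>]+src="([^"]+)"', html)
         if urlparse(src).netloc}
third_party = sorted(h for h in hosts if not h.endswith(FIRST_PARTY))
print(third_party)
# → ['bam.nr-data.net', 'c.go-mpulse.net', 'www.google-analytics.com']
```

Anything in that list that loads before a consent banner is interacted with is exactly the behavior described above.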
My only response to this is a big "fuck you" and this link: https://outline.com/zd5du7 so you can read the content without any of that garbage and without paying them since they don't even deserve a single penny.
Customer support by email says I have to provide a copy of my driver's license or passport to "secure the account". I said that's not reasonable; companies leak too much personal data, so you can't have any more of mine, I'll just open a new account. They replied they'd just change the phone number (now no longer requiring the supposedly required photo ID). They did, and that was the end of it.
- No explanation why the verification code process would not work.
- None of my IDs have my email address, account number, or phone number on them, and the account doesn't even have my name on it. Giving them photo ID does jack shit for the purpose claimed.
- If the account's security is questionable, they should not only require text verification of the new phone number, but also remove my stored payment methods and require me to re-enter them. AFAIK credit card verification requires a CVV and a phone number matching the credit card account. That seems like the right way to secure the account, rather than bullshit photo IDs.
https://www.theguardian.com/technology/2017/nov/07/facebook-...
I have never seen an analysis of it.
They accelerated the planned shutdown for exactly that reason.
Whether you believe data was exfiltrated is essentially a reflection of how much you trust or distrust Google.
Maybe spend some time over the Xmas period having 'The Conversation' with our loved ones about their data safety?
There should be GDPR consequences of this - it's time that law got properly put to the test.
We will see how this plays out, but there should be a fine nevertheless (because others have been fined and they reported it).
Not this particular thing per se but, you know, it's Facebook. As recent history has proven, these things kind of come with the package.
The long-term solution to this mess should come from users abandoning it which is happening gradually based on recent reports.
Where will the people go? If it's other software it might end being as bad or worse.
Data leaks happen to every tech company. As users/customers, we won't have knowledge of the leaks unless they are publicly reported.
How can you "socialize" these days without using at least one of these internet social/media platforms?
The only ways to avoid giving them your data are either to be totally reclusive or to be a tech geek who relies only on niche tech products that aren't mainstream.
What if they are used as highly valuable networking platforms for your job? Some people's entire business model relies on these sites. The platforms also work hard at keeping their audiences captivated and engaged.
"bugs" is a catch all word, it covers everything from a pesky typo in UI to bugs like this, severe security issues, meltdown/spectre, VW bugs, and so and so forth.
Of course no jail time for a typo, but why not jail time or severe financial and career consequences for severe bugs, especially when it can be shown that the bug was caused by intentional decisions, malicious intent, sloppy testing, a rushed product, etc., and not by genuine mistakes, similar to medical malpractice?
Of course lawyers will love it, but it can improve the overall situation.
And yes, I'm a software engineer and do know what I'm talking about.
If there's malicious intent, most likely the business owners, PM, or engineering management are responsible, but in some cases the software engineers are.
If it's due to a rushed product, certainly the management and not the software engineers or QA.
and so on..
It depends..