Some people advised me not to go because it would only harm my name and brand, but I think I should. The Facebook teams are going to give a presentation of some new plans on which they want feedback. For internal pushback they need critical people from outside Facebook, which I'm happy to contribute to.
To make it more interesting for the outside world I'm going to ask a few questions about Facebook in general (privacy-wise). And that's where I need some help. What questions do you want Facebook to answer?
Facebook agreed I could use the answers outside of the meeting (with the exception of sharing from non-Facebook attendees).
[1] https://simpleanalytics.com
[2] https://twitter.com/adriaanvrossum
Somewhat more constructive: Facebook seems to have an unhealthy appetite for collecting _all_ user data, including privacy-sensitive information. But let's be fair: it is definitely not the only company on the quest for Big Data insights, which always seem to be at least one data point away. Does Facebook have information on which data points it really needs to make a commercially viable user profile? Which data points are privacy sensitive? Is Facebook looking into alternatives for those privacy-sensitive data points? If not: can Facebook enumerate them, ask its users for explicit consent to collect them, and ask for explicit consent in the future for any new data points?
Good luck this afternoon. I hope you get some insights.
I will try to ask as much as possible, and I really like your questions about which data points are useful and whether they are privacy sensitive.
Thanks!
"I understand many if not all of your employees, and even your interns, are technically capable of accessing at least some data from any user, should they decide to do so against Facebook's will. I also understand the repercussion for this is that they would get fired and potentially sued. However, this is not accepted practice in every company that handle such sensitive data on users' personal lives. Moreover, it is easy to imagine adversaries and targets for which the risk of getting fired and/or sued is easily worth the benefit of obtaining a particular user's private data. How, then, do your security experts, who take security seriously and who surely understand the notion of 'defense in depth', justify that the proper safeguard is an employment/legal threat, and that there should not be a technical barrier preventing interns or other normal employees from accessing any user data?"
Bonus points if you can get them to talk about such occurrences (which they almost certainly won't) and to explain why users should trust that they're handling this properly when they're unwilling to report sufficiently precise information on such incidents.
We need more people who are willing to try and solve problems, not just be critical. Thanks for being willing to have a conversation with them. You're making the right call whether you are able to have an impact or not.
What if he makes a negative impact?
Perhaps Facebook is not looking for solutions to privacy problems, and this is just a marketing strategy/theater/smokescreen: creating news events where FB is linked to people in the privacy scene.
If you look realistically at recent news events, I believe this is the most likely scenario.
Willing to solve problems is not the same as willing to be a prop for a problem.
This is an "are we off the hook, please say yes" meeting, not a "we don't know how to fix these glaring ethical issues we're profiting from, please help us (for free)" meeting
New Sincerity is exactly the reactionary politics in which Facebook thrives.
They are just looking to fine tune for optics. The knowledge they gain from the focus groups just helps them make their message more palatable.
I think of FB that way because they are masters of doublespeak, weasel words, etc., which is the common behaviour of dishonest politicians.
IMO many of the questions posted here can easily be deflected or handled with conversational techniques that any politician or lawyer knows well.
You want an airtight position, built on a detailed understanding of how they have typically deflected in the past. And because you are asking, you are probably the right person to do this.
Harari tried, and despite being brilliant and knowledgeable, he was simply talked over: https://www.youtube.com/watch?v=Boj9eD0Wug8 Though I suspect he is aiming for a softer approach.
Instead of a pile of disconnected questions, I would suggest developing a clear list of requirements, statements which must be true as a set, in order for a social system to have an acceptable level of privacy.
The list should be iterated upon, and not sent to them prematurely. It should be built on best practices and knowledge of privacy experts from leading institutions. Then it could be broadly endorsed. Then it could not be as easily weaselled-around.
This can be a mutually-beneficial transaction -- the powerful entity that needs to manage perceptions gets a boost, and the participants get a reputation boost for being seen involved in powerful circles. Witness that the HN poster's business is being promoted, just by being invited. (Which is a potential conflict of interest for the experts, if they're supposed to be representing some truth or public interest, but they probably have to play along for this personal boost.)
One thing that can possibly upset this transaction is if there's a channel for uncontrolled speaking out around it. Say, the format is a televised/streamed roundtable, and an expert with the mic decides to burn bridges with the organization and others like them, while saying things the organization really doesn't want them to say. (The motivation could be altruistic/duty, or calculated career grandstanding.) Or, in a tightly-controlled format, the expert who wants to never be invited to that kind of thing again could attend and then immediately bite the hand that just fed it, by ripping it on Twitter/YouTube/Medium/news/op-eds/etc.
I've seen a lot of experts play-along for their careers (in this kind of thing and analogous transactions elsewhere), and sometimes you see modest amounts of pushback by people who are still playing a political game, but rarely you notice a person who won't get on the slippery slope of game-playing at all yet who manages to have impact there.
(Personally, I'd be a terrible politician even if I wanted to be, and I just want to quietly solve technical and societal problems, while someone else fronts the band.)
The question I've always wanted to ask Facebook is how much is their data worth? No discussion of privacy at Facebook is interesting unless the discussion concerns money and their bottom line. They undoubtedly have people inside Facebook calculating how much specific bits of PII are worth to them, and what it would do to their bottom line if they stopped collecting them. IMO any discussion of privacy that doesn't quantify it in terms of money is basically a waste of time. They're a company and money is all they care about.
As a corollary to value ask them about risk. How much do they calculate the risk of holding all that PII to be? How much would their bottom line be hurt if they lost it in a breach?
It's only the small advertisers who want their adverts targeted at people in their country and on their topic, that's it.
That sort of targeting can be done simply by checking which country the visitor's IP is from and what the topic of the content I'm sending them is.
The whole targeted advertising thing is 100% a gimmick that advertisers don't care about.
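The country-plus-topic targeting described above could be sketched like this. Everything here is illustrative: the helper names are invented and the IP ranges are made up, not real allocations; a real system would consult a GeoIP database.

```python
import ipaddress

# Toy IP-to-country table for illustration only. The ranges below are
# invented; a real deployment would use a GeoIP database instead.
COUNTRY_RANGES = {
    "NL": ipaddress.ip_network("145.0.0.0/8"),
    "US": ipaddress.ip_network("3.0.0.0/8"),
}

def country_for_ip(ip):
    """Return the country code whose (toy) range contains this IP, else None."""
    addr = ipaddress.ip_address(ip)
    for country, net in COUNTRY_RANGES.items():
        if addr in net:
            return country
    return None

def should_show_ad(visitor_ip, page_topic, ad_country, ad_topic):
    """Contextual targeting: match only on visitor country and page topic,
    with no per-user profile involved."""
    return country_for_ip(visitor_ip) == ad_country and page_topic == ad_topic
```

The point of the sketch is that this level of targeting needs no behavioural profile at all, only the request's IP address and the page being served.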
While many advertisers don't care about it, to others it's very important. For example, if you visit a product page but don't complete a purchase, showing you ads for that product on different pages [1] is much more likely to lead to a sale than showing untargeted ads.
[1] https://en.wikipedia.org/wiki/Behavioral_retargeting
(Disclosure: I work on ads at Google. Speaking only for myself.)
While small businesses don't sound as awesome & big as Coke, there are a lot of small businesses.
Edit: my IP address has placed me in another state, over a six-hour drive away, before. I wouldn't find that useful when targeting local people for my store.
2. Do your apps “skim” the contents of device clipboards and send this info off device without user intent to do so?
And one open-ended question to try to gauge how open they’re being about the whole process:
3. What information do you collect that would surprise or upset privacy-conscious individuals?
I feel like if you ask questions where you have to quote your own words like this, you're basically begging them to be interpreted differently than you intend. I'd be crystal clear about what is being asked.
How’s this?
“Do your apps access the contents of device clipboards and send this information, or any modified version of this information, off the device without explicit user consent to do so?”
Be aware that their PR guys could use your name to dilute your previous critical commentary once you have gotten involved and are part of their 'consulted expert' club. This could potentially leave you fighting their PR which will likely just end up with a muddy mess.
Be prepared is what I'd say, a reputation is on the line for you and not much for them.
2. How can they delete the data associated with the above?
3. Info on how they group personal data from WhatsApp, FB and Instagram
4. Who do they share such data with?
5. Who within FB is responsible for privacy policies, etc.?
I feel like this is a misguided notion. Facebook doesn't need to create "shadow profiles" for anybody to achieve the same effect: they can just pull together the data on-demand (e.g. say when you create an account, they could scan others' contact lists for a match for your name), without aggregating them together into a 'profile' beforehand. Unless you really intend to ignore that possibility (which I doubt, given the effect would be exactly the same), you probably want to approach it differently than talking about 'shadow profiles'.
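The on-demand matching described above can be sketched in a few lines. This is a toy illustration with an invented helper and made-up data, not a claim about how Facebook's systems actually work; it only shows that the same effect as a precomputed "shadow profile" falls out of a simple scan at sign-up time.

```python
def find_matches(new_user_name, contact_lists):
    """On sign-up, scan other users' uploaded contact lists for the new name.

    contact_lists maps an account owner to the set of contact names they
    uploaded. No aggregated "shadow profile" needs to exist beforehand;
    the join happens on demand and yields the same information.
    """
    return [owner for owner, contacts in contact_lists.items()
            if new_user_name in contacts]
```

So a question framed around "do you keep shadow profiles?" can be truthfully answered "no" while the equivalent lookup remains trivially possible, which is why the framing matters.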
Funny how this question always seems to generate distracting and misdirecting responses.
The simple fact that detractors seem unwilling to address is that FB and countless other internet advertisers are stalking billions of unaware people on a micro level using all sorts of shady and opaque techniques and compiling the most detailed psychological profiles on the most number of people in history.
The public has only just begun to contemplate the massive national security and mental health problems that this mass stalking and manipulation creates.
On 5, I would hazard that it's those teams. Of course, the buck stops at Zuck.
* Which part of Facebook did the initiative come from - privacy policy, or (maybe) PR? How many of the people in the room are from (communications/PR/crisis management/some other related team)
* Is it genuinely an attempt to listen to critics and try to improve? (Can they point to examples of improvements they've already implemented?)
* What will the outcomes of this initiative be? How will they summarise and communicate their action points; how will any such points be followed up?
Come PSC (Performance Summary Cycle) time, how do they justify a "Meets All" or "Exceeds" evaluation?
I talk to my significant other on messenger. It gives me nightmares that any employee at Facebook could access that conversation at any time in the next thirty years.
It’s going to be really interesting when people from my generation start running for office. It’s conceivable a Facebook employee might think it’s “worth it” to check a candidate’s private messages, since he’s a racist Nazi and deserves it, or whatever.
edit: not a candidate though, would be harder to punch down on someone with political clout and a legal team.
1. How can someone who does not have an account prevent themselves from being tagged and/or identified in uploaded photos? Corollary: why isn't the tagging and identification of a person an opt-in feature only?
Why isn't it implemented yet?
Facebook's PR team and legal team are arguing two completely separate things right now, and I'd like management to explain how they reconcile those views.
I'd like to know whether their lawyers are right that users have no expectation of privacy, or whether Zuckerberg is right that privacy is the future of Facebook. If Facebook's lawyers aren't misrepresenting the company, then I'd like to know why Zuckerberg and management are so hesitant to make the same arguments in public press releases.
Try to find videos of FB officers (Zuck, Sandberg) who have already been publicly grilled.
Most likely on a corporate level, FB employees already know how to answer and respond to most of these privacy questions.
That means you need to figure out their initial canned responses, what assumptions they’re building on, and prepare a line of questioning/reasoning to chip away at their logic in follow-ups.
- How does the average customer know they have achieved "privacy"? I have a feeling that they have many privacy features, but they're turned off by default.
- If you start with the end in mind, what does success look like?
If they agree, ask them if there is anything blocking them from studying the cases where the effects are negative on individuals and groups.
If it is possible to list the kind of content where likes and views are having negative consequences to society that data(counts not content) should not be stored on Facebook server or shown to Facebook users.
Right now there is too much emphasis during privacy debates on all data.
There is no distinction being made between the like and view counts that cause the ALS challenge funding to be produced, a positive to society, and the like and view counts that reinforce my antivax aunt's beliefs.
Some of these counts are harmful, some are harmless and some are useful. Why store or display the harmful stuff?
We might both agree that your aunt's beliefs are harmful but the anti-vax society that your aunt is a part of will argue that us blindly following experts is harmful.
Should Facebook be the ones to decide what's harmful to society? If yes, don't be surprised if they consider what they are doing not harmful.
Those counts don't just affect my aunt, they affect me too. If both the left and the right can agree that the numbers are having an effect, then the narrative changes. Currently we don't even acknowledge that the root cause of a lot of problems is not the content but the counts.
Those counts aren't just used by Facebook mind you, they can be used by anyone to trigger a particular group or an individual. The content used to do the triggering is just a superficial piece of the story.
Once data is captured it never goes away. As time passes and as it aggregates with other similar data, it actually becomes much more valuable.
So, continuing along, hypothetically, what are you going to do if capturing personal data in exchange for "free" services is not a business that should exist?
I understand that right now you're engaged in a long and drawn-out split-the-baby campaign, where you try to assure privacy advocates of your intentions and that there's some magic sauce involving algorithms that will solve everything, but what if that is not the case? What if your business model is built on harming people by encouraging them to make trades for personal information where, once we all figure out what we're doing, none of us would agree to fifty years from now? How will you know? Will you tell us? Do you already know? What are your plans?
If you truly want to respect privacy and are on the side of people living their lives without being constantly examined like lab rats and having every piece of their existence recorded for any hacker to see forevermore, what are your plans for knowing that it's not working out? What's your tripwire, your exit plan?
Because frankly, if you don't have one of those, then this is all just a PR exercise, right? You've already decided that you win, you just haven't figured the details out yet.
You can restate the question several different ways, but it all boils down to "How do we know you're serious about this?" Because so far it just looks like a bunch of the usual public relations BS.
What's the right level of control users should have over their data?
Then as a follow up I would ask what's keeping Facebook from implementing those controls.
Unless this was already covered in an acceptable way after the Cambridge Analytica f*ckup (I haven't followed what actions Facebook took afterwards to address the issue), I would also ask about what are they doing about policing bad actors, companies trawling or leaking users' private information or abusing it. How are they going to better prevent that in the future. Once it's outside of Facebook they've already lost control of the situation.
I legitimately can't think of a more privacy-friendly way to do that. If you're paranoid enough to believe that no analytics is the only right solution, you probably have DNT on, and this is one of the rare cases in which it's actually respected.
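Respecting DNT is mechanically trivial, which is part of the point above. A minimal server-side sketch (helper names are my own, not any particular analytics product's API): the `DNT` request header value `"1"` means the user opted out of tracking.

```python
def analytics_allowed(request_headers):
    """Honor Do Not Track: the header value "1" means the user opted out."""
    return request_headers.get("DNT") != "1"

def record_pageview(request_headers, log):
    # Only count the hit if the visitor has not opted out via DNT.
    if analytics_allowed(request_headers):
        log.append("pageview")
```

The check costs one dictionary lookup per request; the rarity of services that perform it is a policy choice, not a technical one.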
2. What will be simple to use mechanisms / technologies / standards employed by FB to allow users to identify and delete their private information?
3. Will those privacy control mechanisms be standardized across Facebook products / technologies?
4. Will there be an effort to open source technologies / standards with respect to user privacy, so they can be peer reviewed and if good implemented by others in the industry?
Thanks for your efforts!
Years ago in the Snowden docs there was a diagram of a link into Google's infrastructure where they could take the SSL off and put it back on again, fooling people into thinking everything said about SSL and HTTPS implied actual privacy.
Since this is a taboo, 'not this again' type of question, can you think of ways to ask this in such a way they can only lie?
For instance, what guarantees can Facebook offer to their users that their messages are not being mass intercepted by Five Eyes?
I am fine with police with a job to do getting someone's texts, e.g. if someone is in a road traffic accident while they were texting on WhatsApp, I would gladly have the police get access to that person's data. However, the mass surveillance and the chilling effects that go with it are not good for society. It is a breach of privacy. If the government does such things it is still illegal. Even if they write laws that say it is okay, it is not. So rather than sweep this topic under the rug, I would like the answer from Facebook as to what they are doing and what they would do if their customers were subject to mass surveillance from Five Eyes.
I don't think it is unreasonable to ask this.
[1]https://www.nytimes.com/2019/06/18/opinion/facebook-court-pr...
If they want to show respect for privacy a user ought to be able to deep-delete (meaning, from backups too) any and all information they ever posted in any form on FB. This might even include information that was the result of inference from posted data.
I would like a setting that, by default, erases all of my posts older than, say, 30 days.
I would actually pay for this. Not a lot. A nominal amount, like $10 or $20 a year for “premium” options. No problem at all with that concept.
Privacy, amongst other things, should mean the user owns their information, not the service. If I can’t ensure my information is deleted I am one data breach or one disgruntled employee away from losing my privacy.
In this age of vindictive “the internet hates everything” polarization, privacy is critically important.
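The auto-erase setting described above boils down to a timestamp filter. A minimal sketch, with an invented helper; the actual deletion call is left out because Facebook exposes no such bulk-delete setting or API that I'm aware of, which is rather the commenter's point:

```python
from datetime import datetime, timedelta, timezone

def posts_to_delete(posts, max_age_days=30, now=None):
    """Return the ids of posts older than the cutoff.

    posts: iterable of (post_id, created_at) pairs with timezone-aware
    datetimes. A service offering the "erase after 30 days" option would
    run this filter periodically and delete the returned ids, including
    from backups.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [pid for pid, created in posts if created < cutoff]
```

That the feature is this simple to specify underlines that its absence is a business decision, not an engineering constraint.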
I’m a small business entrepreneur and I’m frustrated that to compete well in my sector I would have to advertise on Facebook. Their ad system currently seems intractably unethical because they know and actively use so much user data that users have not knowingly given away for the purpose of advertising. I don’t want to be asked in the Final Judgment why I paid into such a scheme of abuse — which is what it currently seems to be.
Just say no and hit send.
> In February 2018 Nicholas Thompson and Fred Vogelstein of Wired wrote a deeply reported piece that mentioned the 2016 meeting. It was called so that the company could “make a show of apologizing for its sins.” A Facebook employee who helped plan it said part of its goal—they are clever at Facebook and knew their mark!—was to get the conservatives fighting with each other. “They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would.” Another goal was to leave attendees “bored to death” by a technical presentation after Mr. Zuckerberg spoke.
Would FB be willing to work with a neutral third party group of user experience designers? Let's call them the PWHUX Board for Privacy White Hat User Experience. (Or maybe something else, PWHUX sounds a bit rude in English.)
This PWHUX Board would create standardized user interface conventions for disclosing and controlling personal privacy settings. This same group might work with other datahoovering businesses to establish multi-vendor standards.
You could ask if they plan to let users know exactly where their data will end up (internal only, third parties, and which ones?), and to opt out, possibly per purpose.
And of course, GDPR globally.
...
You see what I'm getting at? They understand privacy just fine when it's their own privacy.
I think you would just be a fig leaf.
Seriously, it's ludicrous to offer just a "home alarm system" and a ride to work (which I also assume is to their current job... why the hell should they keep doing the same job?) for a moderator who's now going to be in perpetual fear of getting killed. Those people may well no longer be able to work like they used to, for any employer.
[1] https://www.theguardian.com/technology/2017/jun/16/facebook-...
Terrorists don’t go around killing people for banning them from forums.
> Terrorists don't go around killing people for banning them from forums.
It seems incredibly arrogant to assume that you know what terrorists will and will not do (especially when used as a rebuttal to someone expressing concern for real people who have been put in this situation).
I don't have any experience interacting with terrorists, but growing up in a poor education system in the southeastern U.S., I've witnessed my fair share of gang activity. I have seen incredible confrontation/violence erupt as a consequence of amazingly trivial actions.
It does not seem far-fetched to me that an extremist group would possess the potential to respond disproportionately to a perceived act of disrespect or aggression. Given the circumstances, it's hard to find a charitable interpretation of why you would suggest otherwise.
Well, I don't know what to say...
> The moderator said that when he started, he was given just two weeks training and was required to use his personal Facebook account to log into the social media giant’s moderation system.
This seems like reckless endangerment to me.
That is, maybe if FB owns the data, advertisers buy it, and states/others hack into it, the right solution is to "push the arrow through" rather than extract it. Make the data (or most of it) public. Publish it. It's not really "private" in a meaningful way. The subject (object?) does not have control of and/or knowledge of the dataset describing them. Also (this relates to my last point) data is not the sum of its parts. A lot of what the data is only exists at the aggregate level, and without publication users can never have control, ownership, or any rights to these crucial aspects of their data.
To put it in the form of a question: are there ways of arriving at a better state, with less distrust and paranoia, that involve opening data rather than just better protecting it?
I'm not suggesting that it's simple or that I know exactly how it should work. But, if advertisers had the same access everyone has, I think it'd be less of an issue. If the default was "data is public," I suspect we'd find better ways of dealing with data that truly needs to stay secret.
As an aside, unconnected to privacy, data has become a new class of IP. We may legally consider it copyrighted (raw data) or patentable (trained NNs), but as a practical matter it is a new type of IP of rapidly growing importance. There are massive, world-changing examples of what can happen when we manage to create cultures of "public IP" or sharing. The scientific revolution was (arguably) directly related to the new culture of publishing experimental results. CS was irrevocably changed by free and/or open source software, especially compilers, operating systems, and libraries. The World Wide Web, in lots of ways. The pace of the current ML explosion is directly related to and enabled by open source, free software, scientific publishing, and "open IP" generally.
Imagine how held back we would have been, if those cultures of sharing hadn't emerged. I think data sharing is probably similar in this regard to compiler code or scientific experiments. Openness creates value, potentially a lot of it.
Privacy is a meaningful reason/excuse for closed data. I think it's worth trying to solve these two together. Dunno how to phrase a question for that.
Kind of like how FB's performance became part of annual review & promotion rubrics for employees recently?
Can other employees spike projects started by anti-privacy Gordon Gekkos to improve short-term metrics?
A reference read: https://news.ycombinator.com/item?id=19959064
Why are there not more granular privacy controls?
Why isn't what a user sees of their friends limited to posts whose audience includes them? I don't need to see what someone "commented on".
If that question is deniable, then does FB make no effort to guess at individuals' budgets (i.e., household income, rent/mortgage, monthly subscriptions, etc.)? Does FB grant people privacy for what's in their bank accounts?
But Facebook has a strong monetary incentive to never forget anything, ever. They have an incentive to make it unclear just how much data they're keeping about their users. They have a strong incentive to be as opaque as possible. And even if they let users be forgotten, they've got a strong incentive to make that hard to do.
How can Facebook balance its responsibility to shareholders to earn profits with its responsibility as ethical humans to allow people to be forgotten? I do presume that, as people, they want to be ethical (and I'm sure someone will say I'm naive for believing that).
And how can Facebook make its decision on where they lie on that spectrum clear to their users, so people can make informed decisions about what they want to share and do on the platform?
The hardest decisions businesses have to make are when to give up profit by doing the right thing. And the most profitable companies are the ones run by sociopaths for whom this is not a difficult problem.
Is there anything in particular that drives your participation? The reasoning is peculiar.
When they knew they could.
In addition, you might want to review the questions from when Zuckerberg was in front of the European Parliament. The MEPs asked some good questions and Zuck basically weaseled out of them. I'd love to see the same questions brought up again.
And also, info about shadow profiles.
That isn't legally binding at all.
Here is a good story of a guy who tried to get all the data the company had on him without anything close to a real answer: