Users are already getting value for their data - the product showing the ads. People have a choice in whether they want to use Gmail and get a free best-in-class email client, backend storage, and worldwide access to their email. The trade is that they participate in an ad platform, which is what funds infrastructure on a scale never before seen in human history. Of course Americans and Europeans think it's totally fine to charge five to ten bucks a month for a service like that, even when it would mean forcing the growing lower classes, the developing world, and the tech-illiterate to use unsafe and frankly dangerous alternatives. Ads are a communal investment that lets us provide services to people who can't afford those services.
>> Users are already getting value for their data - the product showing the ads. People have a choice in whether they want to use gmail and get a free best-in-class email client, backend storage, and worldwide access to their email. The trade is that they're participating in an ad platform that
FB are currently earning $50b pa from advertising, which is largely premised on FB's data collection on and off the platform. What users get is, by-and-large, very similar to what they were getting in 2012, when FB was making $5bn. That 10X increase in revenue (and 10X increase in FB staff/cost) hasn't gone towards making more/better product. It has gone towards making better (often more creepy) ad-tech.
At least with television there is competition, and that means ad revenue has to go towards making more/better programming.
SAAS has negligible marginal cost and often strong network effects. The value of an ad platform and of many uses of data (especially ad-targeting) also scales exponentially. This is leading to bad, monopolistic outcomes.
Also, on the distribution side of television there is much less competition. The telcos are among the worst monopolies out there; getting a competitive cable or ISP landscape in the US is a laughable idea.
I hope Google, Facebook and others will develop a diversification strategy and move out of this shithole.
That said, putting a price on data may be what it takes to make companies take privacy and security seriously, simply because it might make it easier to argue standing in a lawsuit where data is leaked or mishandled. Similarly, putting a price on life seems insensitive, but wrongful death lawsuits motivate safety concerns.
Out of curiosity, why would this be morally wrong? To avoid reasoning-by-connotation, replace the word dividend with a word that connotes something positive (eg reparations): would you say that the functionally-identical ongoing payment of reparations to victims of pollution would be wrong?
https://rady.ucsd.edu/faculty/directory/gneezy/pub/docs/fine...
Here's a digestible Freakonomics bit which discusses the same:
http://freakonomics.com/2013/10/23/what-makes-people-do-what...
When there was no monetary price on being late, the price paid was moral. That is, being late was a bad thing to do, and you were supposed to feel bad about it. Once a monetary price was introduced, being late was no longer morally bad, as long as you paid the fee. The monetary price replaced the moral price, and it turned out there were a lot of people willing to pay in money who weren't willing to pay morally. And so late pickups actually rose after the fine was introduced.
I would guess that that is what OP meant. If you introduce a price, you can actually remove even stronger restrictions that are in place based on morality. Because as long as you're being charged, you can assume the price is inclusive of your moral hazard.
In the case of a dividend to pollute, you force people to choose between money/power/the economy and cancers and birth defects for themselves and their children.
This is perhaps where the analogy breaks; ignoring DNA and credit, data may lack cross generational effects.
Such data collection practices are core to the business models of some of the most valuable companies on the planet. "Google does it" provides far more social proof for diluting one's users' privacy than "a California digital privacy law acknowledges it exists."
The problem is that people value the free service more than their data. And companies value the data more than they value the service.
The tech and hacker community might value our data and privacy more than the service, but that's clearly not the case with most people.
Imagine a company, FaceTome, that provides the exact same service Facebook does, but FaceTome charges you a dollar per month and collects no data, while Facebook charges you nothing and collects your data. I guarantee you that most people would use Facebook and FaceTome would go out of business, because most people simply don't care about their data or privacy.
Ad targeting isn't going away, but I'm not so sure a pay-to-use service wouldn't stand a chance in the current climate.
Speaking the truth in this instance is both more productive and also should frankly be more scary than the hyperbolic distilled versions that are oft thrown around: mining user data with AI systems to sell products to measurably alter human behavior is an unregulated industry and likely should be.
This "dividend" nonsense is transparently ridiculous since it misses the point of the problem -- that incentives here yield damaging societal effects -- but again, here we see the result of the "Facebook sells your data" meme/falsehood: politicians conclude if they just pay users for the data, the problem is solved, right?
Nonetheless, you are elsewise correct: mining user data to sell products to measurably alter human behavior is an unregulated industry and likely should be.
But if it meaningfully increases the cost of using the targeted data, it could be an interesting trade-off.
Much as a carbon tax isn't designed to raise revenue, a personal data tax could discourage the "use it by default" mentality, since it would come with a cost.
For example, they may start asking for your address, phone number, real DoB, and SSN for tax-reporting purposes, since this would be income. If I have multiple Gmail/FB/... accounts I may have to drop some, or provide the same information for all of them, basically removing any doubt that I'm the owner of those accounts, and so on and so forth.
Moreover, since this wouldn't be limited to Google and FB, almost any service that collects data (which is nearly everyone) would potentially have to ask you for this information before letting you create an account. It could lead to the complete loss of anonymity on the Internet.
As I see it, Hertzberg's proposal that Newsom announced would have the state collecting on behalf of the recipient. I would hope this could be done without providing identifying information.
The alternative proposal, that companies pay users directly for their data, is the one we should be wary of, as you're correct: companies would need your info to pay you. It was mentioned in the article as an aside but didn't have any supporting commentary.
I still think this is a good strategy for the purpose of snuffing out the incentives for surveillance business models.
What I think should happen is, there should be some protections regarding our data. I should be able to say, ok, FaceBook, I wish to no longer use your services so you need to stop billing me (collecting my data).
Oh, and wait until you have to declare your "data dividend" in your tax declaration...
That'd be easier for companies (just send one check), and provide a privacy preserving proxy between the payment and payee.
Another serious counterargument: when a company is making billions from billions of users, how much could one reasonably expect? Revenue per unique user is tiny.
How do you set prices? How does this lead to better incentives and outcomes, rather than to a scheme where people get $4 a year and the industry gets a moral blank cheque?
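To put rough numbers on why the per-user amount is so small: using the ~$50b/year ad revenue figure cited earlier in the thread, and an assumed user base of roughly 2.3 billion monthly actives (an illustrative figure, not from the thread), the back-of-envelope looks like this:

```python
# Back-of-envelope: how much revenue a platform earns per user, and what
# a "data dividend" could plausibly pay out. Both inputs are illustrative:
# $50B/yr ad revenue (mentioned upthread) and an assumed 2.3B monthly
# active users; real figures vary by company and year.

annual_ad_revenue = 50e9        # dollars per year
monthly_active_users = 2.3e9    # assumed user count

revenue_per_user = annual_ad_revenue / monthly_active_users
print(f"Revenue per user per year: ${revenue_per_user:.2f}")

# Even a generous dividend passing 20% of that revenue through to users
# comes to only a few dollars per person per year.
dividend_share = 0.20
payout = revenue_per_user * dividend_share
print(f"Hypothetical annual dividend per user: ${payout:.2f}")
```

Under these assumptions the platform earns on the order of $20 per user per year, and a 20% pass-through dividend lands in the single digits, which is where the "$4 a year" figure above comes from.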
Currently, our Google and FB data is being monetized mostly via ad-targeting, or via (for example) optimizing the FB newsfeed algorithm so that you spend more time on FB, where they can put that ad-targeting data to work.
Google's captcha-powered self-driving cars demonstrate a point, but at least for now the value of data is mostly ad-related (or business/political/military/police intelligence, which overlaps with ad-targeting in worrying ways).
Paying consumers to be advertised to ... isn't that a Black Mirror premise?
As of now, I'm more inclined to the "make-the-data-public" direction. For democracy concerns, for example, it would be valuable if the public could
If so, do we have the technology?
And I invariably answer, in my best Idiocracy: 'Cause they pay me every time I do!
This is such an odd sentence to read, post-GDPR, because it sounds as if consumers have no idea what their data is being used for, and/or have no control over how it's processed, and just have to subject themselves to it.
The GDPR literally starts off by recognizing personal data and the protection thereof as a fundamental right, and regulates how others must subject themselves in order to process it.
As pervasive as the GDPR is, a Californian in California isn't subject to it, and outside of people interested in tech, I'd assume most Californians know little about it.
Cambridge Analytica happened at a time when neither processors nor consumers were yet very sensitive to the topic of privacy. This has changed dramatically, and the GDPR is not the cause of that change but a consequence of it.
Of course the GDPR is not binding in California, but it should be at least thought-provoking in California.