It's ironic that the law has been in force for the last two years, yet companies only woke up last week. A lot of those mails are purely informational, with no (clearly marked) link to a consent panel, so I assume that ignoring them means those companies won't be allowed to spam me anymore.
Those emails that ask you to click to continue receiving marketing are a red flag: either those companies had no legal basis previously, or they're just cargo-culting other companies even though they already have a perfectly valid basis to keep in touch (such as you being an actual customer). Check out your favorite big-company SaaS signup today (Jira, say): you will typically still not see any explicit consent checkboxes, because with an existing customer relationship they are not needed.
From that perspective, this whole trainwreck makes considerably more sense.
In my case, many of them assumed silence as accepting the new terms.
> We encourage you to take the time to review our revised Privacy Policy and Terms of Use. By continuing to use Microverse on or after May 25th, 2018, you acknowledge our updated Privacy Policy and agree to our updated Terms of Use.
> What do I have to do? You don't need to do anything as these changes will automatically apply to you. If you don't want to accept the changes, you can unsubscribe below and we will remove you from our database.
> Opt out: If you have not already opted out of receiving marketing communications from Bugsnag and would like to, you can do so at any time.
> We are clarifying that all of our users, no matter where they are located, may contact us at any time to review the personal data that we have of theirs, request that we delete that data, or withdraw their consent to receive promotional announcements from us.
> The updated Privacy Policy automatically comes into effect for all Envato users on 25 May 2018. So your continued use of the Envato sites from that date will be subject to it.
The flood of emails is a nice reminder of how many services you're signed up with, too. Some even with multiple e-mail addresses.
The rest know well that they got your address without your consent and are now trying to get that consent from you so that they can legally keep your data.
I personally don't really feel like keeping my email is a violation of my privacy. If they're not "processing" it (that feels like code for "data mining") is this really required? I mean my email address is literally a public means of contacting me. It's kind of fun that they decided to use a one-way hash, but this story doesn't make me feel like the internet has really been improved.
Touch Surgery sounds like an honest company, so for them this was just some extra burden. But the same law prevents ShadyAdtechCo from getting datasets from several companies and joining them on the e-mail column to build a profile of you without your explicit, informed consent in several places.
You could track c_sharp_enthusiasts@myCompany.com, but that is not identifying a person.
There's a big difference between one fact and a billion, of course. But that's what keeps happening on the internet -- what feels like one small harmless thing turns out not to be harmless at scale, with no real warning that you're crossing from one regime to the other.
It's also a means of identifying you. It's not just the piece of data itself: even if only your mail address is stored, it can be sensitive information due to context. Say a data breach leaks the full membership list of a company's mailing list. You might not mind too much if your publicly known mail address shows up on Amazon's mailing list, but you might care if it shows up on the leaked mailing list of transexual-midget-porn.com
How would you feel if you had to give your home address to the baker to buy a pastry?
It was conspicuous. I asked if he'd asked me that because of GDPR. He said yes. I said no.
... and elsewhere ...
> On the other hand it also was not very hard for us. We are not a creepy company.
> This is not to say that preparing for GDPR didn’t take us 100s of hours. It did.
A company that wasn't much affected spent hundreds of hours? I think it would be reasonable to call that onerous.
The different, and fairer, question would be whether that time was justified.
100 hours is 12.5 working days. That is not much to protect your users' data.
At $dayjob we are at hundreds of thousands of dollars in staff time and legal fees (mostly updating and reviewing existing contracts). We don’t do anything shady with user data, and already have a robust data security program due to our industry.
A family member’s small business which packages meats for the grocery is similarly burdened to the tune of hundreds of thousands.
That’s a huge waste repeated millions of times over around the world. They could have just targeted this at the big web companies and Adtech firms with some simple qualifiers. This law isn’t really much good for consumers, but it’s very good for lawyers.
> We engaged a dedicated GDPR consultant
I wish not-so-hot takes like this were more widely read and, along with sane enforcement, contributed to the sorely needed education of the general population on these topics.
To remember that a certain email address has explicitly withdrawn consent, they need to store something. The alternative is to send out a new email the next time someone adds them. I think the interpretation of the GDPR for this particular kind of information storage is still open, but they have done everything possible to keep it safe. Should the list of hashes be leaked, the best an adversary can realistically do is check known emails against the list of hashes.
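A minimal sketch of what such a hashed suppression list might look like (all names here are hypothetical, and this is just the idea, not the company's actual implementation; note that a plain unsalted hash is still brute-forceable against a list of candidate addresses):

```python
import hashlib


def email_hash(email: str) -> str:
    """One-way SHA-256 of the normalized (trimmed, lowercased) address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


class SuppressionList:
    """Stores only hashes of addresses that withdrew consent, never the plaintext."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def withdraw(self, email: str) -> None:
        # Record the withdrawal without keeping the address itself.
        self._hashes.add(email_hash(email))

    def has_withdrawn(self, email: str) -> bool:
        # Before sending an invite, check the hash of the candidate address.
        return email_hash(email) in self._hashes
```

Normalizing before hashing matters: without it, `Alice@Example.com` and `alice@example.com` would produce different hashes and the suppression check would silently fail.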
Active concern for me: GDPR will promote a bunch more homegrown looks-fine-but-actually-busted crypto schemes. I don't think GDPR will be used to enforce that even in the case of breach, and I'm not sure it should -- I think we should make better schemes available instead.
One of the GDPR notes says:
> “[p]ersonal data which have undergone pseudonymization, which could be attributed to a natural person by the use of additional information, should be considered to be information on an identifiable natural person”
Consider that you are running some kind of controversial/embarrassing site of a sexual/political/otherwise sensitive nature. You keep hashes of people who once were users but unsubscribed or something like that.
If that database is leaked, an attacker could re-hash a list of political figures, celebrities, or just some big list of well-known email addresses and thereby find out which of them were users of this sensitive site.
So to me it seems that pseudonymized/hashed emails still count as PII and have to be treated as such.
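The re-hashing attack described above is trivial to carry out; a sketch with made-up addresses (everything here is hypothetical):

```python
import hashlib


def email_hash(email: str) -> str:
    """Unsalted SHA-256 of the normalized address, as a naive site might store it."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


# The "anonymized" leaked database: only hashes, no plaintext.
leaked = {
    email_hash("senator@example.gov"),
    email_hash("someone@example.com"),
}

# The attacker re-hashes a public list of well-known addresses
# and checks each one against the leak.
candidates = ["senator@example.gov", "journalist@example.org"]
recovered = [e for e in candidates if email_hash(e) in leaked]
# recovered == ["senator@example.gov"]
```

Because the hash function is public and deterministic, the leak effectively answers membership queries for any address the attacker can guess.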
If you can accept some level of false positives, you could make the hash too narrow to be usefully reversible. For example, if only sixty people will ever subscribe or refuse to subscribe, a 24-bit hash is plenty to reject mistaken attempts to subscribe somebody who doesn't want in, but good luck guessing which GMail user is "2ca24b".
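A sketch of that truncation idea (function name is made up): keep only the first 24 bits (6 hex characters) of the digest. With ~60 stored tags in a space of 2^24, the chance that some random other address collides is roughly 60/16.7M per check, while any single tag still maps back to millions of plausible addresses.

```python
import hashlib


def short_tag(email: str) -> str:
    """First 24 bits (6 hex chars) of SHA-256 over the normalized address.

    Narrow enough that a leaked tag can't be usefully reversed,
    wide enough that accidental collisions among ~60 entries are rare.
    """
    full = hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()
    return full[:6]
```

The trade-off is deliberate: the same ambiguity that makes the tag useless to an attacker means you must tolerate the occasional false positive yourself.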
Another problem is, what if the email address changes hands - maybe even the whole email domain changed ownership. You probably need a way for people to change their minds, as that then also covers the case where the person behind the address changed.
There are ways to do this better. Let's say it's one party trying to figure out whether it has seen an email address before. (That's the case in the article; there are also schemes where you and another entity can figure out whether you both saw any of the same email addresses, but that's not what we're discussing here.)
We already know how to take relatively low-entropy things and store them securely so we can check whether we've seen them before: password storage! However, password storage works a little differently. You _know_ which entry you're checking against, because you have a secret (the password) but also an identifier (the user name), so you can recompute against the same random salt. This randomization means attackers need to try every password for every user. That doesn't directly work for us, because we just have an email, but it's close.
Three parts worth considering: KDF, PRF and truncation. Firstly, your (deterministic, for reasons mentioned above) KDF turns your low-entropy input into a higher-entropy key. But (again, for reasons mentioned above) attackers still just have to try every email should they compromise your database. You can fix that problem by also adding a PRF (pseudorandom function) that you rate-limit vigorously. Think of a PRF as a keyed hash -- the usual example is HMAC-SHA256. If you're capable of keeping PRF key material safe but might leak a database dump (not unreasonable), the PRF forces the attack to be online: an attacker can only validate guesses as long as they have access to the PRF, and the PRF comes with audit trails and rate limits.
Finally, you can choose to truncate the output. Because the output space of your PRF will be much, much larger than the input space of email addresses, a match out of the PRF gives you almost perfect certainty that you've seen the email address before. That goes for you, and an attacker. If, let's say, you have another way to validate if you've seen the user before (but it's expensive, say, you have an encrypted offline dataset but it's AES-GCM'd and you can't afford to decrypt the entire thing every time), truncation gives you a neat way to _probabilistically_ say if you've seen an address before.
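The KDF → PRF → truncate pipeline described above might look like this minimal sketch (key handling, context string, and output length are all hypothetical choices, not a vetted design; in practice the PRF key would live in an HSM or secret store, never next to the database):

```python
import hashlib
import hmac

# Hypothetical server-side secret; must NOT be stored with the database dump.
PRF_KEY = b"server-side secret kept out of the db"


def tag(email: str, out_bytes: int = 8) -> bytes:
    norm = email.strip().lower().encode("utf-8")
    # 1. KDF: deterministically stretch the low-entropy address.
    #    No per-user salt is available (we only have the email),
    #    so a fixed context string stands in for it.
    stretched = hashlib.pbkdf2_hmac("sha256", norm, b"email-dedup-v1", 100_000)
    # 2. PRF: keyed HMAC-SHA256. Rate-limit and audit calls to this step;
    #    without PRF_KEY, a leaked table of tags can't be checked offline.
    mac = hmac.new(PRF_KEY, stretched, hashlib.sha256).digest()
    # 3. Truncation: optional; trades certainty of a match for ambiguity
    #    about which address produced the tag.
    return mac[:out_bytes]
```

The determinism is the whole point: the same address always yields the same tag, so a lookup is just an equality check, while every offline guess is blocked at step 2.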
That particular part assumes security through the adversary not knowing how the components of the security model are implemented, a.k.a. security through obscurity. Rule no. 1 in security: always assume the adversary knows exactly how everything is implemented and can replicate it.
They have a legitimate business interest in not spamming someone if people try to sign you up multiple times, and since the email address is hashed, all they can use it for is to determine if they've sent you an invite before (and potentially when they did so, or when you declined the invitation).
Maybe they could get in trouble if they also retain information on who is trying to send you invites and creating a graph and a shadow profile based on this type of information, but it sounds pretty clear that this isn't something they're doing or are interested in doing.
Then again, I've used my throwaway account to subscribe to a lot of things, and I feel like I've gotten around 20 GDPR mails so far. Why don't I have more? It's interesting (and scary) how many sites I haven't dealt with for years still have my data.
I guess we're all free to ignore those emails, if you don't really care that they have your data (as has been the case until today).
No Medium, I must NOT agree to your privacy policy and your cookie policy, because to use and share my data you need my FREE consent. AND you can NOT deny me reading an article without giving consent, because then the consent is not FREE, and it is NOT strictly necessary for the service.
Medium: either you allow me to read blog posts on your webserver without FORCING me to allow you to collect my data, or you don't. Choose. But stop fucking annoying me with lying banners.
Medium works perfectly well for my purposes without that banner being displayed. I can open up developer tools and delete that node.
If I don't click agree, does that mean that this information isn't collected? Because tracking cookies are still placed.
Now what is interesting is that I don't remember being asked for consent for them to place a cookie to log the number of articles I read in a month as part of their sign-up funnel.
They could probably make this compliant by storing the counter in your local storage and never sending it anywhere, just having a piece of JS that essentially does: if (Number(localStorage.getItem("visits")) > 6) { displaySignupPopup(); }
They can't, but they can, for example, ask for a fee to read the article. They don't deny you reading it, but they don't have to give it to you for free either.
Either you share your data, so they can make money to operate the site, or you don't, but then the content is not free.
I expect some sites will choose this route.
Of course they can charge a fee. They should.
> Either you share your data, so they can make money to operate the site, or you don't, but then the content is not free.
No. My data is not a commodity for exchange. The GDPR makes that VERY clear. I cannot pay with my data. Full stop. There is money for that.
Do you want my opinion, or what the GDPR says?
My opinion is that my data is not a commodity for exchange. We (I) usually use money for that.
The GDPR says:
"When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract."
So they can politely ask me for my data to read the blog post, and I can politely refuse to give it to them. And if my data is not necessary for letting me read the blog post (it is not) then they have to let me read it anyway.
> If you don't like those terms, go read something else.
That is what I did. I left the site. But this is not what I am angry about. They are lying by saying that I MUST consent to their terms.
> If you refuse their terms, it seems obvious to me that they should be able to refuse to serve you. Maybe I'm missing something here, but this sounds like asking for a free lunch.
User data is not money. User data is user data. I am not asking for free lunch.
> Medium uses browser cookies to give you the best possible experience. To make Medium work, we log user data and share it with processors. To use Medium, you must agree to our Privacy Policy.
I must agree to logging user data and sharing it with processors?
EDIT: come to think of it, it might be a new, GDPR-specific, dark pattern. I can use the site without clicking "I agree", and the existence of that button sort of implies the consent is not assumed. The wording of the message ("you must agree") is just trying to bait consent.
EDIT2: I just read[0] that the biggest sites in my country are treating closing the GDPR popup as giving consent to everything. That definitely does not sound like explicit, informed consent. I sincerely hope it'll land them in a world of hurt.
--
[0] - (PL link) https://zaufanatrzeciastrona.pl/post/klikasz-x-w-komunikacie...
Yes and no. Yes, in the sense that you can argue that. No, in the sense that the GDPR just says "no, you cannot ask people to pay with personal information". So either they must show me the article even if I opt out of giving my information, or they must make reading their article conditional upon something else (say, paying them). They CANNOT make it conditional upon my consent to use my personal data, because that's just coercing me into clicking "yes", which is exactly what the GDPR is supposed to curb.
That said, it's not the first time I've seen something like that this week. I wonder if some companies aren't simply testing if they can get away with it.
Given the number of ugly popups I had to click within the last few days, it already is.