So as a user, the benefit would be better ads. Honestly, I'll probably leave FLoC on if given the option (although I use Firefox and Safari, and as far as I know neither will support it).
https://m.youtube.com/watch?v=KbKdKcGJ4tM <- best commentary on ads ever
Personally, I prefer to either have a need for something (I want to solve problem X) and do some research based on that, or share an experience with a friend who then makes a recommendation.
Funniest Facebook ad by the way: I work for <employer> and my partner gets ads for <products of employer> on Facebook.
FLoC is not immune to this: it relies on the device being able to track users and then provide advertisers "blurry" access to that tracked data. The problem is that we already have plenty of other tracking mechanisms that we can't reasonably restrict and that will interfere with the privacy protections built into FLoC. You will always be able to fingerprint and FLoC will always provide some fingerprintable entropy.
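One way to see why the extra entropy matters: independent fingerprinting signals multiply the number of distinguishable users, i.e. their bits of entropy add. A back-of-the-envelope sketch in Python (the signal sizes here are made up for illustration; the ~33k figure comes from reports about the early FLoC origin trial's cohort count):

```python
import math

def bits_of_entropy(num_possible_values: int) -> float:
    """Bits an observer learns from a signal with this many equally likely values."""
    return math.log2(num_possible_values)

# Illustrative (invented) signal sizes: each independent signal's bits add up.
signals = {
    "user agent":  1000,   # hypothetical: ~1000 distinct UA strings
    "timezone":    38,
    "screen size": 500,
    "floc cohort": 33000,  # early FLoC trials reportedly used ~33k cohorts
}

total = sum(bits_of_entropy(n) for n in signals.values())
print(f"combined entropy: {total:.1f} bits")
```

Even with these toy numbers the total comes out near 39 bits, comfortably above the ~33 bits needed to single out one person on Earth; the cohort ID alone contributes around 15 of them.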
Even if FLoC was trustworthy enough to do what it claims, your interest cohorts alone can reveal your secrets. There's the classic example of Target knowing a woman was pregnant before her father did, for example. Yes, Google is going to try to filter out sensitive interests from cohorts, but that's an additional layer of trust you can't control. What if Google's definition of a sensitive interest differs from yours?
HN has ads, in the form of job listings for YC companies. They don't bother me in the slightest, and if I lived in the US they might even provide value to me.
The hundreds or thousands of ads I'd see if I surfed without an ad blocker do not.
Maybe you do have a product I'm interested in and don't know about. But there aren't hundreds of things I'm going to buy every month.
If it was like: here's 5 products we think might be highly relevant to you every month. Sure, sign me up.
But it's (almost) never that. Most of the time the entity selling ad slots realizes, hey, if we sell more ad slots, we make more money. So they keep pushing the button to get a reward. Until they die because they overdosed on ads.
The problem with ads, no matter how "personalised", is that they are sent to everyone by default. Almost no one changes defaults. Often there is not even an option to opt out. Whether two users got the same or different ads ("personalisation") is not the issue. The issue is that they were sent ads when they did not consciously request them.
Thus, the fact that jedberg likes ads is not an argument for sending ads to everyone, in the same way that some other user disliking ads is not an argument to stop sending ads to jedberg.
Users are not being given a choice. When they are given a choice, e.g., to reject tracking on their smartphones, the result can be a decision that the online advertising industry dislikes.
Tech company employees can call themselves users, but there is a serious bias and conflict of interest that other users being subjected to ads do not have.
What if we just send ads to tech company employees. The tech worker cohort. They will not complain because they believe ads are "necessary". Problem solved.
why?
no, seriously, why?
"personalized" ad creeps me out, and I really don't understand how people can tolerate a banner like "We know you buyed Some Thing from amazon so we think you would like to purchase Related Thing from us" and not freak out
Additionally why do you want to view those recommendations while you are actively trying to do something else?
You don't. Use an adblocker and be done with them.
Like, I buy an oven; why are you showing me ovens? Stop overfitting. Just know that I'm an active nerd and advertise active-nerd stuff to me.
In an anonymized (or not) way, can you tell me something you found via an Instagram ad that you were not aware existed and then purchased?
This. FLoC has no value proposition for end users.
Your position regarding cookies lumps two categories together: cross-site cookies and same-site cookies. Tracking isn't exclusive to cross-site cookies, but it is orders of magnitude more effective (and invasive) with cross-site cookies. With the near-elimination of cross-site cookies, it turns out you can have your useful client-side state and eat it too.
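For what it's worth, the same-site/cross-site split is now expressible in the cookie itself via the `SameSite` attribute. A minimal sketch using Python's standard library (the cookie name and value are hypothetical):

```python
from http.cookies import SimpleCookie

# A session cookie restricted to same-site requests: it still supports
# logins and shopping carts, but it is not sent along with cross-site
# subrequests, which is what makes it useless for cross-site tracking.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"            # hypothetical session token
cookie["session_id"]["samesite"] = "Lax"   # sent on top-level navigations only
cookie["session_id"]["httponly"] = True    # not readable from JavaScript
cookie["session_id"]["secure"] = True      # HTTPS only

header = cookie.output(header="Set-Cookie:")
print(header)
```

The point of the design choice: the useful client-side state (the session) survives, while the attribute strips out exactly the cross-site behavior that trackers relied on.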
I would call it a charity to the site owner whose (presumably) free content you're consuming. What it brings to the user is the ability of the site owner to continue making content.
Admittedly, there's a tragedy-of-the-commons issue: I have no individual incentive to enable FLoC. But, similarly to your DRM example, at some point publishers could require it, no?
They could. However, they wouldn't have a way of enforcing that you play along and don't use a separate FLoC ID for every site you visit.
"FLoC leaks more information than you want"
"The end result here is that any site will be able to learn a lot about you with far less effort than they would need to expend today."
Hmm. From someone (Firefox Team CTO) that probably knows this space well.
So how about this: Google must not, and cannot, implement FLoC without it being a cross-browser standard; that is to say, if any one of Microsoft, Apple, or Mozilla vetoes FLoC, it's dead.
This is how standards are supposed to work. Google should not be given the power to make a thing (like AMP) and just force it upon everyone.
We MUST start regulating Google's every product development, I'd rather it get held up for a year in court before it sees the light of day.
This isn't how internet standards work, or how they have ever worked. Take the development of HTTP/2:
[2009] Google researches how HTTP could be improved and develops SPDY: https://blog.chromium.org/2009/11/2x-faster-web.html https://dev.chromium.org/spdy/spdy-whitepaper
[2010] Chrome implements SPDY, and they start gathering real world performance data.
[2011] Several rounds of iteration to make it faster, more reliable, and fix bugs.
[2012] Major websites built out support, Firefox adds support, the process of standardizing it with the IETF begins: https://datatracker.ietf.org/doc/html/draft-mbelshe-httpbis-...
[2013] More and more sites build support, CDNs enable it by default
[2014] Safari adds support.
[2015] Standardized as HTTP/2: https://datatracker.ietf.org/doc/html/rfc7540
Standardization follows cross browser support, and cross browser support follows single browser support.
This is the path FLoC is following: it's currently incubated under the WICG (https://github.com/WICG/floc) and Chrome is developing it. Other browsers are paying attention and evaluating: that's what this Mozilla article is about. If at some point we get to a version that other browsers are happy with and choose to implement, then it could potentially be standardized.
(Disclosure: I work on ads at Google, speaking only for myself)
The fact that Google is an ad provider and a browser vendor and trying to implement a browser-level tracking API is very alarming.
As mentioned in the Mozilla analysis, Google is also saying that they're who determines which sites are considered "protected" categories... which is the cherry on top of all of this nonsense.
I'd really like to understand how someone working on this thinks that it improves the web for everyone... not just Google.
Google couldn't care less about "cross-browser standards". They've been ramming Google-designed and Google-authored "standards" through standards bodies for years now, and increasingly disregard any objections from other browser implementors. And, sadly, there are only two browser implementors left that have any relevance: Safari and Firefox.
It requires only 33 bits to uniquely identify an individual [0].
I would be interested to learn whether FLoC employs k-anonymity measures, and to see a report on it.
If I am retired, female, live in the 830* zip3, and own a sedan, it is probably hard to identify me. Add that I am Korean and am searching for thyroid cancer treatments on Tuesday at 8:43AM local, then I am way more identifiable. I don't understand how FLoC works, and how it gets around this type of intrusion.
The only solution I am aware of is to dramatically limit the category depth. But that sort of defeats the purpose of micro market segmentation. And that's a good thing, IMO.
[0] https://www.eff.org/deeplinks/2010/01/primer-information-the...
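The 33-bit figure is just log2 of the world population, and each attribute an observer learns chips away at your anonymity. A quick illustration (the attribute sizes are illustrative assumptions, not real demographics):

```python
import math

world_population = 7_900_000_000  # rough 2021 estimate

# Bits needed so that every person on Earth can get a unique label:
bits = math.log2(world_population)
print(f"{bits:.1f} bits")  # ~32.9, hence the "33 bits" rule of thumb

# Each fact about you subtracts bits from your anonymity: a signal
# with n possible values reveals log2(n) bits. Illustrative values:
# gender (2 options), a ZIP area shared by ~100k people, 1-in-50 car type.
remaining = bits - math.log2(2) - math.log2(100_000) - math.log2(50)
print(f"after three attributes, only ~2^{remaining:.0f} candidates remain")
```

This is why the zip3/vehicle/search-time example narrows things down so fast: a handful of mundane attributes already eats most of the 33-bit budget, and a detailed cohort ID can finish the job.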
Think OutBrain a few years ago, who were egregiously intent on serving certain clickbait to certain consumer sets. With FLoC, your winnowing and funnel becomes much easier (rather than serving rotten banana ads with just one trick, you KNOW your consumer has a propensity for Dunkin Donuts and you can increase your ad coverage). Everyone wins but the product -- your eyeballs.
Subtle note: this would probably fall under the "sensitive topics" category discussed [1] and not be tracked, but insert a niche hobby here and your point still stands.
[1] https://support.google.com/adspolicy/answer/143465?hl=en
It's such a pity that online advertising has turned in this direction. It started out so well-intentioned! Search ads were related to your search, and Google AdSense showed ads related to the content of the page you viewed. No invasive tracking necessary!
And now we have come to this. Tracking everyone everywhere has become so pervasive that an operating system vendor announced just this week that it is building a first-party VPN into the OS, in a desperate attempt to reduce this ubiquitous tracking...
What "creepy" is, is an entirely subjective opinion that changes from person to person. I'd say there are totally ways.
When I go to a grocery store and swipe that card for a discount, I know it's just being used to correlate purchases and track me, but I don't view it as creepy at all. All they got from me was my payment information, but they literally already have that; they get it every time I swipe my credit card anyway. So what is creepy about me explicitly acknowledging that I'm being tracked in a reasonable way when I'm in their store?
The creepy part is when people do things without telling you. That stores keep a record of your purchases when you swipe the card is probably not creepy. I'd assume most people assume that's what happens.
But if they then share the information they collected on you with others, without asking for your explicit permission, that's where it's starting to get creepy.
I'm not sure Chrome users are aware that their browser tracks every website they visit, creates a profile of them, and then shares that profile (in a supposedly privacy-preserving way) with others.
The second you open your browser you are exposed to risk. Many times I have had to tweak the default settings of my browser to comply with my (non-paranoid) requirements: basic things like setting DuckDuckGo as the default search engine, turning off various JS APIs like HTML5 Canvas and WebGL, using ad blockers and other add-ons, tweaking and hardening about:config, etc.
Call me a power user if you want, but all this hardening stuff should ship out-of-the-box.
I wonder if we're being led to focus on the wrong problem. I hate intrusive ads as much as everyone else. However, it's not only that "when you open your browser, you are exposed to risk". It's also:
- Every time you use Windows (without turning off all the bad settings)
- Every time you connect to a Cell tower (telcos openly sell your location data)
- Every time you use your credit cards
Now, I'm not saying those are OK, or trying to justify intrusive ads. However, I see an order-of-magnitude difference in the "violation of my privacy" between these cases. The media and certain communities keep focusing on ad tech because it drives clicks. But then we let the telco, insurance, and credit card companies establish a creeping normality around violating our privacy.
We don't spend nearly as much effort stopping telcos from directly selling our location data [1], but we have daily threads about companies indirectly using our location data for ad targeting. Do we have our priorities wrong? I can't shake the feeling that we're being led by a different narrative. The best outcome, of course, is good privacy laws and practices. However, focusing on the wrong priority like this is how we let other (much more severe) violators (insurance, telcos) get away with their creeping normality.
[1] https://www.marketplace.org/2020/02/28/fcc-set-to-fine-big-t...
And while Verizon, T-Mobile, etc. all have opt-in data collection and marketing programs, it's often impossible to opt out of tech companies' behaviors. Because telecoms are required to get your opt-in consent to use your data, they generally offer incentives to join rewards programs that include the additional marketing permissions as a requirement.
For example, Verizon Up Rewards requires you to enable Verizon Selects, which lets them collect information about your web browsing activity and such: https://www.androidauthority.com/verizons-new-rewards-progra... Not something I'd want to participate in, but Verizon is in effect paying its users for that data, something tech companies never do.
> Opinions are my own.
I find it impressive how well this line still singularly identifies the employer of anyone who uses it. :)
By existing in 2021 (whether you use computers/tech or not) you need to accept that your data will be collected, analyzed, and sold. It will be leaked, combined/processed, and abused in many different ways. I would be surprised if there were a single human on earth Facebook did not have a profile on at this point. I suspect the NSA could bring up profiles of all 7 billion humans and reconstruct their entire lives from the digital/physical breadcrumbs they leave every day.
Now that we can collect so much data, so rapidly (at the speed of light), and can analyze it in real time and store it forever, it seems every digital application is focused on obtaining that valuable information and storing it to use in some way (usually for profit).
Even electric cars require apps and digital connectivity before they can be used/charged.
Data is the new gold.
On top of disabling JS, even a simple ad blocker like uBlock Origin greatly diminishes the amount of profiling. There is no silver bullet, however; it depends on your threat model.
If you really don't want to be tracked and profiled, using the Tor Browser Bundle is worthwhile, but even that is problematic since it's heavily surveilled (at both the entry and exit nodes).
I mean, Brave kinda does that. It’s much more “hardened” by default.
I’d really like to be able to buy preloaded offline versions of certain websites to be able to use indiscriminately.
For things like embarrassing questions which I might want to search for within a given subreddit without broadcasting it to who knows what systems.
I don’t even necessarily care if there’s a result, or even if the information/responses/comments are a decade stale - i can live without current events.
I just want the peace of mind that I’m not being observed. That’s something that I’d pay for.
https://en.wikipedia.org/wiki/Wikipedia:Database_download#Of...
I don't have an opinion about FLoC per se but this piece feels like it's focused on finding flaws with it in the absolute, as if we didn't have pretty awful tracking now. I don't believe we can get to perfect, what with shadow browser fingerprinting techniques and all, I just want to know if it's an improvement.
If other fingerprinting techniques stay, then it's actually worse, since there is now an extra data point to better identify users.
Did Google ever seek proper peer review for FLoC before they started testing it on people?
That it came from Google is hardly surprising, as they are hell-bent on stealing every bit of information they can from everyone, whether or not that person has a relationship with them, let alone consented to the abuse.
I would be stunned if FLoC lasted more than a few months in the real world before Google just started using it as an additional source of entropy to spy on people across domains.
There's also nothing stopping ads from being relevant: when Google started AdWords (when "don't be evil" was still a thing), you got useful ads based on what you were actually looking at. Now you get nothing but repeat ads for something you searched for last week.
The idea that relevant ads require spying and abuse is nonsense, and Google's original destruction of the ad-tech industry demonstrated that non-spying ads based on page content were more than effective enough.
Of course, your uid implies that at best you're a pro-Google fan, if not an actual employee, so I don't see myself convincing you of anything.
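To make the contextual-targeting point above concrete, here is a toy sketch of choosing an ad from page content alone, with no user profile involved (the keyword inventory and advertiser names are invented for illustration):

```python
# Toy contextual (non-tracking) ad matching: the ad is picked from the
# words on the page, so no data about the reader is needed or stored.
AD_INVENTORY = {  # hypothetical keyword -> advertiser mapping
    "oven": "Baker's Supply Co.",
    "bicycle": "City Cycles",
    "thyroid": None,  # sensitive topic: serve nothing rather than target it
}

def pick_ad(page_text: str):
    """Return an advertiser whose keyword appears in the page, if any."""
    words = {w.strip(".,!?").lower() for w in page_text.split()}
    for keyword, ad in AD_INVENTORY.items():
        if keyword in words:
            return ad
    return None

print(pick_ad("Review: the best bicycle locks of 2021"))  # City Cycles
```

Note that sensitive-topic handling here is trivial and auditable (a local deny entry), whereas with behavioral targeting you must trust the tracker's classifier.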
1. Note the first sentence, and underlying assumption, of Mozilla's communications. This company is blinded by advertising payola and cannot see non-commercial use of the web as worth protecting.
https://blog.mozilla.org/en/mozilla/the-future-of-ads-and-pr...
https://blog.mozilla.org/en/mozilla/building-a-more-privacy-...
Google could enable FLoC in Chromium by default, and then (correct me if I am wrong) the browsers based on Chromium would have to actively disable it.
Mozilla of course is not based on Chromium.
However, Mozilla does try to match Chrome feature for feature and Google also is the hand that feeds Mozilla.