I found it VERY amusing that if you go to r/SEO, just yesterday there were moderators and flaired users (you know, the elites of the SEO community, lol) insisting much of this was "debunked" years ago.
They of course deleted their posts, but the threads are still up. What a den of scammers over there.
https://www.reddit.com/r/SEO/comments/1d1eqjj/comment/l5tvfw...
https://www.reddit.com/user/WebLinkr/
I love how reddit is turning into the new SEO scam overnight because of this stuff. Great work as always, Danny Sullivan!
These types first gain moderator status on a few subs, and then the spam begins (picture of spam: https://pixeldrain.com/u/a6qUPjTq ).
I haven't been able to find a single legitimate expert in the entire sub, and I've checked just about every flaired user and moderator.
You have lots of people like the above, or https://www.reddit.com/user/jesustellezllc/ who claims to run an agency in Fresno, California called Ozelot Media, but when you look him up there's nothing. When you google "SEO" + "Fresno California", Ozelot Media isn't even in the top 100 results. Lol, I thought that was the job of an SEO type? Why let that stop the grift though?
What answer do the engineers at Google working on this have for this violation of privacy?
We don't know who you are, you are just a number in a database, and we don't even know what number, we just get the total number of visits for each website, not who visited it. It is like counting cars on a highway, not following your car. Plus, it serves the useful purpose of providing you with better search results, the terms and conditions allow it, and it can be disabled.
Similar to how insurance companies have offered voluntary, “anonymized” data dongles for discounts that are now being used (or at least revealed to be used) to collect data most often used to reject claims.
This is not what a clickstream is. A clickstream requires that the sequence of clicks be preserved, and preserving that sequence undermines anonymity.
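To make that distinction concrete, here's a minimal Python sketch (the log data and field names are invented for illustration) contrasting aggregate visit counts, which are like counting cars on a highway, with an ordered clickstream, where the per-user sequence is exactly what undermines anonymity:

```python
from collections import Counter

# Hypothetical log of (user_id, timestamp, url) events.
events = [
    ("u1", 1, "news.example"), ("u1", 2, "shop.example"),
    ("u2", 1, "news.example"), ("u2", 3, "bank.example"),
]

# Aggregate counting: only totals per site survive; users are gone.
visit_totals = Counter(url for _, _, url in events)

# Clickstream: the per-user *sequence* of clicks is preserved.
clickstreams = {}
for user, ts, url in sorted(events, key=lambda e: e[1]):
    clickstreams.setdefault(user, []).append(url)

print(visit_totals["news.example"])  # 2 - no user attached
print(clickstreams["u1"])            # ['news.example', 'shop.example']
```

The second structure is re-identifiable in ways the first is not: a short, ordered sequence of visited sites is often unique to one person.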
It certainly is not "to improve the net or advertising" - that would be the lying part.
Google has done some good for the net, but the scales of their contribution are slowly but steadily tipping to the negative side.
Basically if you believe lies you tell yourself, they tend to turn into truths in your mind over time. Even if you were doing it “ironically.”
Personal opinion about working at Google (still not Google's opinion): I'm consistently impressed with how seriously this stuff is taken and the amount of work that goes into making sure that things like this sharing can't happen accidentally, and that user choice is respected. The engineers on the ground are absolutely making sure this all works, and most of us care deeply about user privacy. I have personally worked both on implementing new features that significantly push forward privacy, and on implementing privacy controls for regulatory purposes.
I believe the law is violated when it's sufficiently profitable -- it just requires VP permission.
No public sources for this except Jedi Blue, the old anti-poaching case, etc.
I'm sorry, but this is just wishful thinking. It might be what the spirit of the DMA & GDPR wants, but it is definitely not the reality, thanks to inadequate or outright non-existent enforcement.
There are businesses out there whose entire business model and revenue stream are based on violating the GDPR. Not some kind of internal conspiracy or rogue employee: the entire company is doing it in the open, and the results of its doings (targeted ads or spam) are visible out there for all to see.
Facebook, credit bureaus, data brokers, "consent management platforms", etc. All these companies' business models are big, obvious breaches of the GDPR. Yet, they are... still alive and kicking?
There is no chance that a concealed GDPR breach (whether intentional or accidental) will get addressed when the biggest intentional breaches are still allowed to continue out there in the open.
I suspect something very similar is going to happen with the DMA - Apple is already acting in bad faith but has yet to face any consequences.
The same answer you probably have for the millions of questions about the things you do that some other people find offensive to their personal views and beliefs.
Quite a bit.
So much so that now that I have what "everyone" asked Google for for years - that is, blacklists - I hardly use them.
Why? Because with Kagi I get much better results out of the box.
I am fairly sure Googlers will tell me there are multiple safeguards to prevent Google ads from affecting ranking, to which I just have to say that the results speak for themselves.
Please note: I have only used Kagi for two years. I am only one user. But I am a user with 20 years of experience with Google, and that has to count for something.
I don't know how people keep talking about it. The results, as you say, speak for themselves.
No matter what, whatever we ended up with was going to be shitty and exploitative.
I made my decision two years ago and I would probably do it even if it was just on par with Google, to support competition and to avoid supporting Google.
But in hindsight it is just exceptionally better. There is no going back unless Kagi does something monumentally stupid.
I'm still happy to put my money where my mouth is and do pay for services which are genuinely useful to me. But this is not the kind of internet I imagined when growing up.
Now it is like the media before the '90s: you need to pay a lot of money to be on the center pages of the newspaper.
But hopefully LLMs, which everyone is talking about now, are one of the answers to search engines in general. Beyond the AI hype, I see LLMs as a good evolution from PageRank.
It's a little general, but lately I use the expression "Complexity as Scam". Google always pointed to their "algorithms", playing with the term as if algorithms couldn't be adjusted to do whatever you want them to do. Initially the coined term was sound because it was based on a scientific paper, but as it evolved, the original PageRank idea seems to have detoured from being a "pure" graph algorithm.
Another context where I use "Complexity as Scam" is Web3. It is like Matryoshka dolls, where there is always one more step of complexity to prove a point, but it never ends.
A barrier whose erosion has been well documented over the last 10 years.
Google and the ad ecosystem they acquired was basically the flywheel that spurred content creation at scale. Anyone could jump in, follow a few guidelines and earn a living by producing content on the internet. The Youtube acquisition and monetization followed the same pattern.
Over time the market consolidated and got less and less competitive: fewer platforms with complete control of traffic and one-sided revenue-sharing agreements. The guidelines, so to speak, on how content should look and feel were made algorithmically stricter and stricter until everything looks, feels, sounds and reads the same.
The problem right now is that the platforms are still tightening their grip, and it's all tied to the approach of using AI to replace the content creators on the platforms, from Google to Spotify to Meta, and funneling the money saved to shareholders. And while the web has been shitty for a few years now, we're seeing a sudden drop in quality because the average user has no recourse or alternative. Neither does the average creator have the means of distribution and monetization (not just publishing, that's been solved) to even find, let alone meet, the new kinds of demand.
I'm certain that in a few years this will even out: new search engines, new aggregators and new feeds will emerge, but the content-money-network triangle remains a fundamental problem of the internet.
The internet is boring. And the trash is still there. It's just become reputable instead.
I've directly seen people successfully manipulate search rankings by having logged-in Chrome users search for a term and then click on a given page. Works like a charm (though the effect may not stick once the manipulation stops, unless organic users also prefer the page).
Creepy.
"Well obviously it's your fault for not picking the 'Don't Be Cool' option on subpage 27b-6, duh!"
The confusing thing is the crime itself is small on an individual level. The question is: does it add up cumulatively if a small crime is committed against many?
Before that, you can make it audible: <https://github.com/berthubert/googerteller>
[1] https://github.com/ungoogled-software/ungoogled-chromium/tre...
Does anyone know more about yoshi-code-bot and how these documents suddenly got published?
Was it a script misconfiguration? A manual push? Something else?
Created 1,891 commits in 19 repositories
All 19 are under googleapis.
This looks like a bot Google uses to publish their stuff on GitHub, so it's likely a misconfiguration.
Then there's the relentless parade of "alternative browsers" that are just Chrome skins - a phase IE also went through - that intentionally try to trick people into believing they're not just using Chrome, but with less security engineering and more scams.
Boosting "organic traffic":
- Brand matters more than anything else
- Experience, expertise, authoritativeness, and trustworthiness (“E-E-A-T”) might not matter as directly as some SEOs think.
- Content and links are secondary when user intention around navigation (and the patterns that intent creates) are present.
- Classic ranking factors: PageRank, anchors (topical PageRank based on the anchor text of the link), and text-matching have been waning in importance for years. But Page Titles are still quite important.
- For most small and medium businesses and newer creators/publishers, SEO is likely to show poor returns until you’ve established credibility, navigational demand, and a strong reputation among a sizable audience.
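For reference, the "classic" PageRank factor mentioned above is, at its core, a power iteration over the link graph. A minimal sketch on a toy three-page graph (invented for illustration, obviously not Google's implementation):

```python
# Toy link graph: page -> pages it links to (hypothetical).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

# Power iteration: each page shares its rank among its outlinks,
# with a (1 - damping) "random surfer" floor for every page.
for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outlinks in links.items():
        share = damping * rank[p] / len(outlinks)
        for q in outlinks:
            new[q] += share
    rank = new

# Ranks sum to ~1; "c" ends up highest since both "a" and "b" link to it.
print(sorted(rank, key=rank.get, reverse=True))  # ['c', 'a', 'b']
```

The leak's point is that signals like this now sit behind navigational/brand signals rather than driving rankings on their own.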
TL;DR: Clickbait + bot farms are the way to go. No wonder the internet is going to shit.
Notably, for people on HN, it looks like there is indeed an internal initiative to promote small personal blogs :-)
> smallPersonalSite (type: number(), default: nil) - Score of small personal site promotion go/promoting-personal-blogs-v1
For example, a small, personal blog might be great for solving a specific technical problem ("my dishwasher of model XXX has YYY problem"), but might be terrible for something like giving public health advice.
Java, is that you?!
> Omit internet tropes.
There was a brief period of time where I made decent money with it until Google deranked all the product review websites.
While bigger marketplaces have other ways of driving ranking.
Haven't had a chance to look at the API myself, but the first impression is that a lot of this was suspected by SEOs, though Google kept rejecting the ideas. Looks like clicks increase ranking for sure, which means click farms definitely have a legitimate business solution to offer.
- Privacy
- Tree style tabs

Sure, it isn't frequent, but it is frequent enough that once a day or so I have to open Chrome to do something.
I don't consider it a problem to use two browsers at the same time. I usually don't do the same thing with them, so having separate profiles can be an advantage.
Note that privacy is not the reason why I am using Firefox. It is just that I think that knowing both is a good thing, and they are both good browsers, so why not? In some cases, Firefox is better; in others, Chrome is better; most of the time, they are interchangeable.
The only time it's a problem is when a site detects Firefox and won't display unless you're using Chrome or IE. I've only seen that a couple of times in the years since I switched back.
It sends a lot of "analytics" and "tracking" to some of Mozilla's servers, but if you inspect the requests, those servers are actually behind Google's CDN, and Google does the TLS termination.
So... Google has access to all the data that Mozilla sends when it phones home. Some of it even has a unique identifying ID.
It’s left a bad taste in my mouth since they used the work of others to get to where they are, then when others do the same, they don’t like it.
[0] https://github.com/ungoogled-software/ungoogled-chromium
I wonder about this. If I click a link and read it and I find that it's garbage (e.g. got ranked based on SEO rather than useful content) does it count as a successful click? Worse yet, some of these sites have blatant errors that are only discovered after examination.
This is with respect to technical subject matter. Other searches, such as shopping, may not suffer from this kind of problem (or I have not noticed it).
I also wonder how Google knows a click is successful. If I open a link in another tab, does the browser tell Google how long I lingered on the site? Perhaps Chrome does but I use Firefox.
What if I <ctrl><click> to keep the search page open and open the "found" page in another tab?
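Nobody outside Google knows exactly, but the leaked attributes suggest click-satisfaction signals along the lines of dwell time and "pogo-sticking" (bouncing straight back to the results page). A purely hypothetical sketch of such a heuristic - the function name and all thresholds here are invented:

```python
# Hypothetical click-satisfaction heuristic; names and thresholds invented.
def classify_click(dwell_seconds: float, returned_to_results: bool) -> str:
    """Label a result click based on dwell time and pogo-sticking."""
    if returned_to_results and dwell_seconds < 10:
        return "bad"      # quick bounce back to the results page
    if dwell_seconds >= 60:
        return "good"     # long dwell, treated as a satisfied click
    return "neutral"

print(classify_click(5, True))     # bad
print(classify_click(120, False))  # good
```

Under a scheme like this, a ctrl+click into a background tab would only register if the browser (or the search page's own JavaScript) reports when you return and interact with the results again, which is exactly the part outsiders can't verify.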
var words = query.split(" ")
var results = executeQuery("SELECT al.url, al.desc FROM AdWords aw INNER JOIN AdLinks al ON aw.id = al.id WHERE aw.word IN (:words)")
if (results.size < 30) { /* TODO: call the actual search engine */ }
return results
Same service wrappers from two years ago: https://github.com/googleapis/google-api-php-client-services...
And yet the journalist included a screenshot with one of the weakest blurs I've ever seen... Why would you not excise the person's video portion completely? What good does it serve to have it included in the story? Even if that portion is faked, why would you offer potential signals like skin complexion, hair color, background picture, etc.? Why...
I haven't looked deeply into Fishkin's companies, but I wouldn't expect either to be on the user's side when it comes to privacy. Both companies seem to monetize clickstream data and personal information from users who probably didn't give informed consent.
If the source was trying to get this information to a responsible journalist who cares about privacy, I have no idea why they'd approach a company (not even a news organization) who seems to fund the erosion of user privacy.
I don't think you know what you're talking about. During Rand's tenure Moz was a subscription business selling access to marketing analytics tools. Those tools focused on the structure of the clients' sites themselves rather than any analytics they might have consumed.
Source: I worked at Moz for several of those years, and helped maintain those tools.
Isn't this the same type of "swirl" blur that Interpol was able to reverse even 10 years back? With advancements since then you're basically handing evidence on a silver platter.
I'm not sure I would feel safe reporting stuff to journalists nowadays.
It's also clearly from Google Meet, so... yeah. If he was worried about retribution (from Google, anyway), he probably wouldn't have been using a Google service.
This also explains why it's impossible for incumbents to unseat the winners in many search categories -- because they've literally been picked as the winners by humans at Google.
Looking at my Twitter/X feed, I also see an oddly similar dynamic. Certain accounts appear to have been manually boosted, showing up all the time -- whereas others posting even the same exact content will never appear.
Silicon Valley will loudly tell you all about how wonderful they are at "democratizing," but if you look under the surface, it appears they're just hand-picking the winners.
Is there evidence of that in the leaked documents?
I understand some of this directly contradicts things they have said in court previously?