> 1. Carry out robust age-checks to stop children accessing harmful content
> Our draft Codes expect much greater use of highly-effective age-assurance[2] so that services know which of their users are children in order to keep them safe.
> In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it. In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.
Before people try to brush aside these regulations as only applying to sites you don't think you use: the proposal is vague about what the guidelines cover. They include things like "harmful substances", meaning any discussion of drugs or mushrooms, for example, could fall within scope.
Think twice before encouraging regulations that would bring ID checking requirements to large parts of the internet. If you enjoy viewing sites like Reddit or Hacker News or Twitter without logging in or handing over your ID, these proposals are not good for you at all.
The best solution would be a government or banking API that emits a one-time token. No logging, and it self-destructs upon verification.
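To make that idea concrete, here is a minimal sketch of a single-use token issuer. Everything here is invented for illustration (there is no such government or banking API); it just shows the "no logging, self-destructs on verification" property the comment describes.

```python
import secrets

class TokenIssuer:
    """Hypothetical issuer of single-use, unlinkable age tokens."""

    def __init__(self):
        # Only the token values themselves are stored -- no user identity,
        # no timestamps, no record of which site asked.
        self._outstanding = set()

    def issue(self) -> str:
        # Called only after the issuer has confirmed the user is an adult
        # through its own channel (e.g. an authenticated banking session).
        token = secrets.token_urlsafe(32)
        self._outstanding.add(token)
        return token

    def verify(self, token: str) -> bool:
        # Verification consumes the token, so it cannot be replayed or
        # correlated across sites ("self-destructs upon verification").
        if token in self._outstanding:
            self._outstanding.discard(token)
            return True
        return False

issuer = TokenIssuer()
t = issuer.issue()
assert issuer.verify(t) is True   # first use succeeds
assert issuer.verify(t) is False  # token is gone after one use
```

The single-use property is what keeps sites from building a profile: two different sites never see the same token value.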
But aside from the user, no other party suffers any detriment from the current situation.
Whereas if their claims are true, and they really do know the ages of their website users, then these so-called "tech" companies can solve this problem without needing age verification: by not targeting people in certain age brackets with certain content, they can stop politicians from proposing legislation that requires age verification. But they refuse to do so.
Interesting.
You can easily tell a 30-year-old from a 10-year-old. But can you tell a 12-year-old from a 15-year-old? Or a 15-year-old from a 20-year-old?
So tech companies want a system that is officially approved, so that they are not held responsible if it fails.
By this I mean, no more fucking around with the timeline. We agree that any corporation introducing a feed based on anything other than date, newest to oldest, is subject to penalties and sanctions.
Will it stop 'innovation'? God I hope so. I am tired of innovation that farms up rage.
edit: From here we can start working on what algos CAN be included in customer facing crapola.
I'm not familiar with the system, but I assume it would necessarily have to reveal the sites you're verifying with to the State of California.
So it's less of a big deal, as long as you're okay with sending a record to the government about what site you're visiting every time you want to sign up somewhere or re-verify your age.
I'm sure someone in the comments will propose some cryptographic solution where neither party knows anything other than the fact that someone, somewhere, possesses a token associated with a person over the age of 18. If you think this is viable, you're not thinking like a kid trying to get around this system, nor a blackhat trying to take advantage of it: Many people would immediately set up a service that handed out age verification tokens in exchange for viewing some ads (the file sharing site model) if there were no limits and nobody could trace it back to the source. Any ID verification system must necessarily have some party able to verify the person to avoid abuse like this.
The problem is how to anonymize the provider side so that they don't know what sites you're visiting, but that could probably be solved with legislation. In California, at least. I wouldn't trust Congress with a bill like this.
I live in Norway and it doesn't seem that the problem is so severe here. Or is it simply that English speaking media is more willing to latch on to extreme events and make out that they are the norm?
There is a larger population of bad actors, a greater variety of underlying cultural/philosophical differences and thus conflict, and the algorithms that seem fine for a smaller contained country like Norway can produce a different quality of topics at a larger scale. It's not just algorithm thresholds either - people are simply more naturally prone to follow fads when there is multinational scale affecting the quantity and rapidity of the content and replies/likes that they see (dopamine and confirmation bias).
Unsure what the solution is - maybe more location-based weighting of suggestions? Conversely, you don't want to empower local predators. So far, attempts at moderating the entire English-speaking world by the standards of SF-cloistered young professionals and PhDs have been unwieldy and led to backlash.
Youtube is full of weirdass kid content. The ones I've seen are mostly weird / scary, on the border.
It's all garbage, but the YouTube algorithm recommends it more and more to the kids I've seen, because it notices the longer screen time.
I wouldn't say it's downright illegal content, but definitely not tasteful.
Oh hell yes it is. Everywhere Russia has an interest in has been fraught with serious issues. Germany, Poland, and France come to mind - there have been reports for years now concerning, in particular, the spread of far-right and/or pro-Russian content.
> I live in Norway and it doesn't seem that the problem is so severe here. Or is it simply that English speaking media is more willing to latch on to extreme events and make out that they are the norm?
You guys are simply too small to matter and have been in NATO from the start. In Sweden and Finland though, I had read reports of pro-Russian propaganda problems when they were in the process of joining NATO.
The unifying link is always Russia.
So, in most countries this proposal would be completely useless.
>far right that
>*crickets* far left *crickets*
Man I am glad years ago I watched Yuri Bezmenov's interview with G. Edward Griffin.
Well, Myanmar had a genocide blamed on social-media (at least in-part), so arguably things are worse there, not just equally-bad.
Their culture regards anything sexual the way the Taliban regards women not covering their heads: as an extreme taboo. As a fellow European, yes, I know, it's crazy.
With a century of public policy fuggery, the domestic stock has been induced into a situation where it refuses to effectively breed its next generations in comparable numbers to other competing stocks. It could be mistaken for accidental if the elite weren't then back-stopping tax and investment losses with importing supply from those other stocks.
If The West is purely cultural/ideal, then mere assimilation should be enough for its persistence. If The West is its people then this situation spells certain doom if not arrested.
The West's place at the top of the world is going to be toppled by a nation with less feminism and anti-natalism this century, unless The West can destroy all other effective competition.
But these algorithms will totally curate wildly disturbing playlists of content, because they have learned that this can be incredibly addictive to minds unprepared for it.
And what's most sinister is how opaque the process is, to the degree that a parent can't track what is happening without basically watching their kid's activity full-time.
Idk if OFCOM is implementing this right or not, but I think there would be a much greater outcry if more people saw the breadth of these algorithms' toxicity.
Isn't it quite obvious that this has never been their priority? Nor has protecting copyright.
If we're going to randomly throw blame around, then why not throw it on the ISP too? They're the ones ultimately serving the content. But I don't think anybody wants to open up that can of worms.
ISPs don't know what content they're serving up other than what host it's coming from 99% of the time.
- Should McDonalds be allowed to put crack in their happy meals or is it the parents responsibility to keep those meals away from their children?
- Should kids clothing be free of asbestos or is it the parents job to avoid buying that?
- Should baby formula be free of lead or is that the parents responsibility to check for?
If a company is deliberately pushing a product that is harmful and (arguably in this case) addictive to children, that is a problem regardless of the parental role.
For a fun and controversial example, look at the mRNA vaccines for COVID. Those had some properties that separated them from traditional vaccines, such that people relying on the label "vaccine" might have felt misled. As such, you would have expected government regulation to inform the consumer about the difference to expect in that scenario (which the government did).
A recent example of an algorithm going wrong is Reddit. Home used to show you strictly a feed of the subreddits you subscribed to, shown as a timeline. The most recent changes not only removed the timeline approach to the feed, but also inject subreddits you don't subscribe to and ask if you're interested in them.
For those unfamiliar, Ofcom is basically the UK's communications regulator.
Every single person living in modern society is "protected":
- You're protected from people individually deciding which side of the street to drive on
- You're protected from a bank telling you the money you deposited is "all belong to us"
- Even a "market" can't exist without "protection"
Once every person has their own army of ML engineers and psychos at their disposal, then there is an opportunity for personal choice in anti-social media.
Or much simpler, the anti-social corps can be required to allow each user to configure their own feed algorithm. This would be a useful "protection".
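As a sketch of that "configure your own feed" idea: the platform would expose the ranking rule as a user setting instead of a hidden engagement-maximising algorithm. The post model and mode names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int   # seconds since epoch
    likes: int

def build_feed(posts, mode="chronological"):
    """Order posts by a rule the user chose, not one the platform hides."""
    if mode == "chronological":   # newest first -- the sane default
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)
    if mode == "most_liked":      # an explicitly opt-in alternative
        return sorted(posts, key=lambda p: p.likes, reverse=True)
    raise ValueError(f"unknown feed mode: {mode}")

posts = [Post("a", 100, 5), Post("b", 300, 1), Post("c", 200, 9)]
assert [p.author for p in build_feed(posts)] == ["b", "c", "a"]
assert [p.author for p in build_feed(posts, "most_liked")] == ["c", "a", "b"]
```

The point is not the sorting itself but who holds the knob: any mode the user did not select simply never runs.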
The negative consequences of not having _any_ "protection" in anti-social networks are clear across society.
These are analogous to the damage done in the US by the lack of gun control. Guns are now the leading cause of childhood death in the US. Combine guns and 4chan/Discord and you get the rash of idiots shooting up the populace and then themselves.
Why should my mandatory social and medical insurance taxes go fix issues created by such wrong choices? Should the society not cover them? E.g. smokers with lung cancer should pay out of pocket? Or should society try to minimize wrongdoing instead?
I take steps by not using my phone during work or after a certain time but it is extremely difficult and I still fall into the trap and watch some shorts for 2-3 hours at a time. I'm not stupid either, graduated with honors in CS and people consider me really smart. My brain just likes how it makes me feel.
Now, I have a fear what all this stuff will do to my child, if even I have trouble with it.
I hate almost everything about the modern commercialized web and really miss irc and forums.
It is obvious that even adults are suffering negative consequences from social media, and if it isn't obvious, take a look at any of the studies showing the negative impact on adults of social media usage.
Every single consumer is outmatched. The only winning choice is to not play at all.
If adults can protect themselves, they can protect the children they're responsible for, too.
Yes, stop letting kids stare at screens all day. Yes, you are a bad/lazy parent letting the firehose of the Internet pipe into their heads.
I used to blame the shitty influencers and internet at large for the selfish greedy brats, but it's their parents. I've met too many exceptions where the difference was just "We limit their screen time." Maybe they end up shitty in some other way, I dunno, but at least they're going to be more functional than my nephews, whose mother is constantly trying to figure out how they get sucked into racism and greed and what to do about it. Stop letting them get all their views from greedy racists online would be a start - can't do that, she'd "never have a moment's peace." Idiot human.
Side note, if you let your kids play roblox unsupervised, you're fucking up hard.
The other argument is that this makes kids socially isolated from their peers, because they all have and use these devices. If that truly is the case, there definitely is still a way to monitor your kids' device and internet consumption without giving them free rein. There are internet safety settings for parents on every device out there. Kids are perfectly capable of communicating with each other through a variety of mediums; they don't need TikTok (or whatever app is the trendy one of the decade).
The last argument, when you present the former argument, is that kids are clever and will find a way to get around controls. Yea, no shit, that's what kids do. Your role as a parent is to monitor that and teach/correct them.
Yes, I know there are plenty of tools to allow parents to restrict what sites their children visit, etc... but not all parents are tech savvy enough to be able to set this stuff up. Plus, you could still allow a child to access Youtube, for example, but then find they are getting unsavoury recommendations from the algorithm.
This made me think about the fact that the major platforms (Alphabet, Amazon, Apple, Meta, and Microsoft) gather enough data on their users that they almost certainly know roughly how old someone is, even if no age has been provided to them. They can use all the signals they have available to provide a score for how certain they are that an individual is, or is not, legally an adult.
(As an example, if you have a credit or debit card in your Google or Apple wallet then you are almost certainly an adult because it would be very difficult for a child to obtain a card and get it into a digital wallet due to the security procedures that are in place.)
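A toy version of that scoring idea might look like this. To be clear, the signal names and weights below are entirely invented for illustration; real platforms would combine far richer data than a handful of booleans.

```python
def adult_likelihood(signals: dict) -> int:
    """Combine boolean account signals into a rough 0-100 adult score."""
    weights = {
        "has_payment_card": 60,        # very hard for a child to obtain
        "account_age_over_5y": 20,     # long-lived accounts skew adult
        "weekday_office_hours_usage": 10,
        "verified_workplace_email": 10,
    }
    # Sum the weights of the signals that are present, capped at 100.
    return min(sum(w for sig, w in weights.items() if signals.get(sig)), 100)

# A user with a card in their digital wallet and an old account scores high.
assert adult_likelihood({"has_payment_card": True,
                         "account_age_over_5y": True}) == 80
assert adult_likelihood({}) == 0
```

The platform would then pick a threshold above which it treats the user as verified, trading off false positives against friction.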
Given that, if these companies get forced to discern whether users are adults or not in order to serve appropriate content then it seems a no brainer for them to provide free age verification as well.
My vision would be for the UK government to provide an anonymised age verification router service. When a website requires you to verify your age in order to access some particular content it could ask you which age verification service you wish to use. It then sends a request to the government "middleman" that includes only the URL of the verification service. The router forwards the request anonymously to the specified server (no ip address logs are stored). If you are logged in to the account already then it will immediately return true or false to verify that you are or are not an adult. If you are not logged in then you will be prompted to login to your account with the service and then it will return the answer. The government server will then return the answer to the original website.
That way, we can get free, anonymous verification.
I'm sure people will have issues with this idea, such as "do you trust the government server to not log details of your request instead of being anonymous?" - to which I do not have a definitive answer, but I feel like it is potentially a little better than having Google or Facebook know what sites I am visiting that need verification.
Anyone out there have any thoughts on this? I have only just had the idea pop into my head, so no serious thought has gone into it. There are probably issues that I have not thought about.
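To make the flow concrete, here is a rough sketch of the message sequence described above. Every class and function name is invented for illustration; none of this corresponds to a real API, and the sketch assumes the user is already logged in at the provider.

```python
class Provider:
    """e.g. a Google/Apple-style account service holding an age signal."""

    def __init__(self, adult_sessions):
        self._adult_sessions = adult_sessions

    def is_adult(self, session):
        # The real proposal would prompt the user to log in here if the
        # session is unknown; this sketch just checks a known session.
        return session in self._adult_sessions

class Router:
    """The government 'middleman': forwards requests, stores no IP logs."""

    def __init__(self, providers):
        self._providers = providers  # provider URL -> provider object

    def forward(self, provider_url, user_session):
        # Only the provider URL and an opaque session reach the provider;
        # the provider never learns which website originated the request.
        return self._providers[provider_url].is_adult(user_session)

def site_requests_verification(router, provider_url, user_session):
    """A website asks the router: is this user an adult? It learns only yes/no."""
    return router.forward(provider_url, user_session)

provider = Provider(adult_sessions={"sess-123"})
router = Router({"https://provider.example/verify": provider})
assert site_requests_verification(router, "https://provider.example/verify",
                                  "sess-123") is True
assert site_requests_verification(router, "https://provider.example/verify",
                                  "sess-999") is False
```

The split of knowledge is the key design choice: the website learns only a boolean, the provider learns only that *some* site asked, and the router (if it truly keeps no logs) is the only party that ever sees both ends of a request.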