I had not previously heard about this one, but they all sort of run together. Some vague stuff about protecting children which would undoubtedly result in 1) not doing much at all to protect children and 2) actively making the internet worse for everyone.
The fact that this is a bipartisan-introduced bill is even worse news.
"More than 75% of parents agree: Teens under 16 shouldn’t be able to download apps from app stores without parental permission. Instagram wants to work with Congress to pass federal legislation that gets it done."
Given who is paying for this, my assumption is that this is protectionism dressed up as child safety, though the language is so vague I have no idea what legislation is being referenced.
Regardless, if both the ACLU and EFF are against a bill, my immediate reaction is that it is probably a bad bill.
I'm actually surprised more big tech firms aren't in favor of these types of bills. Sure, enforcement will cost a lot, but it would also stifle any chance at future competition from smaller orgs.
Summary:
- Establishes a duty of care for covered platforms (basically any internet-connected platform (social media, video games, etc.) that isn't a common carrier, an email service, a private messaging service disconnected from any broader platform, a VoIP service, a VPN, or education-related).
- Covered platforms must take reasonable measures to mitigate harms to minors, with a list of specific harms including mental illness, addiction, physical violence, bullying, narcotics, and predatory or deceptive marketing tactics. Platforms do not need to hide such content if the minor is specifically searching for/requesting it, or if the content provides resources for the prevention of said harms.
- Covered platforms must provide certain safeguards to minors on their platform, to be used at the minor's (or guardian's) discretion. These include blocking, basic privacy settings, opting out of personalized recommendation systems (while still allowing content to be displayed chronologically) or limiting the types or categories of recommendations from such systems, hiding geolocation, and deleting the account and all associated data.
- The default settings for minors must be the most protective.
- Covered platforms need to provide parental tools, including the ability to change privacy settings, restrict purchases, view usage metrics, and restrict the amount of time the minor spends on the platform.
- Covered platforms need a system to receive reports about harms to minors on their platform, and must "substantively respond" to such reports within either 10 or 21 days, depending on whether the service has more or fewer than 10,000,000 monthly active users.
- Covered platforms are liable for advertising illegal products to minors (including products that are only illegal for minors to purchase, like alcohol and tobacco).
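The report-response deadline above can be sketched as a simple rule. Note the comment only says the window is 10 or 21 days depending on the 10M MAU threshold; which side gets the shorter window is my assumption (larger services responding faster):

```python
def response_deadline_days(monthly_active_users: int) -> int:
    """Days a covered platform has to "substantively respond" to a report
    of harm to a minor, per the summary above.

    Assumption (not stated in the summary): platforms at or above the
    10,000,000 MAU threshold get the shorter 10-day window, smaller
    platforms get 21 days.
    """
    LARGE_PLATFORM_MAU = 10_000_000
    return 10 if monthly_active_users >= LARGE_PLATFORM_MAU else 21
```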
Does it even have an age verification requirement? The only part I could find requires NIST to investigate the creation, implementation, effectiveness, and impact of age verification systems.
I think that part should be removed, but it also made me feel like the stopkosa site might be fearmongering.
(I'm staff that cowrote the bill.)
I don't even have to read it to know it'll contain some insane laws around encryption/privacy.
It's not so easy to prove harm from internet access.
There are studies that suggest social media can harm adolescent mental health[1] but there are also studies that suggest the magnitude is small enough that we don't really know.[2]
It needs more study (and it sounds like what we really need is more transparency from big tech so that it can be better studied).
For an actual argument against KOSA I would say it doesn't at first glance look like it will target issues that are likely more important: peer pressure, bullying, suicide promotion/romanticization, etc on social media.
Edit: Having spent more time reading the KOSA bill (thankfully linked by another commenter) it does actually appear to try to target the biggest known issues to help minors. Some of the bill seems reasonable.
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10476631/
[2] https://journals.sagepub.com/doi/10.1177/21677026231207791#:...
To be fair, if you look at statistics around Gen Z, you can see that there definitely are negative consequences of internet access (along with amazing benefits, of course).
I don't support this measure or measures like it, but some action should probably be taken considering the skyrocketing rates of depression.
I haven't seen any such data. It's hard to separate generation effects and look purely at the effects of internet access in any sort of valid way.
You're probably thinking of studies around social media use. It isn't clear to me that internet filtering will combat any of what makes social media use such a mental health risk for teens.
If we, as a society, were to accept the reality that paying people for their endorsement is only ever used to achieve the effect of fraud without committing it, we could just eliminate paid endorsement.
Extraordinary technological progress has been made in laundering fraud and disinformation, but that technology costs money and therefore requires customers to operate. Eliminate the customers and the sickness they create will go away.
Maybe this isn’t the internet we deserve?
> It’s no surprise that anti-rights zealots are excited about KOSA: it would let them shut down websites that cover topics like race, gender, and sexuality.
> Second, KOSA would ramp up the online surveillance of all internet users by expanding the use of age verification and parental monitoring tools. Not only are these tools needlessly invasive, they’re a massive safety risk for young people who could be trying to escape domestic violence and abuse.
I don't think it's much of a stretch to assume these kinds of laws will be used to censor topics like "race, gender, and sexuality". There is a ton of precedent in recent years of, e.g., books on these topics being banned from public libraries (not just school libraries) under the guise of protecting children. It seems like a reasonable inference that the same topics would be targeted in online content if the laws allowed it.
Abolishing it was plenty bad, and you look silly making up straw men to whitewash it.
It's no surprise that big tech companies don't want increased liability.
See https://thetrichordist.com/2016/05/12/list-of-corporations-a....