Do carmakers “capitalize on vulnerability” when they advertise pickup trucks as big tough vehicles for tough, outdoorsy men?
Do providers of health insurance for pets “capitalize on vulnerability” when they say you need to buy their product if you love your pet?
At some point people need to be responsible for their own decisions. And I can’t get that worked up about Meta’s free product.
It's been studied under many different names: Knowledge Gap Theory, information asymmetry, bounded rationality, etc. The more information adults have to digest, the more bad decisions and exploitation are guaranteed.
If you think those legal protections aren’t fit for purpose (they were created long before social media even existed), then you should take that up with your legislators. I personally wouldn’t trust them to approach that task without implementing something horribly tyrannical, like requiring a full KYC process to create a social media account. So I’d advise that you be careful what you ask for in that respect.
Do tv shows target teens too much with high energy music and dancing?
Does media target them too closely with intentionally addictive music riffs, from Taylor Swift to Billie Eilish?
Will we shut down all these video games that clearly target kids with bright colors and, let's call it what it is, "aestheticized violence"?
We need to be careful about how we go about trying to protect children in this regard.
we as a society are going to have to seriously engage the fact that we are now fully capable of manufacturing addiction, and at the moment, do so, both in adults and in children.
"Their own decisions" is not a stable concept. Setting aside esoteric philosophy of mind, you need look no further than your own relationship to your phone—tested out for many of us at the Thanksgiving table last week, as duly noted by Chris Ware's cover of last week's New Yorker magazine—to confirm this.
The mechanisms of surveillance capitalism and a foundation of decades of consumer psychology (etc. ad nauseam) have quite literally left us adrift in a world of stochastic mind control. At that same table many of us encountered the inexplicable world views of relatives whose propaganda bubbles did not intersect our own.
And we all have such bubbles, not least as a result of the cheerful professionalism of many who browse here.
Your decisions, just like teenagers' decisions, are not "your own" in the sense someone might have meant c. 1923. And before one cries "it has ever been thus," I say: no, it absolutely has not. Today's technologies for behavioral steering are as unlike what people contended with in advertising a hundred years ago as our logistics and energy industries, among others, are unlike theirs.
Until we take this on, head on, as a society, the problem will just get worse.
You're asking this of a group that largely can't vote or sign contracts, and which America doesn't trust to drink.
I can get worked up about Meta targeting children in ways they don't have the experience or knowledge to recognize, let alone avoid. Children should be protected from bad actors like Meta, and let me be clear: any company taking advantage of kids is a bad actor.
The entire toy unboxing industry is built around advertising to children.
Yes.
You can be expected to make responsible decisions as an adult. That doesn't mean there aren't bad actors trying to take advantage of you, or that their behaviour isn't borderline unethical.
This is like saying “everything in moderation” in a discussion about nutrition. No shit. We’re trying to find that delineation.
That point should come when they are no longer children. Targeting children to produce perfect little ecosystem consumers is... kinda evil.
There is no "Free" product. You are paying with freedom, you are paying with attention, you are paying with privacy. It's not "free", it's extracting value from you.
It doesn't cost money, yes, but neither does working, yet we assume that transfer of value is such that it ought to be paid for. It's not Meta offering a "free" product. It's their users. Their users give Meta their data for "free", which then Meta uses for profit.
We already do that. When they turn 18, we expect people to be responsible for their own decisions.
The advertising industry’s a rabid dog the size of Godzilla and should be put down, whether it’s targeting kids or adults.
Marketing gets a lot of freedom because of the assumption that they only take over a small part of the information a person has access to. To the extent that this assumption becomes incorrect, those actions become attacks.
Examples: drinking disclaimers ("drink responsibly"). Cigarette disclaimers and off-putting mandated packet visuals. The traffic light system in the UK (which displays a colour-coded breakdown warning of unhealthy food macros on the front of all ready meals). Alcoholic beverages by law having to specify their alcohol percentages. Foods by law having to specify their nutritional content and ingredients.
All of these regulations have been introduced to ensure customers are not blind to unhealthy choices (e.g., the traffic light system warning against high sugar content designed to make cheap addictive food). While not always effective, I believe that on balance these regulations make society a better place to live in. Similarly one could envision mandated social media disclaimers and warnings, and to regulate this way would be entirely within the wider norm, rather than something unusual.
Previously, children's exposure to marketing and propaganda was mostly confined to their entertainment hours, during which they watched television or read magazines. There was at least some hope for moderation. However, "apps" have blurred these boundaries, as the same devices used for education and social interaction are also channels for persistent advertising and messaging, making it harder to limit exposure to just "entertainment" time.
Recent reports from teachers indicate that many children are intellectually behind their peers. A concerning trend is that these children struggle to hold conversations, a problem attributed to their parents' phone/social network addiction. Rather than engaging and raising their children through conversation and interaction, these parents often resort to pacifying them with tablets or phones.
citation needed? or are we just assuming because, well, there's education and social information and apps available on them?
This isn't Wikipedia. It's a casual internet forum, and you don't need someone to come armed with mountains of proof for casual (and obvious) statements.
BTW: One of my kids learned to read by playing a Cookie Monster word game during the pandemic. We've had enough "edutainment" software for a few decades that you don't need to ask for proof in a casual atmosphere.
Basically we need more things like Facebook's push a few years ago to show more personal updates from close friends and less mass-shared political posts from organizations.
For teen girls - the apps are designed to scare them about being socially excluded. For teen boys - the apps are designed to fill their need to master skills.
The issue that the government has to deal with with app addictions is self harm attempts by girls (e.g. emergency room visits) and underperformance of boys in the real world (e.g. low college enrollment).
If you are trying to make an addictive app, this is a good reference to understand the science: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Produc...
BJ Fogg is a good reference too: https://www.bjfogg.com
(Disclaimer: it talks about my work)
Any female magazine ever.
Additionally, saying that children and adults should be wholly responsible for this is like saying the Chinese and not the British should be responsible for their opium addiction (see the Opium Wars), and that the homeless in San Francisco should be responsible for their fentanyl addiction. They can always just say no, right?
I worry that if nothing is done, this will only get worse; addiction of one sort or another will become the norm, and you can just look at the history of the Opium Wars to see where that leads.
This is why I find it funny that FAANG people call themselves software engineers. In the real world, an engineer is wholly responsible for the projects they bring into the world. Imagine a bridge collapses and someone dies; then in court the family is told that the victim was responsible for researching bridge designs before using it. These social media companies are just run by money-hungry a-holes.
Saying that addiction to a website isn't possible is unfounded.
People get addicted to online gambling. That's just "a website on a screen." It's clearly possible and it clearly happens.
Isn't this just how all big tech companies operate as a normal business practice? Certainly Youtube is no better when it comes to targeted content and advertisements to children to their detriment.
My main point is that I don't think it makes any difference whether Meta has some internal document proving that they specifically target children with these practices. The problem is so much bigger than a single policy or company, and legislatures need to figure out a better way to address the overarching problems. I don't have much faith that these one-off lawsuits will make that much of an impact given that they almost always lead to some fine or settlement that is an acceptable business loss for the company.
I'm all for Meta being decimated by a thousand cuts in the form of lawsuits from various levels of government, but at best it would just be replaced with something else unless more regulation exists at the top levels (US / EU / etc).
I think a core class that should be taught is how to safely deserialize sensory input so as to avoid causing RCEs. Or, basically, "patching" these known vulnerabilities.
With broadcast media like TV, I can see what the programming is, and I can watch the same ads that every other house is getting broadcast to know what's being shown to kids (and research companies do this). Similarly for retail media, I can go to a store and see what a retailer is doing.
For Meta, with AI newsfeeds and targeted ads, it's impossible to know exactly what any one person's experience is. I don't know the veracity of this specific case, but at a minimum I think there should be legislation that forces these companies to be auditable in some way...
Above all else, since going public, Meta is in the business of making money. It's not illegal to target users' vulnerabilities in order to get them to spend more time or money on the platform. It's unethical as hell, but it's business 101 - the shareholders would revolt if Zuck came out and said "here's this opportunity to make you all a ton of money, but we're placing our personal ethics above doing this, so we're not". He'd get sued for breach of fiduciary duty.
Now, are Meta's product strategies unethical (or questionably ethical), harmful to society, and setting bad precedent? Yeah, I'd agree with that. But the market and shareholders like money.
Perhaps it should be illegal to target children in such ways? I'm tired of this argument that companies should be able to do whatever they wish in the name of profit; they need to be reined in with strong regulations.
The result is that we have a lot of amoral institutions playing a key role in our society.
Zuckerberg has super-voting shares that give him control over Facebook [1].
[1] https://www.reuters.com/breakingviews/zuckerberg-motivates-s...
Of course, we know how that worked out. What is galling is that Meta absolutely knows it is creating a bunch of cocaine addicted children.
Bringing up wrong historic points about "evil capitalists" doesn't really help your case against Meta.
A world with a social media company that's a B-corp would be a nice one.
This case is basically projecting everyone's misplaced hate of social media without a proper controlled experiment on its benefits/harm to society.
You can't do controlled experiments on humans, and hence the states have no case except overreach. If they really want to cater to their constituents, then pass specific laws.
This post has a list of the some of the better studies and gives a good synthesis of the results:
https://jonathanhaidt.substack.com/p/sapien-smartphone-repor...
a) Broadband Internet
b) Transgenderism
c) iMessage
So, why Meta?
I don't think there are any smoking-gun causal studies, but the correlational evidence is very strong.
Do you think the rise in this premise of "rampant mental health issues" reflects an increase in actual expression of the phenotypes in the population? Or was it from an increase in diagnosis and treatment of mental health issues?
It would be government overreach to force such. Since I have no obligation to your existence itself, who cares what you have to say, or your philosophy? You’re just some pointless meat suit I have no responsibility to.
There you go; you got exactly the world you project you want.
They are there to basically provide security, insurance against calamities, and proper enforcement of laws based on the constitution.
Well-intentioned but still moronic socialist/communist projects fail for a reason. They fail to understand scale, human psychology, and the basics of economics.
https://ia800508.us.archive.org/12/items/gov.uscourts.cand.4...
Employee names are still redacted. Given Zuckerberg's views on privacy, one wonders why they should remain "anonymous".
https://nypost.com/2023/11/25/metro/jewish-teacher-hides-in-...
How do you regulate legal but unethical? You can't. So let's make it illegal. But how?
Maximum notifications per day? Deep introspection of the actual content? Good and bad influencers? Curfew? It's impossible to codify this into law, unless you're China.
Not saying this to start a flame thing, just advocating a sense of perspective for the sake of 8,000 murdered children, their mothers, and our shared humanity.
With Twitter, even if I pay, I still get the same number of ads.
I want to customize what is shown in my feed.
I suppose that the broader concern is over precisely what duties a company has to its customers. They obviously have the duty to be truthful when making offers, but every customer relationship will have an adversarial component where each party benefits at the other's expense (or at the expense of third parties). In cases like a bar serving alcohol to customers, there's usually some responsibility to prevent patrons from getting extremely intoxicated and getting in a car. But that case involves a clear signal that someone is dangerous. Facebook doesn't know if someone's grades are suffering or if they're having mental health issues. It doesn't know if it should tell the user to "touch grass".