The verdicts against social media companies in California and New Mexico over the past two days reveal a disturbing trend: Americans are increasingly willing to view speech as a “product,” subject to regulation in the same way physical substances like alcohol or tobacco are.
Whenever these tech companies come under fire for privacy abuses, dominant market share, mental health decline, or algorithmic polarization (and let's be clear: those things are definitely happening), the debate focuses only on how we can regulate them, sue them, have the government break them up into smaller companies, etc.
It's super weird that nobody ever talks about how we can just choose not to use their services.
I get it, some of those services are pretty great and you'll definitely miss them after you've moved on or migrated to services that aren't as good, but this "learned helplessness" phenomenon has got to stop. It's our responsibility to solve our own problems, not to wait for someone else to solve them for us.
I think the entire point of the debate is that social media, as designed, is highly addictive, which makes "just choosing" not to use it a lot more difficult for a lot of people than you seem to think. Not unlike smoking or alcohol. I don't think it's learned helplessness so much as people feeling like they don't have many other options to curb the addictive nature of social media and its hooks getting into younger and younger people.
By this definition of addiction, anything that is pleasurable enough to make you want to keep doing it is addictive.
That's a pretty obtuse view of what I've said. That's not at all what's being said, at least not what I'm saying. Addiction isn't simply anything that is pleasurable enough to make you want to keep using it. In fact, it's quite the opposite. Addiction is when you compulsively keep going back to a thing: social media, cigarettes, alcohol, opioids, porn, etc. Many if not most addicts end up at a place where they don't even enjoy or find pleasure in the substance or activity they're addicted to. Yet they can't help but gravitate to it, because their body builds a psychological and even biological dependency on it. This behavior usually leads to depression, anxiety, and feelings of helplessness and/or hopelessness. Something that is simply pleasurable enough to make you want to do it again, casually, is not addictive. By that oversimplified definition, going to the movies and having popcorn would be addictive. It's not. Going to the movies doesn't result in compulsive behavior that disrupts people's lives and relationships, the way social media can.
Addiction is indeed one of several real issues with big tech, and each issue requires a specific discussion and follow-up.
For addiction specifically, susceptibility is highly individual: some people are much more vulnerable than others, and there's the age dimension you brought up. The good news is the same generation that saw the decline of cigarettes is still around for this social media debate, so we can reapply some of those lessons learned.
Sin taxes and surgeon general's warnings may(!) have helped with some progress, but the biggest reason smoking has declined is we all slowly agreed it wasn't cool anymore. (E.g., in movies the culture shifted from the hero being the smoker to the villain being the smoker.) For children and highly susceptible people, more structural but still bottom-up initiatives can help, such as education, marketing campaigns, and voluntary self-exclusion programs (like those used for gambling).
Social media is only twenty years old, and its addictive nature is already heavily discussed in public, so, yes, I have faith that we can figure this out on our own.
this isn’t really true imo. there are massive and extremely robust studies that have shown that taxing the shit out of cigarettes caused (directly) a decline in use. this is true across countries and social contexts
Fair push-back -- my "may(!)" is too reductive. You're right they do have some direct effect for some groups. It's a distribution curve -- most effective for groups that haven't started yet or who are light users, and least effective (to zero effect) for established or heavy users. (It's also a very regressive tax among those groups, but that's a separate debate.) It's a useful tool in the toolbox but not a silver bullet, and not the primary driver.
Instagram is not an idea. It is not speech. It is a product designed to addict and manipulate. I hope they get sued into oblivion.
Just wait for some authoritarian to call journalism from the New York Times "a product designed to manipulate" as a justification for censorship. Hell, you could make a case for echo-chamber journalism being addictive too.
This simply isn't worth weakening the principle.
THANK YOU! FIRE represents some valuable free speech principles, but they echo a corporate-friendly narrative that confuses product design with the town square. Those are not the same thing, and treating them as if they were only muddies the conversation around accountability.
Thank you for pointing out the role parents have here. You are the ones who choose what your child sees. You should be concerned, not just about content but also about the neurological implications of wiring up your child's growing brain with excessive screen time. The research is out there. From plopping your infant in front of the TV to putting a smartphone in their hands so you can experience peace and quiet, you are making the choice to change them and in some cases hurt them.

Should you be screen free? That's not for any government to say. What you should do is make sure your children have developed real-life skills at each stage before introducing technology, and be honest with them about your choices. For example, even the AAP suggests NO SCREENS at all before age 3, because brain development at that age needs large quantities of sensory experiences to form properly, and screen time takes away from that. Perhaps you don't let your kid use the computer before they can kick a ball, climb a tree, and jump rope. Perhaps you share a family computer in a public place until they have established good study skills.

In my house, my sons didn't have their own computer, gaming systems, or smartphones until they 1) had a paid job outside of the house and 2) got their driver's license. Why? Because the two ways to lose a son online are gaming and porn, and the two ways for a boy to become independent are work and a driver's license. I gave them the research on both porn and gaming and their effects on neurological development, I told them why work and driving a car were key real-world skills, and then said I'd buy any system and phone they wanted once they met those conditions. I also pay their phone bills until they're 26.
Lest anyone say I held them back, my younger son is now a video game designer for a AAA company in LA and my eldest is getting his PhD in Materials Science Engineering. In my experience, by keeping them off tech until their brains were more developed, I allowed them to become creators rather than simply consumers. Moreover, I taught them that I valued free speech enough to do the work within my own home as a parent, rather than allow the government to do the hard work for me.
I think it's obviously a nuanced issue that does need to be carefully thought about and decided on to prevent any sweeping free speech problems. But I think it's a bad argument to say that social media, as designed, doesn't have physical effects in a way similar to cigarettes or alcohol.
It's pretty widely understood that social media and algorithmically suggested content (like YouTube's) are highly addictive, especially to kids. And the younger those kids begin consuming such media, the worse their mental health and psychological outcomes are as they age. Not unlike a teenager or preteen who starts smoking. Sure, the physical effects of poor mental health may not be as immediately and visibly present as those of smoking or drinking or other drug use, but they can be physical nonetheless. Anxiety and depression can cause a whole host of physical ailments like insomnia, weight gain, high blood pressure, endocrine system dysfunction, etc., and there's no serious argument that heavy social media use doesn't cause some form of anxiety.
I think there is more than enough evidence that there need to be age-based safeguards around social media, and at the very least some form of campaign to educate the public on the harms of early social media and smart-device (tablet, smartphone, etc.) use in kids. Not unlike the ad campaign that helped put a serious dent in the generational trap of smoking.
Saying the algorithm is exactly like how a newspaper is formatted is, I think, a bogus argument. A newspaper front page is designed to catch your attention for the handful of minutes it takes to read whichever articles were published. It's not designed to hold your attention all day long, every day, forever. That's what an algorithm does. The parallels to the way Big Tobacco operated are strikingly similar.

The unfortunate truth is that speech is in the middle of the "product." The product itself isn't the speech; it's the delivery system, the algorithm. It's obviously designed to hold the user's attention for as long as possible, to keep them scrolling, to keep them seeing the ads that advertisers pay the social media companies for. And it's provably addictive.

I believe putting some form of safeguard around the algorithm doesn't harm free speech. People and companies are still free to post on the platforms, but they will hopefully not be inundated with content or sucked into an all-day doomscroll because everything they willingly or accidentally click on, or even speak out loud near their device (when the mic is turned on in the app), triggers the algorithm to shovel similar content at them immediately and in a manner that makes it difficult to stop engaging.
I find it difficult to see the general unhappiness and division within our society, the terrible physical health that anxiety and unhappiness are causing, the breaking up of families and friendships (while many sociologists and psychologists recognize social media, and the addictive and divisive nature of its design, as a major influence on the current state of affairs) and come away with a defense of the companies that have caused so much social harm. I really believe it's on a similar (not equal, but similar) level to defending Big Tobacco or any other company knowingly addicting and poisoning a population for immense profit.
I agree and would add that any suggestion that an average parent has the capacity to combat social media's onslaught is just bonkers. We are way past the days of finding adult magazines under the bed and we need to start acting like it.
But aren’t the social media companies trying to have their cake and eat it? If the algorithm is speech (I agree, it is) it shouldn’t get sec. 230 protection. We need some way to deal with the obvious harm social media is causing our society. Products liability litigation isn’t the best way to do that for the reasons you state, but it’s what we get when the richest companies in the world are given liability shields for unprotected speech amplified by algorithms.
The algorithm doesn't get section 230 protection -- that is covered by IP law. The company gets section 230 protection by hosting public speech on its platform. We want public speech on these platforms.
Going after section 230 protection because the algorithm is problematic is like going after the speed limit because a car manufacturer didn't include seat belts. They're entirely separate things.
Sorry guys, there's a lot of nuance you've left out here. Can you tell your readers what Operation MetaPhile was and what it showed in the New Mexico case?
Isn’t this about platform design features not ideas or speech?
In this very piece I explain why that is an artificial distinction
Here's why it is not an artificial distinction: The algorithm observes user behavior, including unconscious behavior, and without disclosure, without concern for the wellbeing of the user, chooses content in a way that the user does not understand and cannot control. This is a very different situation from passively hosting or passing along content.
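To make that distinction concrete, the loop described above (observe behavior, then select content to match it) can be sketched as a toy engagement ranker. This is a purely illustrative sketch; the function names, data shapes, and weighting are invented for the example and are not any platform's actual system:

```python
from collections import Counter

def rank_feed(posts, watch_history):
    """Toy engagement ranker: score each post by how long the user has
    already dwelled on its topic, so the feed keeps serving more of the
    same. Illustrative only -- not any real platform's algorithm."""
    # Tally observed behavior: seconds spent per topic, never asked for
    # explicitly and never disclosed back to the user.
    dwell = Counter()
    for topic, seconds in watch_history:
        dwell[topic] += seconds
    # Rank by revealed (not stated) interest; unseen topics score 0.
    return sorted(posts, key=lambda p: dwell[p["topic"]], reverse=True)

feed = rank_feed(
    posts=[{"id": 1, "topic": "news"}, {"id": 2, "topic": "outrage"}],
    watch_history=[("outrage", 300), ("news", 20)],
)
# The topic the user lingered on longest surfaces first.
```

Even this ten-line toy shows the asymmetry: the ranking input is behavior the user may not know was recorded, which is the difference from passive hosting that the comment above is pointing at.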
Is FIRE not funded indirectly by sources who wish to protect interests in the technology sector? If so, shame on you bro.
Actually, you don’t explain why it’s an artificial distinction. You just claim that it is. Then you personified the First Amendment, which is also not an argument, nor is it evidence or support for your claim. This article is really just a summation of your personal opinions with no substance to back up any of them, making it not particularly persuasive.
You're conflating users' posts with the design of the host product, and calling both "free speech."
The design is a grey zone for me because these platforms are engineered to be addictive, AND the companies suppressed internal research showing the harm. Users didn't consent to being subjects of a behavioral engineering experiment.
This seems closer to an informed consent issue.
While a user's posts aren't the equivalent of a cigarette the platform IS a spiked drink.
….and a 12-year-old is not a 22-year-old. Parents are ultimately responsible, yes; and they are begging for help. Get the kids off social media now!
Isn't this essentially just Hogan v. Gawker all over again?
This is very elitist. Are you saying that speech is freer after unlimited scroll?
How does regulating these companies curtail speech?
Facebook serves ads for discount stamps.