With so much of our national conversation taking place online, there’s an almost reflexive tendency to search for online causes — and online solutions — when tragedy strikes in the physical world.
I think many efforts like this legislation assume that people receiving information on social media don’t have agency. I believe that citizens who value liberty should be responsible for the information and speech they ingest. I have a right to speak and to hear; therefore, I also have the right to choose when not to speak and not to hear. Ultimately, it comes down to my agency and my choice. People need to curate the information they take in.
Your last sentence is the main crux of the debate. I agree. At this point we should all know how the algorithm works: if I “like,” share, or comment on a post, I will very likely get similar content in my feed in the future. So we either interact with the platform responsibly (not viewing, sharing, or commenting on literally everything that comes across our feeds) or we don’t interact with it at all. It’s difficult to do, no doubt, but it’s likely the only way to sustain First Amendment protection while also guarding ourselves, and society as a whole, against the dangers social media can present.
I agree with the main premise of the article, against the bill being presented by Sens. Kelly and Curtis. But I think the writer misses a crucial point. Publishing, as in the days of the printed or broadcast word, was and is intended for a mass audience. The First Amendment wins in the cases presented in the article are all valid when it comes to video game and movie creators, music artists, and newspaper publishers.
However, a main difference here is that the algorithms used by social media companies and online platforms, primarily Meta and YouTube I would say, are designed to hyper-focus on an individual. A newspaper or news broadcast is typically designed for a mass audience, as is a video game, movie, or song. Those mediums tend to have less of a violently radicalizing effect on readers, watchers, listeners, and players because the people consuming the content aren’t usually inundated with it.
Social media is different: a user opens an app and content floods their newsfeed. One click on a post can result in a barrage of similar content. So if I click on or watch a controversial post, comment on it, like it, or share it, the algorithm zeroes in on that interaction and begins to flood my individual feed with such content, and potentially, if not likely, with more controversial or radical content. It can, and arguably has been shown to, lead people into a more radicalized and potentially violent state, more so than traditional means of consuming information.
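To illustrate the mechanism I’m describing, here is a minimal Python sketch. The topics and boost multipliers are entirely made up for illustration; this is a sketch of the general engagement-weighting idea, not any platform’s actual code:

```python
# Toy sketch only: hypothetical topics and multipliers, not any platform's real code.
from collections import defaultdict

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "cooking"},
    {"id": 4, "topic": "politics"},
    {"id": 5, "topic": "sports"},
]

# Every topic starts with a neutral weight of 1.0.
affinity = defaultdict(lambda: 1.0)

def record_engagement(post, kind):
    # One signal (a like, comment, or share) raises that topic's weight.
    boost = {"like": 1.2, "comment": 1.5, "share": 2.0}[kind]
    affinity[post["topic"]] *= boost

def rank_feed(posts):
    # Higher-affinity topics float to the top of the next feed load.
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

record_engagement(posts[1], "share")           # one share of one politics post...
print([p["topic"] for p in rank_feed(posts)])
# -> ['politics', 'politics', 'sports', 'sports', 'cooking']
```

The specific weights don’t matter; the point is that a single engagement signal is enough to reorder everything the user sees next.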
Studies have repeatedly linked social media consumption, and general overuse of one’s cellphone or tablet, to poor mental health outcomes for an outsize portion of the population. Part of the reason is the extreme targeting of the algorithms to individualize content based on what people interact with, or even just happen to come across. That targeting is also tied to engagement, i.e., commenting and sharing, which happens far more with things that outrage us than with what would be considered “good news.” The effects of such algorithms being the primary engine of social media propagation and profit have been detrimental to society, not to mention the arguably inadvertent self-propagandizing effect they have had on people on both sides of the social and political debate.
I think if there is any merit to the bill being presented, and I don’t think there currently is as far as Section 230 is concerned, targeting the inherent function of the algorithm isn’t an unworthy aim. These senators may be misguided in how they’re trying to accomplish that, but the algorithms are actively working against a cohesive society. The free spread of information and healthy debate are vital to a free society, and the algorithms are clearly not fostering that vitality. We are more divided and more anxious than at almost any other time in our country’s history, and we are beginning to approach, if we haven’t already, conversations and debates with shockingly opposed versions of basic reality. I would argue that allowing tech companies to continue benefiting from such algorithms and software under the guise of the First Amendment would be to allow the continued disintegration of our society. There IS a conversation to be had about regulating such technology, and one that I think avoids encroaching on the First Amendment.
I can't see any reason why speaking directly to a person would be any less expressive, or receive any less First Amendment protection, than speaking to a mass audience.
But that’s the thing. No one is actually speaking. The algorithm, basically unconsciously, because it’s a program with a directive, floods a person’s newsfeed across their social media profiles with various content for the sole purpose of engagement. A person engages with one post, and the algorithm, which is not a person, works aggressively to keep that person engaging with as many posts as possible. It then potentially, if not likely, subjects that person to more and more radical content that may or may not be based in fact. If the person doesn’t understand how the algorithm works or what its purpose is, they can become overwhelmed by the constant negative bias of the content they come across all day, every day.

My argument is that individuals are subjected to hyper-focused, curated content, not by a publisher but by a program, for the sole purpose of engagement to drive ad revenue, which can and does lead many of those people to become more anxious, antisocial, and radicalized. I think there’s a big difference between a newspaper company, a broadcast news channel, or an individual speaking at a town hall or in a pub, and an algorithm designed to take published content and flood someone’s feed with it. I don’t think the content itself should be regulated, but rather how the algorithm works to overwhelm an individual with that content.

I know it can be overwhelming and can flood a newsfeed because I was subjected to it. One click on a link in a post, or a comment on a post, or a share of a link, and my feed would be flooded with twice as many posts. The result was me becoming politically demoralized and angry. It took understanding that I was causing my own demoralization because of the purposeful design of the algorithm. I hate to say it, but I think there’s an argument for regulation of the design of the algorithm. I don’t think that necessarily causes a First Amendment issue.
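To put rough numbers on that “twice as many posts” experience, here is a crude back-of-the-envelope model. The doubling factor is purely my own assumption drawn from that anecdotal experience, not a measured platform figure, but it shows how quickly that kind of reinforcement saturates a feed:

```python
# Back-of-the-envelope model. The 2x-per-session growth is an assumption
# based only on my own anecdotal experience, not a measured platform figure.
share = 0.05  # the topic starts as 5% of the feed
for session in range(1, 7):
    share = min(1.0, share * 2)  # each engaged session roughly doubles exposure
    print(f"session {session}: {share:.0%} of the feed")
# session 1: 10% ... session 4: 80%, session 5 onward: 100%, a de facto echo chamber
```

Even if the real growth rate is half that, the destination is the same; it just takes a few more sessions.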
But someone is speaking, as you acknowledge by saying that an algorithm is "a program with a directive." Algorithms don't just spring up unprompted; they are created by humans to do certain things, and in this case, to express certain messages based on the desires, inputs, and rules of their creators. One of those scrolling LED signs that displays various messages isn't less speech because somebody isn't typing in each letter every time. There's no meaningful and cohesive way to separate an algorithm from any other speech for First Amendment purposes.
Even if a platform's "sole purpose" is engagement (which I think is dubious at best), that doesn't change anything from a First Amendment perspective, just like a profit motive doesn't make speech "commercial speech." From M.P. v. Meta:
For instance, newspaper editors choose what articles merit inclusion on their front page and what opinion pieces to place opposite the editorial page. These decisions, like Facebook's decision to recommend certain third-party content to specific users, have as a goal increasing consumer engagement. See, e.g., Above the Fold, Cambridge Business English Dictionary (2011) (explaining that newspaper editors place the stories they think "will sell the newspaper... above the fold"). But a newspaper company does not cease to be a publisher simply because it prioritizes engagement in sorting its content.
https://scholar.google.com/scholar_case?case=4436118071448728280#p526

You may raise a normative argument for regulation (which I wouldn't agree with, but reasonable minds can disagree on that), but the First Amendment is absolutely implicated (and most likely dispositive).
I hear you. I suppose that’s a valid explanation of an algorithm being, at the very least, some form of speech. I’m not entirely sure I fully agree with it, though. I think defining speech in this case matters. An LED sign is simply a medium for speech; it’s not speech itself. The messages it displays are the speech, which, yes, were prompted by someone who programmed the sign. An algorithm, I suppose, isn’t entirely different from that: a medium for speech. I think that’s the cohesive separation right there.
The algorithm is to social media as the LED sign is to its scrolling message, or as ink and paper are to a periodical. It’s a vehicle to convey a message. In the algorithm’s case, its directives are, primarily, to shovel individualized content (i.e., speech) at a user based on how they interacted with previous content on their feed. I suppose that’s not entirely unlike a newspaper editor organizing and choosing what gets printed on which page.
But my contention is that the hyper-focused, individualized content being curated by the algorithm is proving to have detrimental societal effects. Consider an LED sign flashing various messages at passing drivers: when that medium for speech is shown to endanger public safety, say, drivers being distracted at night by the brightness and rapidity of its flashing messages, municipalities have a reasonable case to regulate how the sign is displayed and used. They have very limited reason to regulate the messages themselves. I think a similar case can be made for how algorithms are programmed to curate individualized content. Given the correlations between the rise of algorithm-based social media and seriously negative effects on social cohesion, mental health and personality disorders, and political radicalization, I think this calls for a discussion of how social media algorithms are used and programmed.
Again, I’m not calling for the content itself on social media to be censored or erased. I think there needs to be a discussion about how, and how much, that content is delivered to individual users. Because as it stands right now, I’d have to say most social media users are not actually consenting to what comes across their feeds.
If a person subscribes to a newspaper and enjoys a particular article from that single paper, they don’t then have 10 other newspapers with very similar and more extreme versions of the article show up on their front step the next morning. And if that person, wanting to be informed by what they read, did try to read all of those articles, or even just their headlines, they would over time become more and more disgruntled, if not radicalized, because the content carries an inherent negative bias designed to get them to open the paper in the first place. That’s what social media is currently designed to do.
I don’t follow FIRE out of any disbelief in free speech or the value of the First Amendment. On the contrary, I think the mission they’re pursuing, in service of free speech and the First Amendment, is as important as it can possibly get, and I’m glad you’ve taken up such a mantle. But I think in this case there are some real-life concerns with allowing tech companies to go unchecked in how they deliver information to the public. I think it’s worth defining what is and isn’t actually speech, and who is saying what. Because it’s obvious our society is beginning to splinter: the basic understandings around what we take in to inform ourselves have become terribly muddied, and it’s leading more and more people to silo themselves in echo chambers while losing the ability to discuss and debate different ideas.