Disagree with FIRE on this one... Why should someone else have the right to generate an extremely realistic version of me saying things I do not want to be associated with? Seems to violate my right of association. Also major privacy concerns.
Totally agree. This issue also seriously ramps up what has, until now, been a dud of a freakout over misinformation. If an actor, hostile or otherwise, can create an indistinguishable replica of, say, Alexandria Ocasio-Cortez, or any other politician or public figure, outright advocating for the elimination of entire races of people, or saying any other outright false thing, how would that not be the clearest example of actual mis- and disinformation? Those are real threats in the information and free speech landscape, since something like that could very well actually misinform people about what a public figure said. It could also revise factual history if we begin attributing fabricated quotes to historical figures who lived during the age of recorded and televised speech (roughly the late 19th century onward). This is a very serious issue. I’m with FIRE on most counts of protecting free speech and the press, but this issue has much more nuance, and there are real consequences to taking a less nuanced approach to the concerns being raised. And to your point, a person’s name, image, likeness, and voice/speech are their own. Allowing anyone, let alone a hostile actor, to appropriate my NIL and voice to speak things I would never condone, support, or associate with would be to allow the disintegration of individuality and autonomy. Even benign instances of attributing acceptable historical quotes to someone who never said such things should at the very least require that person and/or their next of kin to approve of such words being artificially put into their mouth.
Agree completely. Think of all the ways governments and institutions have tried to silence speech because of "misinformation"; allowing something like this to continue only strengthens their argument that ALL speech should be censored. Down the road, allowing deepfakes to continue will only make the oppression of speech greater.
I agree we've seen very problematic attempts to silence speech over misinformation, but wouldn't laws outlawing deepfakes have all the *exact* same foot-guns? For example, if we give government the authority to police deepfakes, they will abuse that authority to silence legitimate speech by claiming one deepfake is valid while another is not. Exactly the way we saw them do with misinformation.
The article does respond to the point you opened with: if someone deepfakes you, we do have existing laws that deal with fraud and defamation.
Misinformation and deepfakes are a serious problem, but we're better off figuring out how to deal with that problem without hoping the government will get it right on our behalf.
Your example identifies the current problem. There need to be laws outlining how deepfakes should be regulated; otherwise you're just at the whim of the enforcing authority, which, right now, can basically declare one thing legitimate and another not. Additionally, this is a much bigger issue than misinformation was. With ordinary misinformation, you can at least trace the information and decide whether you agree or disagree with an assessment of its validity. Deepfakes will become indistinguishable from genuine content. Tech companies are doing nothing to try to stop that, and even experts are having trouble keeping up.
Also, the case law on how current defamation law applies to deepfakes is not established, and it's unclear how it will go. Government does play a role in regulating technology.
I'm not sure I follow your opening. My example was the government tried to assume authority to police misinformation (in clear violation of the 1A), and they _immediately_ abused it. Thus my point was they cannot be trusted to be the enforcing authority, as you said.
I'd much prefer to wait for a few fraud and defamation court cases to establish norms and precedent, and for people to figure out how to adjust. Bottom-up solutions. The alternative of top-down regulation -- or worse: 1A erosions -- is much more systemically dangerous and longer-lasting. If regular people don't know what the solution is yet, then the government certainly doesn't know yet either. Despite not knowing the solution, they'll rush to implement one anyway if the voters are clamoring for it.
Our founders put the 1A in place to protect against government overreach, and while your closing is technically correct, regulatory overreach can be especially dangerous because it can bypass Congress entirely. We shouldn't rush toward it without knowing the tradeoffs first.
I also disagree with FIRE. I support free expression, and fair use of likenesses for critical or artistic purposes. I do not support realistic fakes that portray events that didn’t happen. That’s defamation and slander, unless it’s clearly labeled satire. And it has nothing to do with AI, the principle is the same. It’s just that AI makes it easier to create such fakes.
A history teacher making a video of Ronald Reagan saying things he never said for “educational purposes”? What is the possible benefit of that? Have you lost your mind?
(I don't know the answer to the following question. Like everyone else, I'm just puzzling through possible problems and edge-cases in this new age we find ourselves in.)
If an AI creation strays into defamation or slander, would those existing laws be enough to prevent abuses? The article seems to suggest they would, and you seem to be suggesting the same, yet you also said you disagree. Do you think AI-specific laws are needed?
I also wonder about your final point. Good education would require keeping a clear distinction between an AI recreation "with artistic liberties" and a claim of direct quotation. However, there are many instances of quotations associated with historical figures that persist because they capture the essence of the person, even if they're ahistorical. As a concrete example from a philosophy lecture, "The historical Socrates comes to us, almost entirely, through the literary embellishments of Plato. You end up with two Socrates. So what's the relationship between them? Well, they can't really be separated. It's impossible. We can't deny the historical Socrates but the *point of Socrates* is not the historical person, but rather how he is taken up and internalized as the beginning of western thought." He also makes that claim about sayings associated with Jesus that are attributed to writers who came well after. I've heard similar things about quotes attributed to Churchill.
Among other things that are horrible about this act is the fact that it's a litigation lawyer's full employment dream. There are fuzzy grey lines everywhere.
We Americans already spend way too much time squabbling over things the law has imprudently stuck its nose into, and thus guaranteed that there will be pointless disputes, which in a free society would be left to individual choice.
Excellent analysis! What if this act also hinders creators from using AI for satire or critical commentary on public figures? It feels like a real overreach.