AI, Free Speech, and Duty
Just two weeks ago, my Free Speech Unmuted co-host Prof. Jane Bambauer (Florida) and I discussed Garcia v. Character Technologies. (Besides being a leading First Amendment scholar, Jane also teaches and writes about tort law.) I’m therefore especially delighted to pass along some thoughts from her on yesterday’s decision in the case:
AI, Free Speech, and Duty [by Jane Bambauer]
The case against Character.AI, based on the suicide of a teenager who had become obsessed with his Daenerys Targaryen chatbot, produced an important opinion yesterday. Judge Conway of the U.S. District Court in Orlando declined to dismiss almost all of the plaintiffs’ claims, and also refused “at this stage in the litigation” to treat AI or chatbot output as speech under the First Amendment. I think the opinion has problems.
Eugene has already laid out some of the reasons that the court’s First Amendment analysis is flawed. (E.g., could the Florida legislature really pass a law banning ChatGPT from producing content critical of Governor DeSantis? Of course not.) I want to pile on a little bit—I can’t help myself—and then connect the free speech issues to the analysis of tort duty.
Is Chatbot Output “Speech”?
The defendants (Google and Character AI) argued that chatbot output is protected speech, much like computer-generated characters in video games, and they drew analogies to other expressive technologies that were once new as well. But the court found that these arguments “do not meaningfully advance their analogies” because the defendants didn’t explain how chatbot output is expressive.
In the court’s opinion, with an assist from Justice Barrett, chatbot output is not expressive because it is designed to give listeners the expression that they are looking for rather than choosing for them:
The Court thus must decide whether Character A.I.’s output is expressive such that it is speech. For this inquiry, Justice Barrett’s concurrence in Moody on the intersection of A.I. and speech is instructive. See Moody, 603 U.S. at 745–48 (Barrett, J., concurring). In Moody, Justice Barrett hypothesized the effect using A.I. to moderate content on social media sites might have on the majority’s holding that content moderation is speech. Id. at 745–46. She explained that where a platform creates an algorithm to remove posts supporting a particular position from its social media site, “the algorithm [] simply implement[s] [the entity’s] inherently expressive choice ‘to exclude a message.'” Id. at 746 (quoting Hurley v. Irish-American Gay, Lesbian and Bisexual Grp. of Boston, Inc., 515 U.S. 557, 574 (1995)). The same might not be true of A.I. though—especially where the A.I. relies on an LLM:
But what if a platform’s algorithm just presents automatically to each user whatever the algorithm thinks the user will like …? The First Amendment implications … might be different for that kind of algorithm.
This reasoning was ill-conceived when Justice Barrett first wrote it. When Disney greenlights a superhero movie, it’s plausible that the decision to make a movie about people flying around and looking cool is mostly or even entirely motivated by giving paying moviegoers whatever they want, and that Disney would choose the backstory, dialogue, wardrobe, and everything else to maximize profits if it could. But this wouldn’t change the fact that the movie is speech.
Justice Barrett’s reasoning is even more untenable in a case against chatbots. Is there really any doubt about the nature of written correspondence responding to a person’s prompts and questions? It’s difficult to conceive of a more expressive product than words designed to address the questions and interests that a listener has actively cultivated and asked for.
Lest there be any doubt, Judge Conway’s own opinion, just a couple of sections later, can’t help but use the word “expression” to describe chatbot output. When discussing the products liability claim, Judge Conway decided that the case may proceed to the extent the product claim is based on the app’s failure to confirm users’ ages or to give users greater control over excluding indecent content, and not on the actual content of the conversations. “Accordingly, Character A.I. is a product for the purposes of Plaintiff’s product liability claims so far as Plaintiff’s claims arise from defects in the Character A.I. app rather than ideas or expressions within the app” (emphasis added). This analysis seems correct to me, and by restricting the product claims the court has sidestepped the First Amendment defenses that media defendants typically bring to design defect cases. My point here is just to show that the court couldn’t even get through its own opinion without referring to chatbot output as expression.
Free speech first principles also strongly suggest that AI-generated content is protected. In my opinion, the most basic and sensible c…