Judge Strikes Part of Anthropic (Claude.AI) Expert's Declaration Because of Uncaught AI Hallucination in a Citation
From Friday's order by Magistrate Judge Susan van Keulen in Concord Music Group, Inc. v. Anthropic PBC (N.D. Cal.):
At the outset, the Court notes that during the hearing, Publishers asked this Court to examine Anthropic's expert, Ms. Chen, and strike her declaration because at least one of the citations therein appeared to have been an "AI hallucination": a citation to an article that did not exist and whose purported authors had never worked together. The Court gave Anthropic time to investigate the circumstances surrounding the challenged citation. Having considered the declaration of Anthropic's counsel and Publishers' response, the Court finds this issue is a serious one—if not quite so grave as it at first appeared.
Anthropic’s counsel protests that this was “an honest citation mistake” but admits that Claude.ai was used to “properly format” at least three citations and, in doing so, generated a fictitious article name with inaccurate authors (who have never worked together) for the citation at issue. That is a plain and simple AI hallucination. Yet the underlying article exists, was properly linked to and was located by a human being using Google search; so, this is not a case where “attorneys and experts [have] abdicate[d] their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers….”
A remaining serious concern, however, is Anthropic’s attestation that a “manual citation check” was performed but “did not catch th[e] error.” It is not clear how such an error—including a complete change in article title—could have escaped correction during manual cite-check by a human being. Furthermore, although the undersigned’s [i.e., the Magistrate Judge’s] standing order does not expressly address the use of AI by parties or counsel, Section VIII.G of [District] Judge Lee’s Civil Standing Order requires a certification “that lead trial counsel has personally verified the content’s accuracy.” Neither the certification nor verification has occurred here. In sum, the Court STRIKES-IN-PART Ms. Chen’s declaration, striking paragraph 9 [which contains the footnote that contains the citation with the hallucination], and notes for the record that this issue undermines the overall credibility of Ms. Chen’s written declaration, a factor in the Court’s conclusion.
Thanks to ChatGPT Is Eating the World for the pointer; it also discusses more about the substantive role of paragraph 9 in the declaration.