Anthropic Admits Error in Legal Battle With Music Publishers
In a recent court filing in Northern California, a lawyer representing Anthropic admitted to using an incorrect citation generated by the company’s Claude AI chatbot in its ongoing legal dispute with music publishers. According to the filing, the citation Claude produced contained an inaccurate title and incorrect authors for the cited source.
The filing, first reported by Bloomberg, states that despite a manual citation check, Anthropic’s legal team failed to identify the errors caused by Claude’s hallucinations.
Anthropic expressed regret for the mistake, attributing it to an “honest citation error” rather than an intentional act of deception. The company’s lawyers admitted that several other inaccuracies in the legal documents were also a result of Claude’s misinterpretations.
This revelation follows allegations against Anthropic’s expert witness, Olivia Chen, who was accused of using Claude to cite fabricated articles in her testimony. In response to those accusations, federal judge Susan van Keulen ordered Anthropic to address the music publishers’ claims.
The lawsuit between the music publishers and Anthropic is part of a broader trend of disputes between copyright holders and technology companies concerning the use of their content in generative AI technologies.
Despite such setbacks, the legal industry continues to embrace AI for a range of tasks. Startups like Harvey, which uses generative AI models to assist legal professionals, are attracting significant funding: Harvey is reportedly in talks to raise more than $250 million at a $5 billion valuation.