Artificial intelligence (AI) has become a central issue in the tech world, with billions of dollars on the line as courts in the US and UK grapple with whether tech companies can legally train their AI models on copyrighted books. Authors and publishers have raised concerns, and multiple lawsuits have been filed on the issue. Now, researchers have discovered that one AI model not only used popular books in its training data but also memorized portions of their contents verbatim.
The debate surrounding this issue revolves around whether AI developers have the legal right to use copyrighted works without obtaining permission. Previous research revealed that many large language models (LLMs) powering AI chatbots and other generative AI programs were trained on a dataset known as “Books3,” which includes nearly 200,000 copyrighted books, some of which are pirated copies. Developers argue that the AI models generate new combinations of words based on their training, transforming rather than replicating the copyrighted material.
However, recent research findings have shed light on the extent to which AI models retain the exact text of the books in their training data. While many models do not reproduce the books verbatim, it was discovered that one of Meta’s models has memorized significant portions of certain books. Should the courts rule against the company, researchers estimate that Meta could face damages of at least $1 billion.
Mark Lemley, a professor at Stanford University, emphasized that AI models do more than simply learn general word relationships, yet are not merely "plagiarism machines." The legal implications of training AI on copyrighted materials remain complex, with ongoing cases such as Kadrey v. Meta Platforms testing the boundaries of fair use.
In a recent study, Lemley and his team tested AI memorization by splitting book excerpts into prefix and suffix sections to see if the models could complete the text verbatim. Excerpts from 36 copyrighted books, including popular titles like “A Game of Thrones” and “Lean In,” were used in the experiment. Results showed that Meta’s Llama 3.1 70B model had memorized significant portions of books like “Harry Potter,” “The Great Gatsby,” and “1984.”
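The prefix/suffix test described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code: the function and variable names are hypothetical, and `generate` stands in for whatever text-completion call a given LLM exposes.

```python
# Hypothetical sketch of the prefix/suffix memorization probe: split each
# excerpt in two, feed the model the prefix, and check whether it reproduces
# the real suffix word-for-word.

def probe_memorization(generate, excerpts, prefix_words=50, suffix_words=50):
    """Fraction of excerpts whose next `suffix_words` the model completes verbatim."""
    hits = tested = 0
    for text in excerpts:
        words = text.split()
        if len(words) < prefix_words + suffix_words:
            continue  # excerpt too short to split into prefix and suffix
        tested += 1
        prefix = " ".join(words[:prefix_words])
        expected = words[prefix_words:prefix_words + suffix_words]
        completion = generate(prefix)
        if completion.split()[:suffix_words] == expected:
            hits += 1
    return hits / tested if tested else 0.0

# Toy demonstration: a fake "model" that has memorized one passage exactly.
BOOK = " ".join(f"word{i}" for i in range(120))

def toy_model(prefix):
    # Return whatever text follows the prefix in the memorized passage.
    return BOOK[len(prefix) + 1:]

rate = probe_memorization(toy_model, [BOOK])  # 1.0: the suffix is reproduced
```

A model that has not memorized a passage will fail the verbatim comparison, so the returned rate separates genuine memorization from ordinary paraphrase-style generation.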
The researchers estimated that infringement of even 3% of the books in the Books3 dataset could lead to damages approaching $1 billion, highlighting the financial risk for AI developers. While this testing method offers insight into AI memorization, legal experts such as Randy McCarthy caution that it does not resolve the broader question of whether companies have the right to train AI models on copyrighted works under the US fair use doctrine.
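One way a figure of that magnitude could arise is sketched below. This is a back-of-envelope illustration only, not the study's stated calculation; the statutory ranges are from US copyright law (17 U.S.C. § 504(c)), which allows $750 to $30,000 per infringed work, rising to as much as $150,000 per work for willful infringement.

```python
# Illustrative arithmetic only; the study's own damages model is not
# spelled out in this article.
books_in_dataset = 200_000   # approximate size of the Books3 dataset
infringed_share = 0.03       # the 3% figure cited by the researchers
willful_cap = 150_000        # statutory maximum per work for willful infringement

infringed_works = int(books_in_dataset * infringed_share)  # 6,000 works
max_exposure = infringed_works * willful_cap               # $900,000,000
```

Even at a fraction of the willful-infringement cap, a few thousand infringed works would put potential liability in the hundreds of millions of dollars, which is consistent with the researchers' billion-dollar order of magnitude.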
In the UK, where copyright laws are stricter, the issue of AI memorization could have significant implications. Robert Lands, a lawyer at Howard Kennedy, noted that UK copyright law follows the “fair dealing” concept, providing limited exceptions to copyright infringement. Models memorizing pirated books may not qualify for this exception, raising further legal challenges in the AI landscape.
As the legal battles continue, the intersection of AI and copyright law remains a complex and evolving area that will shape the future of AI development and intellectual property rights.