A class-action lawsuit against Anthropic could expose the AI company to billions in copyright damages over its alleged use of pirated books from shadow libraries like LibGen and PiLiMi to train its models. While a federal judge ruled that training on lawfully obtained books may qualify as fair use, the court will hold a separate trial to address the allegedly illegal acquisition and storage of copyrighted works. Legal experts warn that statutory damages could be severe, with estimates ranging from $1 billion to over $100 billion.

Leading AI lab Anthropic is reckoning with a legal battle that could jeopardize the company’s future.

The class-action lawsuit against the company centers on Anthropic’s use of potentially pirated books to train its large language model, Claude, and could leave the company on the hook for billions of dollars’ worth of damages.

According to court filings, the company downloaded millions of copyrighted works from shadow libraries like LibGen and PiLiMi to train AI models and build a “central library” of digital books that would include “all the books in the world” and preserve them indefinitely. The plaintiffs, who include authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, allege that millions of these works were obtained from piracy websites in direct violation of copyright law.

The judge presiding over the case, William Alsup, recently ruled that training AI models on lawfully acquired books qualifies as “fair use,” and that AI companies do not need a license from copyright holders to conduct such training, a decision viewed as a major win for the AI sector.

However, the still-unresolved issue is how Anthropic obtained and stored the copyrighted books. The judge drew a distinction when it came to the use of pirated materials, advising Anthropic that a separate trial “on the pirated copies” and “the resulting damages” would be forthcoming.

“The problem is that a lot of these AI companies have scraped piracy sites like LibGen … where books have been uploaded in electronic form, usually PDF, without the permission of the authors, without payment,” Luke McDonagh, an associate professor of law at LSE, told Fortune.

“The judge seems to be suggesting that if you had bought a million books from Amazon in digital form, then you could do the training, and that would be legal, but it’s the downloading from the pirate website that is the problem, because there’s two things, there’s that acquiring of the copy, and then the use of the copy,” he added.

Santa Clara law professor Ed Lee said in a blog post that the ruling could leave Anthropic facing “at least the potential for business-ending liability.”

The plaintiffs are unlikely to prove direct financial harm, such as lost sales, and are likely instead to rely on statutory damages, which can range from $750 to $150,000 per work. That range depends heavily on whether the infringement is deemed willful. If the court rules that Anthropic knowingly violated copyright law, the resulting fines could be enormous, potentially in the billions, even at the lower end of the scale.

How many works will be included in the class action, and whether the jury will find willful infringement, remain open questions, but potential damages could range from hundreds of millions to tens of billions of dollars. Even at the low end, Lee argues, damages in the range of $1 billion to $3 billion are possible if just 100,000 works are included in the class action.
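For a rough sense of how those statutory figures scale, here is a back-of-the-envelope sketch. It is illustrative only: the per-work amounts are the statutory range cited above, but the class sizes are hypothetical, and any actual per-work award would be set by the jury within that range.

```python
# Illustrative statutory-damages arithmetic using the range cited above.
# The class sizes below are hypothetical, not findings in the case.

STATUTORY_MIN = 750        # USD per infringed work (non-willful floor)
STATUTORY_MAX = 150_000    # USD per work if infringement is found willful

def damages_range(num_works: int) -> tuple[int, int]:
    """Return (low, high) total statutory damages for a class of num_works."""
    return num_works * STATUTORY_MIN, num_works * STATUTORY_MAX

for works in (100_000, 1_000_000):  # hypothetical class sizes
    low, high = damages_range(works)
    print(f"{works:,} works: ${low:,} to ${high:,}")
# 100,000 works: $75,000,000 to $15,000,000,000
# 1,000,000 works: $750,000,000 to $150,000,000,000
```

Lee’s estimate of $1 billion to $3 billion for a 100,000-work class sits toward the lower end of that span.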
That figure rivals the largest copyright damage awards on record and could far exceed Anthropic’s current $4 billion in annual revenue. Lee estimated that the company could be on the hook for up to $1.05 trillion if a jury decides that the company willfully pirated 6 million copyrighted books.

Anthropic did not immediately respond to a request for comment from Fortune. However, the company has previously said it “respectfully disagrees” with the court’s decision and is exploring its options, which might include appealing Alsup’s ruling or offering to settle the case. The trial, the first involving a certified class action against an AI company over the use of copyrighted materials, is currently scheduled for Dec. 1.

The verdict could determine the outcomes of similar cases, such as a high-profile ongoing battle between OpenAI and dozens of authors and publishers. While the courts do appear to be leaning toward allowing fair use arguments from AI companies, there is a legal divergence over how the acquisition of copyrighted works from shadow sites should be treated.

In a recent copyright case against Meta, Judge Vince Chhabria argued that the transformative purpose of the AI use effectively legitimizes the earlier unauthorized downloading. The ruling, according to McDonagh, suggested that the positive, transformative use of the works could “correct” the initial problematic acquisition. Judge Alsup, by contrast, viewed the downloading of books from unauthorized shadow libraries as “inherently wrong,” suggesting that even if the AI training itself might be considered fair use, the initial acquisition of the works was illegitimate and would need to be compensated.

The two judges also diverged on whether AI-generated outputs could be deemed to compete with the original copyrighted works in their training data. Judge Chhabria acknowledged that if such competition were proved it might undercut a fair use defense, but found that, in the Meta case, the plaintiffs had failed to provide sufficient evidence of market harm. Judge Alsup, meanwhile, concluded that generative AI outputs do not compete with the original works at all.

The legal question around AI companies and copyrighted work has also become increasingly political, with the current administration pushing to allow AI companies to use copyrighted materials for training under broad fair use protections in an effort to maintain U.S. leadership in artificial intelligence. McDonagh said the case against Anthropic was unlikely to leave the company bankrupt, as the Trump administration would be unlikely to allow a ruling that would essentially destroy an AI company.

Judges are also generally averse to issuing rulings that could lead to bankruptcy unless there is a strong legal basis and the action is deemed necessary. Courts have been known to consider the potential impact on the company and its stakeholders when issuing rulings that could result in liquidation.

“The U.S. Supreme Court, at the moment, seems quite friendly to the Trump agenda, so it’s quite likely that in the end, this wouldn’t have been the kind of doomsday scenario of the copyright ruling bankrupting Anthropic,” McDonagh said. “Anthropic is now valued, depending on different estimates, between $60 and $100 billion. So paying a couple of billion to the authors would by no means bankrupt the organization.”

This story was originally featured on Fortune.com