It was clear that things had gone off the rails when a run-of-the-mill dispute with a homeowners association spiraled so far that the plaintiffs started invoking the Racketeer Influenced and Corrupt Organizations (RICO) Act — a 1970 federal law meant for prosecuting organized crime groups.

The drama started back in early 2025. A married couple in Florida was late on HOA fees totaling a few hundred dollars. Rather than dispute the fees directly, they took the unusual step of filing a lawsuit against the association, arguing that a state statute rendered the collection of the fees illegal. They opted to represent themselves as they took their fight to court — "pro se," in attorney lingo — with the help of generative AI, which they used to draft and file an increasingly bizarre barrage of legal paperwork.

The couple were "just swinging a sword at anything [they] could possibly hit," a lawyer involved with the case told Futurism. "Initially, nobody realized how unhinged things would get."

The husband-and-wife duo was using AI to churn out virtually unlimited new accusations and legalese, resulting in a dizzying flood of AI-generated court documents. And as hundreds of pages of AI-generated material piled up, the attorney we spoke to recalled, the plaintiffs' claims grew increasingly wild. Within weeks of filing the suit, what had started as a minor dispute devolved into bombastic claims that read less like housing law and more like a screenplay: the HOA and its lawyers, the plaintiffs alleged, were together involved in a sprawling RICO conspiracy to defraud homeowners, and needed to be held to account by federal investigators.

"It was just draining," recalled the lawyer, who spoke on the condition of anonymity because she didn't want to risk provoking the couple further. "We were just getting hammered. Every day."

Eventually, the couple started filing AI-generated bar complaints against individual lawyers involved with the case, and claiming to have alerted the FBI to their supposed crimes.

"It evolved into this thing where every day it'd be five, ten, 12 different filings, all sort of doing the same thing, every day, saying, 'I want my judgment today. I want sanctions against all the lawyers. All the lawyers should be disbarred. All of them are committing fraud. There are RICO violations,'" said the lawyer.

At one point, one of the firms involved with the suit requested that the plaintiffs share their AI prompts with the court. They refused, responding that they were in the process of building a "proprietary" AI framework designed to interpret and analyze Florida law, which they planned to turn into a business.

The allegations were eventually dismissed with prejudice, meaning that the plaintiffs are barred from bringing the same claims again — a resolution often handed down in cases judges find to be frivolous or abusive of the court system. The fees the couple failed to pay in the first place, meanwhile, totaled roughly what they paid to file the initial complaint.

***

Chaotic provocateurs who pepper courts with frivolous legal actions, often motivated by animus or a poor understanding of the law, are nothing new. But the use — and misuse — of AI chatbots like OpenAI's ChatGPT and Google's Gemini by litigants, particularly non-lawyers who choose to represent themselves in legal fights, can pour gas on the flames of this old problem.
Easy-to-access chatbots offer a powerful new way to generate a firehose of legal paperwork that looks, at least at first glance, legitimate — and it's falling on judges, clerks, and attorneys to sort out the resulting deluge, work that lawyers say is intensely time-consuming and expensive.

"It triples the amount of paperwork that I have to go through," said Sophia Ficarrotta, an attorney in Washington state who represents victims of intimate partner violence and often encounters defendants who are using AI to make their way through the court system. "It's really tedious. And then when I go through all those filings, I have to bill my clients for it."

"It's really difficult," she added, "to spend time looking through something that I know that my clients shouldn't have to pay for."

While some people are reportedly finding success in legal standoffs with the help of AI, lawyers like Ficarrotta told us that chatbots are also facilitating chaotic and burdensome legal conflicts across the US as pro se parties like the Florida couple use them to file oceans of documents in support of flawed or groundless claims, supercharging the impact of flimsy cases and wreaking havoc on already slow-moving courts.

We spoke to lawyers and paralegals — most of whom opted not to be identified by name, citing concern over client privacy or fear of inciting further legal action from eager litigants — who work in a broad selection of specialties. We also reviewed large numbers of AI-generated court documents they pointed us to, filed by self-represented plaintiffs in local, state, and federal court.

The AI cases were all over the map. Some dealt with payment, collections, and foreclosures. Others were family law matters including custody disagreements and divorce settlements. There were disputes between individuals and small local businesses, as well as individuals against one another. Some of the more immediately fantastical allegations were brought against the government, large corporations, and public figures like billionaires.

The phenomenon is even exasperating institutions. In a lawsuit filed earlier this month against OpenAI, the insurance company Nippon Life Insurance alleged that legal advice from ChatGPT led a woman to fire her human lawyer and, representing herself instead, launch a dubious new lawsuit against the insurer over a settled disability claim that had originally been dismissed.

In the complaint, which accuses ChatGPT of acting as an unlicensed lawyer, Nippon claims that it incurred a staggering $300,000 in legal fees as it defended against the wave of frivolous AI-generated content being filed by the woman. By stoking groundless legal theories, the complaint argues, ChatGPT "aided and abetted" the woman's "abuse of the judicial process." In response to Reuters, OpenAI said that Nippon's lawsuit "lacks any merit whatsoever" and pointed to its terms of service, which forbid using ChatGPT-generated output "for any purpose that could have a legal or material impact" on another person. (We reached out to a legal representative for Nippon, but didn't hear back.)

The phenomenon is the latest illustration of the gap between AI's promise and its real-world effects. Optimists say the tech can help amateurs take on intimidating institutions without the help of expensive subject matter experts like attorneys — but that's not always a good thing. It can also embolden cranks and agitators — or, lawyers we spoke to suggested, people dealing with mental health or substance abuse issues — to embark on quixotic legal quests that waste time and money, or drag out what should be standard proceedings for weeks or even months as people leaning on the tech churn out a flood of bad arguments.

It's one thing to scheme about the law with a sycophantic chatbot; it's another to craft and present a sound argument.
And a good human lawyer's job, legal professionals we spoke to emphasized, isn't just to spam the court with documents on your behalf. It's also to tell you when you're wrong.

"A lot of what you pay the lawyer for is lawyers' judgment," said the lawyer from the HOA case. "Knowing when to push, when not to push, what's going to work with the judge, what's not going to work with the judge, what's a feasible argument, what's not."

"If you're just mindlessly firing all these things that are AI-generated," she added, "somehow that judgment just goes out the window."

***

Legal professionals we spoke to emphasized sheer volume as the most frustrating new element of chatbot-era pro se cases. Whereas self-represented litigants pre-AI might've submitted initial complaints as brief as one or two pages, these filings sometimes now total hundreds of pages — a length that's unusual even in conventional practice.

"These filings that are coming from the generative AI are long, very long," said one lawyer. Another said one self-represented plaintiff she encountered filed a nearly 600-page complaint.

It doesn't stop there. Once a complaint is filed, attorneys told us, litigants using AI often proceed to file a steady drip of new motions and other documents, prompting the professionals on the other side of the case to pour a huge number of hours into reading and responding to the outflow of material.

"Some file as many as four [motions] a week, and all need to be responded to," lamented another lawyer. "It takes countless hours each week just to respond to them."

Compounding that volume, legal workers say, is a disorienting veneer of legibility that AI can bring to flawed or baseless arguments. AI-generated court documents we reviewed showed self-represented litigants filing complex-looking theories packed with confident legal jargon.
But often, these documents are the product of a process that could be called cogency-washing: chatbots taking incomplete, biased, or even delusional claims and organizing them into authoritative-sounding nonsense.

"The burden shifts over to who you're suing to have to rebut everything, and actually litigate against you, and do an enormous amount of work," observed another attorney, who works for a local government on the West Coast. His office, he told us, has been struggling to keep up with a wave of AI-generated legal actions and correspondence from locals.

All this time adds up. One lawyer relayed that a dispute that historically would've cost a client about $2,000 wound up costing over $20,000 as the opposing party filed one AI-generated motion after another; another said that a similar case pushed what should've been about $5,000 in client fees to over $70,000.

"There's no easy or inexpensive way to get a vexatious litigant out of a case," said one lawyer. "Before, such a person would need to spend the money on an attorney or find an attorney that is willing to work on an arrangement that does not require money up front (hard to find). That is no longer the case."

In some cases, lawyers told us, they've been able to successfully petition a judge to make AI-happy pro se litigants reimburse their clients for a portion of those fees. But in other instances, state statutes can prevent attorneys from making that kind of request, meaning that the financial burden of AI-exacerbated court battles may still fall on their clients.

Several lawyers noted the persistence of hallucinations in these cases, with plaintiffs citing AI-fabricated laws that don't actually exist, or receiving legal advice so mangled that it completely misleads them.

"It's a civil case where the rules of evidence don't apply, but their AI is telling them that they should be citing criminal cases and using criminal pattern jury instructions," said Ficarrotta, the Washington lawyer.
"But my case doesn't have a jury. We're not going to trial. It's just a hearing."

Even when an AI-contrived legal argument proves to be unserious on its merits, the consequences can be anything but. In addition to ending up saddled with the opposing counsel's attorney fees, many of these AI-focused petitioners have faced court sanctions including expensive fines and harsh dismissals from fed-up judges. Others have wound up labeled vexatious litigants, which generally means that they now need to request permission to file future lawsuits.

The time and energy these cases eat up go beyond that of individual lawyers. As Ficarrotta noted, the commissioner or judge hearing the case "has to go and spend time reading all of it, and reviewing all of it, before they even come out and sit on the bench."

In one case she was involved in, Ficarrotta said, that meant the commissioner had to spend her night looking through "500 pages of AI-generated pleadings, none of which were relevant." And meanwhile, she added, "there were a bunch of people also on that docket who were waiting to be heard."

"They're preventing other people from accessing the justice that they need," she continued, "just by putting themselves on the docket."

"The judges in my somewhat rural county are not pleased," added another lawyer, "with their dockets being crammed with pleadings that are mostly nonsense generated by AI."

As one Texas-based paralegal put it, the disruption happens "all the way down" the court system.

"The courts take all filings seriously. And all of this sh*t, before it gets in front of a judge, is clogging the system," said the paralegal. "We have to respond on the defense side, but also, the clerks have to do stuff at the courts. Staff attorneys are reviewing some filings. They're having to look up these bullsh*t cases that don't even exist."

Have you been involved in a legal situation involving AI? Get in touch with us at tips@futurism.com. We can keep you anonymous.

***

At times, AI use has spilled over into personal harassment of legal professionals.

A few years back, one lawyer told us — before ChatGPT and similar bots were even released — he took on a pro bono lawsuit on behalf of a young artist. The case centered on a straightforward payment dispute; the plaintiff he represented won, meaning that the defendant now owed money. Her payments were spotty, the lawyer said, but overall, the case had been open-and-shut.

But in 2025, the defendant suddenly re-emerged. In a torrent of emails, she outlined a new and seemingly AI-generated legal theory, in which she argued that she didn't have to pay after all — and that the lawyer himself had engaged in some kind of grave misconduct.

"Everything is blown out into such crazy proportions," the lawyer said of the emails, which he described as a "fan fiction" narrative of the case.

"I mean, I'm a lawyer. I'm the opposing counsel here. I'm sort of the bad guy," he said. But it'd been "pretty vanilla, my representation in this. It wasn't even that adversarial."

As of January 2026, he said, the defendant had sent him over 300 accusatory emails, all of which appeared to be AI-generated. And like the plaintiffs in the HOA case, she escalated her qualms to the state, urging in AI-spun complaints to the state bar that the lawyer should lose his license to practice.

"Whatever version of events that she fed into it," said the lawyer, "the AI somehow compounded it."

***

To be sure, people filing their own lawsuits certainly aren't the only ones who've been caught misusing AI in a courtroom. Many trained lawyers have been caught submitting drafts with nonexistent, AI-hallucinated case law, drawing ire and sanctions from judges.
As judges have noted in searing condemnations, this kind of lazy error is a matter of negligence: legal professionals who should know better are leaning on cheap-and-easy tech to do sloppy work.

In contrast, much of the problematic AI use by self-represented litigants — as described by lawyers we spoke with and outlined in AI-generated court filings we reviewed — is poorly captured by the idea of straightforward negligence. The laypeople bringing the suits think they have a case, and instead of explaining why they don't, like a responsible lawyer would, chatbots designed to be agreeable to users are helping to craft spurious legal theories that send everyone involved into an unnecessary legal quagmire.

"AI will absolutely tell you what you want to hear," said the local government attorney. "And right now it will do that and hallucinate case law. But even if it starts getting law correct, that doesn't mean it can really make independent heuristic judgment on whether the case is still worthwhile."

Lawyers we spoke to emphasized that their value is, in large part, discernment: identifying strong arguments, understanding the processes and nuances of the court, and steering potential litigants toward the best course of action — or, at times, toward no action at all.

"If someone says to me, 'Do you think I have a strong case?' I'm always going to be real with them. Because, one, we can never guarantee anything, but two, we have to set realistic expectations," Ficarrotta said. "There is no sounding board with AI. There is no setting realistic expectations. Which is why it's so surprising to pro se litigants when they come in and things aren't the way that they thought they were going to be."

In other words, a sycophantic lawyer is a bad lawyer.
And when ingratiating chatbots collide with consequential legal choices, they can steer users down self-destructive paths.

"There are a lot of serious consequences once you start engaging the judicial system, and if you're not doing it in good faith, or if you start screwing up a lot, you can get in quite a bit of trouble," the local government attorney warned, pointing to consequences like sanctions, as well as the reputational damage someone could suffer once a case enters the public record. "And that's just on the civil side."

"I think people underestimate," he added, "how litigation impacts their lives."

Beyond sanctions, some courts that have encountered this issue have taken limited steps to curb unwieldy AI lawsuits, for example by requiring all parties to disclose when documents are AI-generated. But the phenomenon sits at a complicated crossroads, and even lawyers who say AI has burdened courts were hesitant to dissuade self-represented people from using the tech entirely.

The legal system is riddled with serious access problems that prevent a massive number of people — particularly poor and marginalized ones — from obtaining justice, regardless of whether their claims have merit. The vast majority of those who turn to self-representation do so purely out of need; the possibility that a new technology could serve as a democratizing force within the legal world, legal access advocates urge, is essential terrain to explore.

"We have an access to justice crisis in our country," said Lou Rulli, a professor of law at the University of Pennsylvania who has written extensively on legal access.
"So many Americans just don't have the ability to obtain counsel, even in the most important things affecting their lives."

"We're still at an early stage of AI," he continued, "but this represents an opportunity to democratize our legal system, to demystify our complex court procedures, and to help to give folks who don't have access to counsel an opportunity to understand complex things in more simple language — and to have the support, at least in limited ways, to protect their most vital interests."

Jennifer Gundlach, a law professor at Hofstra University who directs the Hofstra Law Pro Se Legal Assistance Program — a legal aid center that helps self-represented people navigate the complexities of the legal system — is also optimistic about generative AI. She shared that she's seen firsthand self-represented folks find success with the help of the tech, particularly when chatbot use is coupled with informed guidance from organizations like hers.

"We have to be creative and forward-thinking in how to use this tool responsibly," Gundlach said of generative AI. "And if lawyers have great power in the monopoly of the legal profession, we also have to make it our responsibility to do what we can to improve access to our justice system when we are not willing or able to provide the realm of legal representation that's needed."

In one case we found, after a self-represented plaintiff was accused of using AI to formulate unclear arguments, a judge's final order — dismissing the case, which the judge decided lacked merit — spoke to this very tension.
He wrote that "artificial intelligence may ultimately prove a helpful tool to assist pro se litigants in bringing meritorious cases to the courts," and that "in that way, artificial intelligence has the potential to contribute to the cause of justice."

But, he cautioned, "accessing any beneficial use of artificial intelligence requires carefully understanding its limitations."

"For example, if merely asked to write an opposition to an opposing party's motion or brief, or to respond to a court order, an artificial intelligence program is likely to generate such a response," the order continues, "regardless of whether the response actually has an arguable basis in the law."

Indeed, the risk of not understanding the tech's limitations is that over-reliance on chatbots can send people tunneling down expensive, life-changing holes that they may never have needed to start digging in the first place — dragging others down with them as they go.

"The barriers shouldn't be insurmountable," the local government attorney reflected. But AI "lowers that barrier immensely," he warned, and "just because you can" file a lawsuit, it "doesn't mean you should."

"And just because the 'can' part has suddenly become a lot easier," he added, "it doesn't mean it's still a good idea."

More on AI and the real consequences of reinforcing unreality: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking