MIT researcher gives advice on how to tame, harness AI ‘workslop’ 

Good morning. If you’ve ever spent an afternoon untangling an AI-generated report that looked convincing but made no sense, you’ve encountered “workslop.”

“Workslop” is AI-generated content that masquerades as good work but lacks the substance to meaningfully advance a given task. According to a recent study by researchers at BetterUp Labs and the Stanford Social Media Lab, about 40% of U.S. desk workers encounter workslop in a given month. Each incident takes an average of two hours to resolve, resulting in an estimated monthly cost of $186 per affected employee and roughly $9 million in annual costs for a company with 10,000 employees (about 4,000 affected workers × $186 × 12 months).

This summer, I spoke with Michael Schrage, a research fellow at MIT Sloan’s Initiative on the Digital Economy, about AI prompt-a-thons: structured, sprint-based sessions for developing prompts for large language models (LLMs). I recently reconnected with him to discuss the implications of workslop.

His prediction: Workslop won’t just be a productivity cheat; it’ll become a governance and oversight challenge.

“Ultimately, serious senior management will demand workslop metrics the same way they demand quality metrics,” Schrage anticipates. “They’ll use LLMs to detect slop patterns in computational tasks—essentially, you’ll fight AI with AI.”

He continued, “We’ll soon see all kinds of countermeasures. You’ll tune or train ChatGPT or Gemini to recognize and filter slop before high-value humans have to waste time on it.”

The bigger question isn’t if or when organizations will develop slop detection, Schrage said. “It’s whether they’ll formalize it or keep it underground,” he explained. “If I suspect you’re giving me slop, I’m going to drop it into my slop detector—and then you and I are going to have a little conversation about your professional judgment. Slop detection should push people to thoughtfully step up instead of outsourcing their thinking to LLMs.”
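Schrage doesn’t spell out what such a detector would look like, but the “fight AI with AI” idea is easy to prototype. Here is a minimal sketch in Python, assuming the OpenAI SDK and an OPENAI_API_KEY in the environment; the model name, scoring rubric, and flagging threshold are my illustrative assumptions, not details from Schrage or the BetterUp study.

```python
# Sketch of an LLM-based "slop detector": ask one model to score a document
# for workslop before a human reviewer spends time on it.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are screening documents for 'workslop': AI-generated content that "
    "looks polished but lacks the substance to advance a task. Score the "
    "document from 0 (substantive) to 10 (pure slop), citing concrete "
    "signals such as vague claims, padding, or unsupported numbers. Reply "
    "with the score alone on the first line, then a short rationale."
)

def screen_for_workslop(document: str, threshold: int = 6) -> tuple[bool, str]:
    """Return (likely_slop, rationale) for a submitted document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": document},
        ],
    )
    reply = response.choices[0].message.content or ""
    first_line, _, rationale = reply.partition("\n")
    try:
        score = int(first_line.strip().split()[0])
    except (ValueError, IndexError):
        score = 0  # unparseable score: fail open rather than flag the work
    return score >= threshold, rationale.strip()
```

Failing open on an unparseable score is deliberate: in Schrage’s framing, a flag is the start of “a little conversation about your professional judgment,” not an automatic rejection, so the tool should err toward letting a human decide.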
Transparency and the new definition of “show your work”

At MIT, for example, Schrage confessed he’s basically given up on plagiarism detection and accepts that bright students cut corners with LLM help. But he wants people to be honest about their choices.

In his executive education classes, he warns students: “If you’re using LLMs, all I ask is that you include your prompts. Show me how you’re prompting your work. That’s my notion of transparency and invisibility. If you won’t proudly share your prompts, then I’ll assert you’re faking what’s yours.”

“Frankly,” he said, “my bet is we’re going to see more and more organizations insist that showing your work means showing your prompts.” This will become even truer as multimedia/multimodal LLMs join the enterprise, he added.

So perhaps certified public accountants will become certified prompting associates, he half-jokes. Maybe finance professionals will audit prompts much the way they now audit spreadsheets. Ultimately, transparency won’t be optional.

On the compliance side, Schrage offers a tactical workaround for companies worried about feeding proprietary data to LLMs: do competitive analysis instead. Analyze publicly available data from competitors, such as earnings calls, projections, and filings. “An FP&A department that can’t use LLMs with internal projections can still analyze competitor projections and incorporate those insights,” he said. “Sometimes the external view is more valuable anyway.”

“If I want to be provocative,” Schrage said, “I’ll predict your prompt history will soon matter as much as your performance reviews. Because performance reviews measure outcomes. Prompts reveal whether you can actually think.”

He added, “And there’s no hiding from that—no matter how smart your Copilot or LLM becomes.”

Sheryl Estrada
sheryl.estrada@fortune.com

This story was originally featured on Fortune.com