Governments across the world want AI to do more of the heavy lifting when it comes to public services. The plan, apparently, is to make things much more efficient, as algorithms quietly handle a country's day-to-day admin.

For example, AI might help tackle tax fraud by working out ways of targeting those most likely to be offending. Or it might help public health services screen for various cancers, triaging cases at scale and flagging those deemed most at risk.

But what happens when such a triaging system makes a mistake? Or when government agencies deploy AI to identify fraud and the model simply gets it wrong? There is already sobering evidence that AI errors can have devastating consequences. In the Netherlands, for example, flawed algorithmic assessments of tax fraud were acted on in ways that tore families apart and separated children from their parents. In that case, a risk-scoring system was used to identify families it deemed likely to be committing benefits fraud. It then fed these assessments into automated operations that ordered repayments, driving innocent households into financial ruin.

So states should be extremely wary of substituting AI for human judgement. The assumption that machines will almost always get it right is simply not true. People's lives cannot easily be reduced to data points for algorithms to draw conclusions from. And when things do go wrong, who is responsible? What happens to human accountability?

These are the kinds of questions that have often been overlooked amid all the clamour, and the vast levels of investment, that AI has attracted. Yet even if we set aside the possibility that this is another speculative bubble ready to burst, there is growing evidence that AI in its current form does not deliver what it promised. The problem of "hallucinations", when AI generates plausible yet nonfactual content, [remains unresolved](https://dl.acm.org/doi/pdf/10.1145/3703155), and expensive developments have often been underwhelming.

Even leading figures in the industry, including the co-founder of OpenAI, have acknowledged that simply making large language models (LLMs) larger will not improve things significantly. Yet these systems are rapidly being embedded into key sectors of our lives, including law, journalism and education.

It's not even that hard to imagine a future university where lectures and assignments are generated by LLMs operated by a particular faculty, to be absorbed and completed by LLMs operated by students. Human learning could then become a byproduct of machine-to-machine communication, and the long-term consequence could be that critical thinking and expertise are hollowed out in the very institutions charged with cultivating them.

All In?

But all of this integration is highly profitable for AI companies. The more AI is woven into public infrastructures and business operations, the more indispensable these firms become, and the harder they are to challenge or regulate. Integration into the defence sector, for example, through the development of autonomous weapons, could simply make a firm too big to fail if a country's military security depended on it. And when things go wrong, the asymmetry of expertise between governments and citizens on one side and AI developers on the other only increases reliance on the very firms whose systems created the problems in the first place.
To understand where this trajectory might lead, it's worth looking back a couple of decades to when social media companies first appeared, apparently with the simple goal of connecting people across the world. Today, though, the reach and power of some of those firms is the source of major concerns around privacy, surveillance and manipulation. There have been scandals over everything from undermining democracy and spreading misinformation to inciting violence.

Yet we now find ourselves experimenting with a potent mix of social media, AI and machine learning. Social media feeds on attention, while LLMs can generate vast amounts of attention-grabbing content. Meanwhile, machine learning systems determine what each of us sees on our various screens, trapping us in ever tighter informational bubbles.

So even if, for the sake of argument, AI evolves as promised, becoming more accurate, more robust and more capable, should we really be ceding control over more domains of life to algorithmic coordination in pursuit of order and efficiency? Technology alone cannot resolve social, economic or moral problems. If it could, children would not go hungry in a world that already produces enough food to feed everyone.

Critics of AI are often dismissed as Luddites. But this is a misreading of history. The Luddites, the 19th-century English textile workers who opposed some of the automated machinery in the mills where they worked, were not against technology per se. They objected to its misuse and unreflective deployment, and sought a deeper examination of how technology reshapes work, communities and everyday life. Some 200 years later, surely that remains a reasonable demand.

Akhil Bhardwaj does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.