Decentralised artificial intelligence has been hailed as one of the most profound innovations of our time, promising to give users control of the most transformative technologies. Yet the industry faces some daunting challenges if that vision is to be fulfilled.

Proponents of decentralisation imagine a world where AI is controlled not by a select few big tech corporations, but by a global community that invites everyone to participate and have their say. It’s an audacious goal, but as it slowly comes into view, a question arises: are we really on the cusp of democratising access to intelligent automation, or are we creating a recipe for disaster?

The dream of decentralised artificial intelligence

The best-known AI models in the world are controlled by a select few companies (OpenAI, Google, Microsoft, Anthropic, DeepSeek et al.), creating a familiar feeling that the AI industry, much like today’s internet, will be dominated by a handful of all-powerful monarchs.

This has fuelled the desire for a more equitable and open AI landscape, and it has attracted some vocal supporters. Stability AI founder Emad Mostaque made headlines when he sensationally quit his role in March 2024, saying he wanted to “pursue decentralised AI” to ensure the technology remains open and accessible to everyone.

Mostaque’s vision resonates with regulators.
In France, Competition Authority chief Benoît Cœuré noted that AI is the first technology to be “dominated by major players from the outset”, and singled out decentralised AI as the only chance to change that state of affairs before it’s too late.

Those who champion decentralised AI argue it will lead to a world where individual developers, students, startups and hobbyists can pool their knowledge, computing resources and data, enabling anyone to participate and resulting in what MIT calls “democratised innovation”.

They also point to transparency as another major benefit: with open AI models running on blockchain, any biased or toxic algorithms can quickly be identified and rejected. In one study, Grayscale Research found that open networks do indeed have the ability to eliminate bias in AI, in stark contrast to the opaque, centralised models used today, which are often referred to as “black boxes”.

Other benefits of decentralised AI include censorship resistance and accessibility. The likes of Google and OpenAI typically bake in content filters, blocking their models from discussing or answering questions on certain topics, and charge for access. While decentralised models may also have content filters, their open nature means these can easily be bypassed. Moreover, no one can charge for access to a decentralised, community-owned model, so use isn’t restricted to those with the financial means to pay.

The general consensus in the decentralised AI community is that the world will be much better off if this technology is collectively owned and open to contributions from every corner of the globe.

The reality might be different

For all of these positives, the decentralised AI industry must run a gauntlet of formidable challenges to live up to this vision.
Bringing AI out of carefully controlled, centralised data centres and letting it loose on a global network owned by everyone opens it up to numerous risks.

One of the most difficult questions concerns data integrity and synchronisation. Mechanisms like federated learning can solve the latter challenge, but they offer little protection against data poisoning, which could skew the outputs of decentralised models. A blockchain layer could perhaps be added to increase transparency, but this brings extra complexity, complicating data processing tasks and slowing down innovation.

In addition, there are well-founded concerns that, while distributed networks mean lower costs and potentially reduced bias, these benefits come at the expense of efficiency, which can hamstring the capabilities of decentralised AI models.

The need for immense computational resources is a barrier too. While Chinese firms like DeepSeek have apparently achieved success with more limited resources, the most sophisticated AI models generally require vast numbers of powerful GPUs. Acquiring those resources, and coordinating them, remains a major challenge for decentralised networks.

That said, there are some promising solutions. For instance, 0G Labs recently announced a breakthrough in the shape of its DiLoCoX framework, which breaks model training down into individual tasks, spreads them across multiple nodes so they can run in parallel, and synchronises the results with the network once the training jobs are completed.
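The underlying idea, nodes training locally in parallel and synchronising only occasionally, can be sketched in a few lines. The snippet below is a generic illustration using a toy least-squares model in NumPy, not 0G’s actual DiLoCoX code; all function names and parameters are hypothetical. It also folds in a simple median-based aggregation step as one possible hedge against the data-poisoning risk mentioned earlier.

```python
# Hypothetical sketch: decentralised training with periodic synchronisation,
# the common idea behind federated learning and DiLoCo-style schemes.
import numpy as np

rng = np.random.default_rng(0)

def local_steps(weights, data, lr=0.1, steps=5):
    """Run a few gradient steps on one node's private data (toy least squares)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def robust_aggregate(updates):
    """Coordinate-wise median: one simple hedge against a poisoned update."""
    return np.median(np.stack(updates), axis=0)

# Simulate five nodes that share a true linear model; one node is malicious.
w_true = np.array([2.0, -3.0])
nodes = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    nodes.append((X, y))

w_global = np.zeros(2)
for _round in range(20):                      # each round: local work, one sync
    updates = [local_steps(w_global, d) for d in nodes]
    updates[0] = np.array([100.0, 100.0])     # node 0 "poisons" its update
    w_global = robust_aggregate(updates)

print(np.round(w_global, 2))  # ends up close to w_true despite the poisoned node
```

Coordinate-wise median is only one of several robust aggregation rules; a real network would also need secure communication, incentives, and some way to verify the work each node claims to have done.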
In this way, 0G claims to be able to train vastly more powerful decentralised models on only limited resources, regardless of the available network bandwidth.

“By enabling the training of massive AI models on slower and cheaper networks, and with more accessible hardware than a high-speed data centre, even smaller businesses and individuals will be able to train their own advanced models with speed and accuracy,” says 0G Labs CEO Michael Heinrich.

However, solutions to the issues around decentralised AI’s security are less apparent. It’s something of a paradox: while decentralised control significantly reduces the risk of a single point of failure, it also expands the attack surface to a potentially unlimited number of endpoints.

Lastly, there are still questions around the governance of decentralised AI models. Who decides which parts of the model should be improved, what guardrails should be built in, and so on? And who is accountable should any problems arise with a decentralised model?

The lack of accountability could lead to a kind of “ethical vacuum”, resulting in massive abuse of decentralised AI models that are every bit as powerful as their centralised cousins, with extremely negative consequences. As a solution, Ethereum’s Vitalik Buterin has proposed a kind of hybrid model, with “AI serving as the engine and humans sitting behind the wheel.” This approach, Buterin believes, would combine AI’s power with human judgment to create a more balanced and decentralised system.

Decentralised AI’s future remains uncertain, and while its development is motivated by grand intentions, the path ahead will be tricky to navigate. For advocates, it’s the only way we’re ever going to democratise AI technology and unlock its true potential.
Critics, on the other hand, point to the ethical challenges and the alarming potential for abuse that stems from the lack of accountability.

Nonetheless, it’s clear that the decentralised AI community is pushing forward in spite of these risks. For believers, the dream of a truly open, transparent, community-led AI industry that’s accessible to all is simply too powerful to ignore. We’ll just have to hope that, as they pursue this dream, they don’t lose sight of the risks and take the time to build the guardrails that can prevent things from getting out of control.

Image source: Unsplash

The post Decentralised AI: Full of promise, but not without challenges appeared first on AI News.