AI Regulation Is Not Enough. We Need AI Morals


Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen, have mocked such calls. That is a mistake. We don’t just need AI regulation; we need AI morals.

Every technology carries a philosophy, whether we care to admit it or not. The printing press spread knowledge and weakened hierarchies. Electricity dissolved distance. The internet shattered the boundary between public and private life. Artificial intelligence may prove the most revealing yet, because it forces us to confront what, if anything, is uniquely human.

Governments are scrambling to keep up. The European Union’s AI Act is the most ambitious attempt so far to regulate machine learning; the United States has produced its own orders and plans. Industry leaders speak loudly of “guardrails” and “alignment.” The language of safety dominates, as though ethics were a checklist that could be coded and deployed.

Rules are necessary. They can limit harm, deter abuse, and provide accountability. But they cannot tell us what kind of world we want to build. Regulation answers how; it rarely answers why. Ethics treated as compliance becomes sterile, a process of risk management rather than moral reflection. What is missing is not another rulebook but a moral compass.

The deeper question is not whether machines can think, but whether humans can still choose. Algorithms already shape what we read, where we invest, and who or what we trust. The screens we’re glued to every day influence emotions and elections alike. When decisions are outsourced to data models, moral responsibility drifts from the human to the mechanical. The danger is not that machines develop too much intelligence, but that we stop exercising our own.

Technologists often describe ethics in computational terms: alignment, safety layers, feedback loops. But conscience is not a parameter to be tuned. It is a living capacity that grows through empathy, culture, and experience. A child learns right from wrong not through logic but through relationship: by being loved, corrected, and forgiven. In the same way, a child learns accountability: that actions carry consequences, and that responsibility is inseparable from choice. That is the essence of human moral growth, and it cannot be replicated by computation.

Artificial intelligence will force a new reckoning with human dignity, a concept older than any technology yet curiously absent from most conversations about it. Dignity insists that a person’s worth is intrinsic, not measurable in data points or economic output. It is a principle that stands against the logic of optimization. In a world built on engagement metrics, dignity reminds us that not everything that can be quantified should be.

Capital has an especially powerful role here. What gets funded gets built. For decades, investors have rewarded speed and scale: growth at all costs. But the technologies emerging today are not neutral tools; they are mirrors, reflecting and amplifying our values. If we build systems that exploit attention or reinforce bias, we cannot be surprised when society becomes more distracted and divided.

Ethical due diligence should become as routine as financial due diligence. Before asking how large a technology might become, we should ask what kind of behavior it incentivizes, what dependencies it creates, and who it leaves behind. That is not moral idealism or altruism; it is pragmatic foresight. Trust will be the scarce commodity of the AI century, and it cannot easily be bought (or coded) back once lost.

The challenge of our time is to keep moral intelligence in step with machine intelligence. We should be using technology to expand empathy, creativity, and understanding, not to reduce human complexity to patterns of prediction. The temptation is to build systems that anticipate every choice. The wiser path is to preserve the freedom that allows choice to mean something.

None of this is to romanticize the past or resist innovation. Technology has always extended human capability, usually for the better. Today, we must ensure that AI extends human potential rather than dilutes it. That will ultimately depend not on what machines learn but on what we remember: that moral responsibility cannot be delegated, and that conscience, unlike code, cannot be run on autopilot.

The moral project of the coming decade will not be to teach machines right from wrong. It will be to remind ourselves. We are the first generation capable of creating intelligence that can evolve without us. That should inspire not fear but humility. Intelligence without empathy makes us cleverer, not wiser; progress without conscience makes us faster, not better.

If every technology carries a philosophy, let ours be this: human dignity is not an outdated concept but a design principle. The future will be shaped not by the cleverness of our algorithms but by the depth of our moral imagination.