A ‘global call for AI red lines’ sounds the alarm about the lack of international AI policy

On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others agreed on one thing: There should be an international agreement on “red lines” that AI should never cross — for instance, not allowing AI to impersonate a human being or self-replicate. They, along with more than 70 organizations that address AI, have signed the Global Call for AI Red Lines initiative, which urges governments to reach an “international political agreement on ‘red lines’ for AI by the end of 2026.” Signatories include British-Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.

“The goal is not to react after a major incident occurs… but to prevent large-scale, potentially irreversible risks before they happen,” Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), said during a Monday briefing with reporters. He added, “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do.”

The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York. The initiative was led by CeSIA, The Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence. Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly, calling for efforts to “end Big Tech impunity through global accountability.”

Some regional AI red lines already exist. The European Union’s AI Act, for example, bans some uses of AI deemed “unacceptable” within the EU. There is also an agreement between the US and China that nuclear weapons should stay under human, not AI, control. But there is not yet a global consensus.

In the long term, more is needed than “voluntary pledges,” Niki Iliadis, director for global governance of AI at The Future Society, told reporters on Monday. Responsible scaling policies adopted within AI companies “fall short for real enforcement.” Eventually, an independent global institution “with teeth” will be needed to define, monitor, and enforce the red lines, she said.

AI companies “can comply by not building AGI until they know how to make it safe,” Stuart Russell, a professor of computer science at UC Berkeley and a leading AI researcher, said during the briefing. “Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it.”

Red lines do not impede economic development or innovation, as some critics of AI regulation argue, Russell said. “You can have AI for economic development without having AGI that we don’t know how to control,” he said. “This supposed dichotomy, if you want medical diagnosis then you have to accept world-destroying AGI — I just think it’s nonsense.”