The UN’s AI warnings grow louder

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. We’re publishing these editions both as stories on Time.com and as emails. It was a busy week for our team: Tharin Pillay was on site during the UN General Assembly in New York, while Harry Booth and Nikita Ostrovsky were at the “All In AI” event in Montreal. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know: The UN Takes On AI

AI takes the podium — The United Nations General Assembly met this week in New York. While the assembly members spent much of their time on the crises in Palestine and Sudan, they also devoted a good chunk to AI. On Monday, Nobel Peace Prize laureate Maria Ressa called attention to a campaign for “AI Red Lines,” imploring governments to come together to “prevent universally unacceptable risks” from AI. More than 200 prominent politicians and scientists, including 10 Nobel Prize winners, signed the statement.

“A new curtain” — On Wednesday, the Security Council held an open debate on “artificial intelligence and international peace and security.” Over three hours, each country took turns delivering roughly the same spiel: that AI holds the potential for both good and harm. Over and over, representatives declared that AI was not sci-fi but a fact of modern life, and that international regulatory guardrails needed to be developed immediately, especially around autonomous weapons and nuclear weapons.

One of the most interesting perspectives came from Belarus, which called attention to the growing global inequities in AI development. “There is a new curtain being created, not ideological this time, but technological … to divide the West and the other part of the world, and to bring the global majority countries into an era of neocolonialism,” the representative said. “This is leading to a deadlock and abyss. AI should be available and accessible to all countries barring none.”

New institutions — This section was filed by Tharin Pillay from UN Headquarters. On Thursday, the UN hosted a three-hour “informal meeting” launching the Global Dialogue on Artificial Intelligence Governance, a new forum for all UN member states, as well as private-sector companies and civil society organizations, to coordinate on AI governance. “For the first time, every country will have a seat at the table of AI,” said UN Secretary-General António Guterres. Guterres also announced that nominations were now open for candidates to join the new International Independent Scientific Panel on AI, designed to provide impartial scientific evidence on the impacts of the technology. “I will soon begin consultations with Member States, potential funders and partners on the establishment of a Global Fund for AI Capacity Development,” he added.

This was followed by Bhutan, the UAE, and China sharing accounts of how AI is bolstering economic growth, and a litany of “interventions” from government ministers, tech executives, and representatives from academia and civil society, many of them members of the TIME 100 AI community, such as Nigeria’s minister Bosun Tijani, godfather of AI Yoshua Bengio, and AI Now Institute director Amba Kak. Common calls came for gaps to be bridged, capacity to be built, benefits to be shared, stakeholders to be consulted, and inequalities to be redressed. “The rise of AI is unstoppable,” said Pedro Sánchez, Spain’s Prime Minister. “But it cannot be ungovernable.”
Is anyone listening? — It’s unclear whether any of this will have an actual impact on AI development, as Silicon Valley companies are not bound by UN advisories. But the UN’s convening does represent a more societally holistic and globally inclusive path forward for the technology.

Tracking SB 53

Take two — Last year, California Governor Gavin Newsom vetoed a state AI safety bill, SB 1047, and told researchers to come up with an alternative. They did, and lawmakers took their recommendations and passed an updated, watered-down version, SB 53. Once again, all eyes turned to Newsom: Would he stay true to his word and support this version, or cave to a tech industry crying that the legislation went too far?

Widespread support — While Newsom has yet to put pen to paper, he signaled his support for the bill onstage in New York on Wednesday. “We have a bill—forgive me, it’s on my desk—that we think strikes the right balance,” he said. “We worked with industry, but we didn’t submit to industry.” SB 53 is important because it carves out whistleblower protections for AI employees and requires the largest developers to publish safety plans and report safety incidents. Anthropic supports it, as does Dean Ball, who worked on Trump’s AI Action Plan.

An incremental win — AI safety groups are celebrating the bill as a major victory, even though it lacks many of the most stringent protections that SB 1047 carried last year. “Policymaking necessarily is about compromising. You want to push the envelope as far as you can, but you can’t give up on incrementalism,” says Sacha Haworth, executive director of the Tech Oversight Project. “The whistleblower protections are tremendous, and the fact that the regulations in SB 53 apply to the largest AI developers is crucial.”

TIME in Action

One of the signatories of the “AI Red Lines” campaign was Yoshua Bengio, who, as fate would have it, was interviewed onstage by TIME’s own Harry Booth at the All In conference in Montreal on Wednesday. (The conference is unrelated to the popular tech podcast of the same name.) The pair spoke about the development of reasoning models, governments’ pivot away from safety, and Bengio’s new nonprofit, LawZero, which aims to redesign AI safety in the face of commercial imperatives.

“It’s not that these systems are going to kill anyone tomorrow,” he told Harry. “But future generations of these systems, if science continues to advance, will have stronger and stronger reasoning abilities, more and more knowledge.”

“And if we can’t make sure they act according to our norms, then they could be used by ill-intentioned people to do immoral things, and we could lose control of them,” he continued. “So even if we don’t have certainty about these possibilities, the stakes are so high that we need absolutely to make sure even a low-probability accident is not going to happen.”

You can watch the entire interview here.

What We’re Watching

I just watched the movie Soundtrack to a Coup d’Etat, which was nominated for Best Documentary at the Oscars this year. While the film is not about AI, I found many resonances between its Cold War story and this AI moment, especially in its depiction of two global superpowers fighting for control of resources and mindshare.

The film centers on the Congo, which was strategically important to both the U.S. and the U.S.S.R. for its uranium mines, which were essential to building atomic bombs.
In the movie, footage shows Belgian Premier Gaston Eyskens saying that the occupation of the Congo was necessary “not to satisfy colonial or imperial aspirations but to complete a mission of civilization for the benefit of a less developed people … for its salvation and ascension.” That language sounds awfully similar to the rhetoric tech leaders have been promoting, especially when they stress that AI development is just as important to national security as nuclear energy.

As tech leaders deploy strategies and narratives that echo those of colonialism (read Karen Hao for more), the Congo is once again in the crosshairs, with miners harvesting the cobalt that powers smartphones, computers, and electric vehicles in slave-like conditions.

You can rent Soundtrack to a Coup d’Etat for $4 on YouTube.

With reporting by Tharin Pillay/New York.