Sam Altman Admits He’s Made a Huge Mistake


OpenAI CEO Sam Altman went into full damage control mode over the weekend. A day before the United States attacked Iran, the embattled CEO announced that the company had signed a new agreement with the Pentagon over how its AI models could be used, and the blowback is clearly hitting the company’s bottom line, because Altman is sounding deeply defensive.

Many users saw the move as an attempt to swoop in and yank a multibillion-dollar government contract from the clutches of its rival, Anthropic. Last week, Anthropic CEO Dario Amodei refused to give in to the Department of Defense’s demands, drawing a line in the sand and insisting that its AI models may not be used for autonomous killing machines or mass surveillance of Americans, a decision lauded by many users of its chatbot Claude.

Regardless of the genuineness of Amodei’s continued assurances (there are plenty of reasons not to take billionaire CEOs at their word), OpenAI effectively handed Anthropic a major PR victory. The shifting dynamic triggered a mass exodus from OpenAI’s ecosystem, with uninstall rates of ChatGPT spiking 295 percent day-over-day on Saturday, the day after OpenAI announced its deal with the Pentagon.

Now, Altman is continuing his apology tour, conceding in a lengthy tweet on Monday evening that OpenAI “shouldn’t have rushed” its Department of Defense deal.

After what many saw as OpenAI caving to the Pentagon’s wishes, Altman claimed that the company would be altering the terms of the deal after the fact, a bizarre twist that likely won’t sit well with Trump’s military or the company’s already disillusioned customers. Specifically, Altman said OpenAI would “amend our deal” to prohibit the “deliberate tracking, surveillance, or monitoring of US persons or nationals.”

“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” the CEO wrote.
“We will work through these, slowly, with the [Department of War], with technical safeguards and other methods.”

It’s important to note that Altman’s tweet makes no mention of autonomous AI-enabled weapon systems, the other key issue that led to the rift between Anthropic and the Department of Defense. Whether such systems are on the table for OpenAI remains unclear, but given Altman’s language, it’s certainly not out of the question.

It also remains unclear whether the Defense Department will agree to the revised terms, or whether it had originally agreed to accommodate OpenAI’s terms and not Anthropic’s, as CNBC points out.

At the very least, Altman admitted the optics of his eleventh-hour amendment were abysmal.

“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he wrote. “Good learning experience for me as we face higher-stakes decisions in the future.”

Altman also called on the government not to designate Anthropic a supply chain risk to national security. After Anthropic refused to sign the deal with the Pentagon, defense secretary Pete Hegseth announced on February 27 that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military,” Hegseth fumed. “That is unacceptable.”

Whether Altman’s latest mea culpa will meaningfully address OpenAI’s growing PR crisis is dubious at best. For many users, the damage has already been done. Besides, it’s not a one-horse race, with ChatGPT increasingly lagging behind every other leading AI company’s models on LLM leaderboards.

“People are p*ssed and there are better products,” one Reddit user wrote. “It’s a recipe for disaster.”

But Anthropic’s Amodei may not be a knight in shining armor, either.
As the Wall Street Journal reported over the weekend, the Department of Defense selected targets in Iran using Anthropic’s Claude chatbot, highlighting the AI company’s preexisting ties with the military. In other words, while Amodei told CBS News in a carefully timed interview on Sunday that mass surveillance and autonomous weapons are the “two red lines” the company has had “from Day One,” Anthropic had already signed a prior deal with the Pentagon that let the military use Claude to help execute deadly attacks.

More on the contract: Sam Altman in Damage Control Mode as ChatGPT Users Are Mass Cancelling Subscriptions Because OpenAI Is “Training a War Machine”