Anthropic’s Ethical Stand Could Be Paying Off

At first glance, last week looked like a catastrophe for Anthropic. The AI company refused to let the U.S. government use its products to surveil the American public or direct autonomous weapons without human oversight. In response, the Department of Defense canceled its $200 million contract. On Truth Social, President Trump called the company “leftwing nut jobs” and ordered every federal agency to immediately stop using its products. Defense Secretary Pete Hegseth went a step further, designating Anthropic as a “Supply-Chain Risk to National Security.” OpenAI, Anthropic’s chief rival, quickly signed its own deal with the Pentagon.

Anthropic’s principled stand continues to pose enormous risks for the company. But some early indications suggest that it just might pay off.

The company’s confrontation with DOD has proved more effective than some of the world’s most expensive advertising—at least according to one metric. After a Super Bowl campaign earlier this year, Anthropic’s AI model, Claude, became one of the top 10 most-downloaded free apps in America, per Apple’s charts. The day after Hegseth announced that the government was severing ties, it took the No. 1 spot, a position it still holds as of this writing. Downloads have topped 1 million a day, according to Anthropic’s chief product officer. A spokesperson told me that the company “has broken its own sign-up record every day since early last week, across every country where Claude is available.”

[Read: Inside Anthropic’s killer-robot dispute with the Pentagon]

Users aren’t just signing up for Claude—they are also abandoning OpenAI (which has a corporate partnership with The Atlantic). Uninstalls of ChatGPT, OpenAI’s flagship app, spiked 295 percent on February 28, as details of OpenAI’s deal with the Pentagon emerged. One-star reviews surged nearly 800 percent, and five-star reviews fell by half.

Perhaps more consequential, Anthropic has gained the trust and admiration of engineers across the AI industry. Letters of support for the company are circulating among its competitors’ employees. One such letter had some 850 signatures as of Monday. Many of these employees are demanding that their companies show solidarity with Anthropic and honor the same red lines. Some have reportedly threatened to leave if those demands are not met.

Anthropic has won admiration outside Silicon Valley too. Before the company’s clash with DOD, former Republican Representative Denver Riggleman, who now leads a cybersecurity firm, was preparing to pick an AI firm to partner with. He was considering a range of options; Anthropic’s stand narrowed them to one. Riggleman has since directed his company to work with Anthropic on all future projects. “Anthropic had its nonnegotiables,” he told me, and “we have ours.”

Drawing from his experience on a congressional AI task force focused on foreign adversaries, Riggleman thinks that Hegseth’s decision to label Anthropic a supply-chain risk will likely be overturned in court. The U.S. government has never applied the label to an American company, typically reserving it for corporations run by hostile foreign actors, such as Huawei. Moreover, this is the first time that the label appears to have been used in retaliation for a business declining contract terms. “To say it rests on shaky legal ground,” Riggleman said, “would be generous.”

The former congressman once trusted his country to regulate technologies that had the power to reshape Americans’ lives. “These days,” Riggleman said, “the government is no longer creating those safeguards—it’s destroying them.” He continued, “I don’t think we appreciate yet, as a society, what it means to have private firms controlling this amount of information about citizens.”

The Department of Defense has said that the contract it offered Anthropic contained adequate safeguards, in part because the text limited AI’s uses to “all lawful purposes.” Anthropic argued that this clause wasn’t sufficient—that a new executive order or reinterpretation of statute could shift the existing legal boundaries. “We don’t want to sell something,” Anthropic CEO Dario Amodei said, “that could get our own people killed, or that could get innocent people killed.”

OpenAI has contended that its subsequent deal with the Pentagon is safer than Anthropic’s. Its contract does appear to prohibit mass surveillance and autonomous weapons. But it retains the “all lawful purposes” language, rendering that prohibition dependent on DOD’s willingness to respect legal norms. Even Sam Altman, OpenAI’s CEO, conceded that the deal was “definitely rushed” and that “the optics don’t look good.” On Monday, the company said it had added restrictions to the contract regarding surveillance, but critics are skeptical that they will prove any more binding.

[Read: OpenAI is opening the door to government spying]

The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit.