AI’s Imperial Agenda

After OpenAI CEO Sam Altman launched ChatGPT in 2022, the race for dominance in the field of artificial intelligence hit warp speed. Silicon Valley has poured billions of dollars into developing AI, building data centers, and promising a future free from the chains of unfulfilling work across the globe.

But in “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” tech reporter Karen Hao pulls back the curtain, unveiling the human and environmental cost of artificial intelligence and the colonial ambitions undergirding Silicon Valley’s efforts to fuel the rise of AI.

This week on The Intercept Briefing, host Jessica Washington speaks to Hao about her book and the dawn of the AI empire. “Empires similarly consolidate a lot of economic might by exploiting extraordinary amounts of labor and not actually paying that labor sufficiently or at all,” says Hao. “So that’s how they are able to amass wealth — because they’re not actually distributing it.”

“The speed at which they’re constructing the infrastructure for training and deploying their AI models” is what shocks Hao the most, as “this infrastructure is actually not technically necessary, and … somehow the companies have effectively convinced the public and governments that it is. And therefore there’s been a lot of complicity in allowing these companies to continue building these projects.”

“They have effectively been able to use this narrative of [artificial general intelligence] to accrue more capital, land, energy, water, data. They’ve been able to accrue more resources — and critical resources — than pretty much anyone in history,” Hao says, warning of the “completely aggressive and reckless” growth of AI infrastructure. But she stresses that none of this is inevitable: “There is a very clear path for how to unlock the benefits of AI without accepting the colossal cost of it.”

Listen to the full conversation on The Intercept Briefing on Apple Podcasts, Spotify, or wherever you listen.

Transcript

Jessica Washington: Welcome to The Intercept Briefing. I’m Jessica Washington.

In 2022, Sam Altman’s company OpenAI launched ChatGPT, an AI chatbot that unleashed a wave of excitement over artificial intelligence. And it kickstarted a race for dominance in the field. Tech CEOs from Altman at OpenAI to Mark Zuckerberg at Meta and Alex Karp at Palantir have lauded artificial intelligence as the “future” of humanity.

During a New York Times New Work Summit in 2019, years ahead of OpenAI’s launch of ChatGPT, Altman predicted that artificial intelligence could “eliminate poverty.”

Sam Altman: It can be great. We have the potential to eliminate poverty, solve climate change, cure a huge amount of human disease, like educate everyone in the world phenomenally well.

JW: In a more recent CNBC interview, Palantir CEO Alex Karp claimed that AI made the United States the “dominant country in the world”:

Alex Karp: AI makes America the dominant country in the world. So just start there. Every other country in the world — like, I spent half my life in Europe — they’re whining and crying. We have the right chips. We have the right software. We have the right engineers. We have the right culture.
We have the right people.

JW: And in a video posted to Facebook unveiling Meta’s new AI research lab in July, Meta CEO Mark Zuckerberg promised to develop personal “superintelligence” that would free its users to focus on what truly matters.

Mark Zuckerberg: Advances in technology have freed much of humanity to focus less on subsistence and more on the pursuits that we choose. And at each step along the way, most people have decided to use their newfound productivity to spend more time on creativity, culture, relationships, and just enjoying life. And I expect superintelligence to accelerate this trend even more.

JW: Only — what if these utopic visions mask a far darker reality?

In “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” Karen Hao exposes the underlying reality of the lofty promises made by Sam Altman and the tech industry. Hao reveals the human toll of artificial intelligence, from its extreme water usage to its exploitation of data laborers to AI companies’ disturbing resemblance to the colonial empires that ravaged the planet for centuries.

Joining me now to discuss “Empire of AI” and Silicon Valley’s grip on our world is Karen Hao. Karen, welcome to The Intercept Briefing.

Karen Hao: Thank you so much for having me, Jessica.

JW: Before we begin, we should mention that The Intercept is a party in a lawsuit against OpenAI for allegedly using copyrighted materials to train ChatGPT.

So, Karen, of all of the tech CEOs in the artificial intelligence rat race to profile, why Sam Altman, and why OpenAI?

KH: So I actually didn’t set out to write an OpenAI book. I was trying to write a book about these parallels that I had been documenting for several years between the AI industry and colonialism. And I realized, as I was putting together that idea, that in order to really illustrate how every single thing we know about AI today entered the public consciousness, I had to trace the history of OpenAI, because those decisions were made within that company. The fact that we associate AI in the public mind with large language models, with ChatGPT, with these colossally consumptive technologies that need massive amounts of data and massive amounts of data centers — those were all because OpenAI made certain choices. And Sam Altman was at the helm of the company when it made many of those choices. So yeah, I would say the book is not just a history of OpenAI; it’s really a history of the modern-day AI boom.

JW: As you’ve alluded to, in the book you masterfully, in my opinion, weave the promises of Silicon Valley against the backdrop of its impact on the communities that host its data centers and feed other parts of the AI machine. What made you want to tell these two stories alongside each other, instead of writing just a tech book, or just a book about the impact?

KH: I’ve always felt that the most important question on people’s minds about technology, or about AI, is just: How is it going to affect their lives? And the only way to really tell that story is to ground it in the experiences of people who have already been affected by the development of the technology, because they are the canaries in the coal mine, so to speak, for how the rest of the world is going to experience it.
And if you only tell the story from the perspective of San Francisco, from the tech companies themselves and the elites who run them at the top, you’re largely going to get a story about the technology working, because it’s designed by these people, for these people.

But that’s not actually the real, full scope of the story. And so philosophically, in a lot of my reporting even before the book, I’ve always believed that you really start to see where things fall apart when you go furthest away from Silicon Valley, to the places that work fundamentally differently from SF and from the U.S., with people speaking fundamentally different languages, who look different, who have a different history and culture.

And that is actually more indicative of how the average person is going to ultimately be impacted by this technology, because San Francisco is a really weird place. It’s an extreme bubble. There’s an extraordinary amount of wealth that is pretty much not replicated anywhere else in the world. There’s an incredible amount of homogeneity.

And so that’s why I wanted to interweave the inside story, the ideology of these people, and the decisions and the context in which they make those decisions, but then quickly expand to the far reaches of the empire, as I call it, to document how it’s really going to affect the vast majority of the world.

JW: Yeah, I want to dive into the empire of it all. The obvious through line of your book is colonialism and the ways in which these AI companies and tech companies resemble the colonial empires of old. And I’m curious: How do you see the comparisons, and where do they differ?

KH: Yeah, I mean, there are honestly so many comparisons, but I really focus on four in the book. The first one is that empires consolidate an extraordinary amount of wealth and power in part by just taking a lot of resources that are not their own. That refers to the intellectual property — as The Intercept knows well — that they take to train their models without any credit or compensation. That also means taking the private data that people might leave in places like a Flickr photo album, never realizing it could get hoovered up into these image generation tools.

The second parallel: Empires similarly consolidate a lot of economic might by exploiting extraordinary amounts of labor and not actually paying that labor sufficiently or at all. So that’s how they are able to amass wealth — because they’re not actually distributing it. And I talk in my book extensively about the ways that the industry does exactly the same thing with workers in Kenya, or workers in crisis in Venezuela, who are doing some of the lifeblood data annotation tasks that the AI industry needs to thrive but who see only a couple of dollars a day, or nothing at all, for that kind of work.

The third parallel is that empires always engage in this kind of control of information flows in order to perpetuate their ability to continue expanding unfettered. And we see this in the industry as well, where most AI researchers today are either employed by the companies or bankrolled by the companies in some way. And so the entire research agenda and AI development agenda has been completely distorted by the empire’s agenda, and any research that reveals inconvenient truths is actively censored.
So we don’t have a true scientific picture of the limitations and capabilities of these technologies.

And then the final parallel is that empires engage in this narrative that they have to exist because of a moral or existential imperative. They are the “good” empire that’s on a civilizing mission to bring progress and modernity to all of humanity, and they’re competing with an evil empire that’s trying to bring about the demise of humanity.

In OpenAI’s history, there have been many examples of it framing Google as the evil empire. Now, Silicon Valley largely says China is the evil empire. And the idea is that if the evil empire crosses the finish line, then we’re going to end up in an AI hell. They say AI could kill us all, or that AI is going to lead to complete, total authoritarianism in the wrong hands.

Whereas when the good empire crosses the threshold first, we end up in this utopia — eliminating poverty, curing cancer, all of the things that you mentioned in the beginning, which are their common talking points.

JW: Yeah. One thing that strikes me about these empires, as opposed to older ones like the British Empire, is the pace at which they’re moving and the pace at which things are changing.

We’re in a vastly different landscape when it comes to AI than we were a year ago, or arguably even a month ago. Did you predict the pace at which this technology would proliferate, and the kind of full-throated embrace of it from people in power in both parties? Or is there something that’s surprising you about where we’re at now?

KH: I’m definitely really shocked at the pace. And you’re 100 percent right that one of the key differences between the classical empires of old and the empires of AI is just the sheer speed. The British Empire moved at the pace of ships. The empires of AI are moving at the pace of bits. They can make like 60 decisions in an hour that affect billions of people around the world.

But the thing that has shocked me the most is the speed at which they’re constructing the infrastructure for training and deploying their AI models. Part of the shock is that this infrastructure is actually not technically necessary, and so I’ve been shocked that somehow the companies have effectively convinced the public and governments that it is, and therefore there’s been a lot of complicity in allowing these companies to continue building these projects.

But the other shock is just that what they’re trying to do is insane. It is hard to explain just how baffling the scale is. Sam Altman has recently said that he aims to build 250 gigawatts of data centers by 2033, which he estimates would cost $10 trillion. And when you just think about that figure of $10 trillion, that’s already insane. Most people in the world have never encountered 10 trillion of anything, let alone dollars. And sometimes I feel like that’s a strategy: get people so shocked or confused by these large numbers that they can’t even wrap their minds around them, which allows the companies to continue doing what they’re doing. But 250 gigawatts is also an insanely baffling number, because New York City on average draws 5.5 gigawatts of power.
So what he’s talking about is constructing almost four dozen New York Cities’ worth of data centers around the world to power and train his AI technologies.

And Meta has talked about building supercomputers where the facilities are almost the size of Manhattan. So this is the largest infrastructure build-out that humanity has ever seen, and it’s being controlled by a tiny group of people who are aggressively trying to build it out in communities around the world, many of which actually do not want this infrastructure. Huge protests have started breaking out all around the world and all across the U.S. And so that’s the thing that has shocked me: just the completely aggressive and reckless nature of the growth.

JW: When you talk about the growth, the first thing that comes to mind for me is the impact of that growth and what that could mean. Your book gets into some of these direct environmental harms. When we’re talking about building out the kinds of infrastructure that Sam Altman is talking about, what are those harms?

KH: With these data center facilities, one of the harms is that the energy is coming from fossil fuels. When he was testifying in Congress, even Sam Altman admitted that in the short term it would likely come from natural gas. From reporting, we’ve also seen that it comes from coal. There are coal plants that were meant to be retired that are now having their lives extended, because utilities need to meet an energy demand that they cannot meet with any other energy source.

And essentially, we are starting to see the AI industry provide a lifeline for the fossil fuel industry. So it’s bringing extraordinary amounts of emissions into the air. Those emissions are also pollutants, so it’s most often polluting working-class and rural communities. There has been phenomenal reporting on Memphis, Tennessee, which hosts Colossus, the supercomputer that Elon Musk built to train Grok. It’s being powered by 35 methane gas turbines that are pumping toxins into the air of a community that has a long history of environmental racism and of being denied the fundamental right to clean air.

Then you have to talk about the fact that these data centers also require fresh water to cool the facilities. If they’re going to use water, it needs to be fresh water, and even drinking water, because any other type of water would lead to corrosion of the equipment or to bacterial growth. And so you often see in proposals for data centers a request from the company to the local government for potable water — to connect directly to the city drinking water supply.

And many of these facilities are being put in places that don’t have that drinking water to spare. There was a Bloomberg investigation that found that two-thirds of these data centers are going into already water-scarce areas. So there are communities that are actively competing with this computer infrastructure for life-sustaining resources. It’s basically layer upon layer of environmental and public health crises that are already underway and that are being massively accelerated by this push.

JW: With the Trump administration moving to massively deregulate a lot of environmental protections, do you expect these costs to grow?

KH: I do, and it’s not just the deregulatory stance.
The Trump administration, and actually the Biden administration before it, enabled data centers to be built on federal lands. So the federal government has been aggressively using all of the different mechanisms it can to facilitate the recklessness of the tech industry.

And of course, Trump also signed an executive order that tries to neuter state AI regulation as well. So it’s not only deregulating federal laws, but also trying to prevent any states from stepping into the vacuum. And so with all of the trends that we see, if the public did nothing about it — if there was no contestation, if there were no protests, and everyone just sat back and allowed this trajectory to barrel forward — I absolutely think that it could get worse. But I also think that there is an incredible amount that people can in fact do, in the absence of leadership at the top, to show leadership from the bottom.

[Break]

JW: There’s been some public pushback to your water usage calculations, primarily from supporters of artificial intelligence. Andy Masley, executive director of Effective Altruism DC, published a Substack post in November questioning some of your data around water usage, and you recently issued two changes to your book regarding the water footprint data. I wanted to give you a moment to respond to that critique.

KH: Yeah, for sure. Andy brought up some very valid criticisms. One was on a particular data point that, after he raised the criticism, we investigated and realized was wrong. This was a data point that appears in Chapter 12 of my book, where we describe a proposed Google data center in Cerrillos, Chile, on the outskirts of Santiago. In that particular case study, I was trying to explain the water impact that this facility would have within the community by comparing it to the water use of that community. And basically what happened was that the government document stating the water usage of the community had a unit error. Instead of quoting the numbers in cubic meters, as it should have, it quoted them in liters. One cubic meter is 1,000 liters, so the document understated the water use of the community by a factor of 1,000, which meant that when I then divided the data center’s proposed water usage by what the document said was the community’s usage, my comparison was off by a factor of 1,000.

And so the corrected statement is that this proposed Google data center could use more water than the population of the town — which is already substantially bad. But in the error of the calculation, I had said that it was going to be more than 1,000 times what the town uses, which is just incorrect. I worked with my Chilean collaborator to get to the bottom of it: We contacted the Chilean government agency that had issued the document and confirmed that it was in fact a unit error. We issued the correction.

The second change I made, also based on Andy’s feedback, was that part of my citation of a study about the overall water impact of AI used the wrong terminology. I had said that AI was going to lead to a certain amount of “water consumption.” But there’s a technicality: “Water consumption” is not the same as “water use,” and I should have used the term “water use,” because with data centers, “consumption” means that the water is evaporated and just disappears.
Whereas “water use” means that the water runs through the system but then exits it. That’s not to say it’s completely unchanged: It can have a lot more pollutants in it, and it can have a higher temperature, and it might not actually be able to return safely to the environment. But it’s different from pure evaporation.

So I made that change as well and added some more language to explain that the study was referring to the water impact of data centers both in terms of the water used to cool the facilities and the water used to generate the electricity that powers them, because that is also a hugely important part of the water footprint of data centers.

Those changes will be made in the next reprint of the physical edition and will also be made in the digital and audiobook editions.

JW: Thank you for explaining that. I want to switch gears to one of my favorite chapters of your book, where you talk about the concept of intelligence and this kind of mythical idea of superintelligence. What is superintelligence, and is it just something that tech CEOs are saying to sound futuristic?

KH: [Laughs] So superintelligence, colloquially, I guess refers to a theoretical point at which AI exceeds human intelligence. That’s why it’s called superintelligence. And the problem with this term is that there is no scientific consensus around what human intelligence is.

There’s a long history of trying to define and quantify human intelligence. Much of it is a very dark history, motivated by the desire to show through “scientific means” that certain races are superior to others. And we’ve never landed on one test that definitively proves what the marker of intelligence is.

And so superintelligence is just a totally unmoored concept. And indeed, this is very useful for executives when they want to market themselves: Because there is no definition around this term, they can just define it however they want. They do the same thing with the term artificial general intelligence — which also, what does that mean? It’s supposed to be the point right before superintelligence, when the AI system theoretically matches human intelligence.

And you see OpenAI define and redefine AGI constantly, based on what it wants to do next. When Sam Altman is talking with consumers, he says AGI is going to be this amazing digital assistant that’s going to solve all your problems — because he wants those people to buy it. When he is talking with Microsoft, it’s different: The Information reported at one point that in the agreement between OpenAI and Microsoft, they define AGI as a system that can generate $100 billion of revenue. When Altman is talking to Congress, he says AGI is going to cure cancer and eradicate poverty, and so on and so forth, to try and ward off regulation.

And so you can see that it just shape-shifts based on the audience that needs to be convinced in that moment for the company to continue its agenda.

JW: Speaking of promises made by the tech industry about AI, one of the biggest is that it’s going to give people their time back to use on more fulfilling activities, and that AI will essentially eliminate the need to work, since the expectation is that it’s going to take our jobs.

How exactly is that going to help people who then lose their income?
Is the government supposed to step in and sufficiently take care of people, or are the titans of this industry going to pay more taxes to take care of people? I guess: What is the promise, and what are they saying we’re going to have in the future that’s supposed to be so great?

KH: [Laughs] Right. The answer is, they promise whatever they need to promise to convince whoever they need to convince. So the promises keep shape-shifting, but generally they fall along the lines of, “There’s going to be so much abundance that we’re not going to have competition for resources anymore. Everyone’s going to live wild and free, and it’s going to be amazing, and, like, all science will be solved.” But the fine-grained details of this vision are not there.

It’s interesting: In OpenAI’s early years, they explored the idea of instituting some kind of tax structure under which, if an AI company had windfall profits, there would be a ceiling on how much it could keep, and the rest would be redistributed as universal basic income to everyone. That’s as far as I’ve ever seen anyone in the industry go toward actually articulating a mechanism by which everyone gets a piece of the pie. But of course, this was in OpenAI’s very early days, and we’ve never heard about this proposal since.

And what we’re actually seeing instead is the complete opposite, right? We are currently seeing these companies get more and more and more and more wealthy, while the average American is struggling more and more with an affordability crisis, with inflation, with job loss — sometimes driven by AI.

And we are in a moment right now where the economy is K-shaped: All of the AI-related stocks are flying, while everything else is going south. And this, I think, is the clearest signal that we have of the true tally of what AI, in Silicon Valley’s conception of it, is actually delivering us, and will continue to deliver us if we allow the empires to continue on.

JW: In that vein, there’s been this growing concern that we’re in an AI bubble: that companies are overvalued and overspending on data centers and on microchips. What do you make of that concern and the way that tech leaders are responding to it?

KH: I think we’re in a huge bubble, and I’m deeply worried about what might happen if that bubble pops, especially about the ripple effects it would have on average people, because the people at the top are going to be fine. They are not going to be the ones suffering from the fallout that could happen with a market correction. But of course, the industry leaders are trying to project that we’re not in a bubble. They’re trying to project continued confidence in the fact that their technology is going to lead to continued crazy GDP growth that will somehow get redistributed to the average person. But I think average Americans are starting to realize that this is totally not true.

And that’s why we’ve seen in the past few months that the attitude toward the AI industry, and toward the way these companies are developing AI in particular, has really soured: because people are actually experiencing their kids being harmed, or worrying that their kids will be harmed.
They’re seeing data centers pop up in their communities that could hike up their utility bills or potentially contaminate their water, and they didn’t have any say in those projects.

They’re seeing a shrinking job market, where they might themselves have been laid off in part because an executive says the company is pursuing an AI strategy. And so I think, as much as the executives are really trying to create this veneer that everything is fine, most people know that it’s not fine.

JW: As you’ve mentioned throughout this conversation, we’ve been focusing on the effects of AI outside of Silicon Valley. But there are red flags, as you’ve noted, in San Francisco and the larger Bay Area in California, where wealth inequality has grown exponentially as the tech industry has grown over the last 15 years. How do you view what we’ve seen as a microcosm in that region against the backdrop of this kind of larger exploitation?

KH: This is something that I think about all the time, because I used to live in San Francisco. And part of the reason why I left the tech industry and ended up becoming a journalist was that I felt like what I was seeing in San Francisco was really a manifestation of the real ideology that undergirded the industry. There is this extraordinary amount of wealth: Bloomberg reported at one point that the AI industry is minting billionaires faster than any other industry in history. And there’s been reporting about how this year, 2026, is going to see some massive IPOs that will create even more extraordinary wealth than we’ve ever seen in this town.

And yet at the same time, there’s rampant homelessness there. There’s a huge housing crisis in general, and there is almost an obliviousness among the people within the industry to the things that happen at their very doorstep. And it’s just so crazy to me that they can talk about all these utopic, lofty goals of solving science and eradicating poverty — when they haven’t eradicated poverty in their own town. They haven’t done anything to solve the social ills within their own town, and in fact, they’ve only done things to make it worse.

JW: On that point, what is their larger goal? What do these tech billionaires, some of them maybe soon to be trillionaires, actually want? They have all this money that, as you’ve said, they could spend on social welfare in the communities they’re already in. What are they actually after?

KH: The reason why I use the metaphor of empire is because … the revealed agenda is an imperial agenda. They have effectively been able to use this narrative of AGI to accrue more capital, land, energy, water, data. Like, they’ve been able to accrue more resources — and critical resources — than pretty much anyone in history. So that, to me, is what they’re after.

But it’s also complicated, in the sense that there are also these movements, which I can only describe as quasi-religious, that undergird the push for AGI. There are some people who are more political actors, who see the opportunity to leverage these narratives about AGI to amass more and more power.
But there are also genuine cohorts of people who believe in the myth of AGI, or the religion of AGI, who think that when the moment comes that AI actually matches or begins to surpass human intelligence, it is somehow going to truly lead us, as I mentioned, to an AI heaven, to an otherworldly civilization 2.0, so to speak, where we finally unlock the next era of human evolution.

The reason why I call it quasi-religious is because it’s not actually backed by scientific reality. In 2025, there was a survey of AI researchers that found 75 percent of them do not think that we’re on the path to AGI, and it’s still actually an open question whether we can even reach AGI, because, once again, we actually have no idea how to define AGI, because we have no idea how to define human intelligence. So people call themselves believers; they say that they’re AGI believers. They use this religious rhetoric of saying AGI is akin to an AI god, or that the bad version of AGI might be akin to summoning the demon, as Elon Musk once said.

And that is why, in order to really understand what is truly motivating this industry, you can’t just view it through a capitalistic lens. You have to also view it through an ideological one. And once again, that returns us to why this is colonialism: Colonialism is the fusion of capital and ideology.

JW: This has been fascinating, and I want to give you a chance to share any final thoughts, if you have anything you want to say.

KH: I cannot stress enough that none of this is inevitable. I alluded to the fact that this scale is totally technically unnecessary. AI is actually a word that refers to such a wide array of different types of technologies.

I think it’s very akin to the word “transportation.” Transportation can literally refer to anything from a bicycle to a rocket. Those are systems that all get you from point A to point B, but they have fundamentally different designs. They have fundamentally different cost-benefit trade-offs. And generally, when we speak about transportation, we have a much more nuanced discussion, saying we need more public transit, rather than just saying we need more transportation in general.

And we are currently stuck in a moment where there isn’t that nuance with AI, and the tech industry is able to manipulate public understanding by constantly selling the benefits of the bicycle version of AI, when they’re actually building the rocket version of AI. And the reason I feel so strongly that none of this is inevitable is that there is a very clear path for how to unlock the benefits of AI without accepting the colossal cost of it. And that is simply by shifting from building rockets to building bicycles.

And even though there is no government willingness to hold the industry accountable, there are plenty of ways that individuals and communities can engage in collective action to hold the industry accountable themselves. And we are seeing remarkable movements of this already happening and already working.

There have been, I believe, at this point, $60 billion-plus of data center projects that have been blocked because of protests.
There have been lawsuits from families of victims who have suffered egregious mental health harms, including dying by suicide, after extended use of ChatGPT, and that has led to massive momentum around shoring up the safety of these models. There has been litigation around copyright and intellectual property. There have been huge discussions sparked in schools about whether or not these tools should actually be adopted in classrooms. And I think all of this pushback is forcing the companies, even without regulation, to shift their practices, and hopefully it will force them to downsize from empires to just being businesses that provide valuable products and services that are not built on extraordinary exploitation and extraction.

I think that’s the final message that I want to leave with people: Any single person listening to this has an active role to play in shaping the future of AI development. And we absolutely can get to a point where we have the benefits of AI without any of the costs, just by changing what types of AI systems we design.

JW: Well, thank you so much. I really learned a lot reading your book, and even more in this conversation. I appreciate you taking the time, and thank you for joining me on The Intercept Briefing.

KH: Thank you so much, Jessica.

JW: That does it for this episode.

This episode was produced by Andrew Stelzer. Laura Flynn is our supervising producer. Sumi Aggarwal is our executive producer. Ben Muessig is our editor-in-chief. Maia Hibbett is our managing editor. Chelsey B. Coombs is our social and video producer. Desiree Adib is our booking producer. Fei Liu is our product and design manager. Nara Shin is our copy editor. Will Stanton mixed our show. Legal review by David Bralow.

Slip Stream provided our theme music.

If you want to support our work, you can go to theintercept.com/join. Your donation, no matter the amount, makes a real difference. If you haven’t already, please subscribe to The Intercept Briefing wherever you listen to podcasts. And leave us a rating or a review; it helps other listeners find us.

If you want to send us a message, email us at podcasts@theintercept.com.

Until next time, I’m Jessica Washington.