(Crosspost from my blog.)

(I'll be at EAG London this weekend; come say hi. Also, this is my thousandth blogpost, which is a cool milestone!)

Several people have wondered why I haven't written much about AI. The main reason is that I don't feel I have anything very original to contribute. I try to only speak about things I know something about, and I don't really know much about AI. While I use it regularly for editing, and have read a decent amount about it, it's quite far outside my area of expertise.

But I feel I should say something about it because it's important. It's plausibly the most important thing. We may be approaching a second industrial revolution, with consequences more dramatic than the first. There is a real possibility of everyone dying.

I'm on the more optimistic side. I think there's only a few percent chance that AI kills everyone. Maybe 1 or 2%. In the EA circles in which I hang out, this often makes me seem outrageously optimistic. Lots of people, like Yudkowsky, think we're nearly guaranteed to all die.

But whether one's P(doom) is 1% or 60%, it's abundantly clear that we should be doing a lot more than we currently are. AI alignment research (research that makes sure AI does what we want) should take up a sizeable portion of the federal budget. AI governance and international treaties are sorely needed. (Here are some high-impact careers, many related to AI, and here are a bunch of high-impact charities for safeguarding the long term, largely by aligning AI.) Your odds of dying from AI are a lot higher than your odds of dying in a car accident.

If AI goes well, it could usher in unprecedented prosperity. If it goes poorly, it could usher in an unimaginable catastrophe. In such a world, trying to steer AI so that it goes well should be a top global priority.

A lot of the arguments for AI risk have been written with a great deal of technical language, but I think the core argument is pretty straightforward. We are building things that are much smarter than we are. AI can already do many human jobs better than most people.

A few years ago, AI couldn't write competently. GPT-2 was useless. GPT-3 was revolutionary. GPT-4 was better than humans at many tasks. What will GPT-10 look like? Whatever AI is like in 30 years, it will be very impressive.

AI has already surpassed us at lots of tasks. The best human chess player can't hold a candle to the best chess-playing AI. If the best human played the best AI a million times, the human would lose every time. It wouldn't be close. What happens when AI surpasses us across the board as completely as it does in chess? What happens when it grows more agent-like and smarter than us?

Probably the most detailed forecast of AI's future capabilities is the AI 2027 report. It predicts rapidly advancing capabilities, culminating fairly soon in superintelligent AI agents. Daniel Kokotajlo, one of the lead authors on the report, has a frighteningly good predictive track record, which makes me sort of forgive him for being a proponent of the self-sampling assumption. The report predicts that by 2027, AI will be able to improve its own capabilities, leading to exponential improvements. When AI can recursively self-improve, so that each improvement also improves its ability to improve itself even more, there will be no stopping its takeoff. It will rapidly ascend to superintelligence; we will no more be able to outsmart it than beat it at chess.
Sound outlandish? Three years ago, an AI that could write essays better than most college students would have sounded outlandish. I remember trying desperately, in high school, to convince people that AI would be a big deal. I don't have to do any convincing now; everyone accepts it.

If AI becomes superintelligent, the odds of peril are non-trivial. We are much smarter than chimpanzees. Famously, this hasn't gone well for chimpanzees. We torture them in animal tests whenever we want. We've drastically shrunk their populations. When some creature is far smarter than other creatures, it has a big advantage over them.

If AI progress doesn't stall dramatically (and there's no reason to think it will), we will quickly be eclipsed by AIs. AI will grow more agent-like. It will begin to have goals. Frighteningly, the alignment problem remains unsolved. We do not know how to get AI to do what we want.

Then there are other risks from AI. Even if AIs don't destroy the world, they might experience suffering. Future AIs could very well be conscious. There's even some chance present AIs are conscious (though I'd put it at around 5%). There's a non-trivial probability of a moral catastrophe: of us creating digital factory farms.

Even if AI does not suffer and we keep it under control, there are huge and terrifying international implications. What if a bad actor gets its hands on powerful AI? What if North Korea does? A cold war between the U.S. and China, each side staffed with superpowerful agents working to outdo the other, could have cataclysmic implications. Better governance and cooperation are desperately needed.

The present situation feels like 1945. Soon a technology is coming that will change everything. We do not know what will happen. We cannot know. It might tear the world apart, or it might usher in an era of peace and prosperity. One thing is certain: nothing will ever be the same.

Now, I'm not as much of a doomer as some people. I think it's likely enough that we'll be able to align AI. Specific scenarios are hard to forecast, and the general heuristic "assume we probably won't die, given that we never have before" is a pretty good bet. And generally I see there as being many places one can get off the AI doom train, so that doom isn't guaranteed.

But while my credence in doom is only a few percent, I can easily see someone else having a higher one. One of the most philosophically competent people I've met, David Manley from my university, once told me that his p(doom) is about 30%. That doesn't sound crazy.

Whatever the numbers are precisely, it's clear we should be doing more. We must do more. So I'd encourage you: give some money to places like the Long-Term Future Fund or EleosAI's research on digital sentience. Consider pursuing a high-impact career. We are standing before the possibility of either a great and glorious future or brutal annihilation. It is up to us which one we bring about.