Published on July 28, 2025 6:44 PM GMT

TLDR:

Three key pathways to slowing:

- General Worries Slowing: AGI is not securitised, but a selection of worries, likely especially over jobs but also potentially over XRisk, leads to slowing. National securitisation isn't strong enough to overcome this pressure.
- Existential Risk Slowing: The existential risk worries overcome the national securitisation. The object that poses the existential risk is "The AI", not "China". This can occur either with fairly little awakeness (as with the establishment of the UK AISI due to XRisk worries), or with very high amounts of awakeness (such that it can overcome even strong national securitisation forces).
- National Securitisation Slowing: The national security logic just happens to support slowing, likely because of MAIM or Manhattan Trap arguments.

This leads to "The AGI Awakeness Valley of Doom":

- The very best actions occur with high levels of awakeness.
- But at moderately high levels of awakeness, strong racing tendencies (à la Situational Awareness) become much more likely. These moderately high levels of awakeness are thus amongst the most dangerous scenarios - the valley of doom.
- At low levels of awakeness, General Worries and Existential Risk Slowing are viable, but may not be able to surpass weaker alternative priorities. They may, or may not, pass the "critical safety threshold".

Since Katja Grace's seminal piece on slowing AI in 2022, it has become a far more popular position in the community to think that slowing the development of AI would be a good idea. Secondly, especially post "Situational Awareness", it has become increasingly common to think about how and whether the US Government might "wake up" to AGI, and how the spectre of decisive strategic advantage may interact with existential risk. Nonetheless, I do see a lot of unclear thinking about both slowing in general and its interaction with how "awake" the US Government is to AGI. This blog post hopes to give a short contribution towards clarifying the sorts of scenarios in which I expect slowing to be plausible.

Note, I think slowing is a pretty broad category - I basically use it to refer to scenarios where we go slower in the development of superintelligence than the default pathway would have been. So as well as coordinated pauses, work like RSPs is clearly a version of slowing, and even scenarios where we don't accelerate, when acceleration would have been the default, count as well.

There are three broad categories of slowing, each of which seems to require a very different strategy to bring it about. These will be taken in turn. I then discuss what I call the "AGI Awakeness Valley of Doom" (Figure 1) - the notion that, whilst the very best worlds involve a US government that is very awake to AI, the very worst worlds fall somewhere close to that in terms of "awakeness" as well. This forms a "valley of doom".

Figure 1: The AGI Awakeness Valley of Doom, showing the way that safety changes with the degree of "awakeness" of the US Government. The Critical Safety Range is the range where the threshold dividing "future with existential catastrophe" from "future where we survive" lies, but we are uncertain over its position. Of course, this is in fact probabilistic - there are scenarios where we end up in the valley of doom and survive.

General Worries Slowing
Technologies - even those with significant excitement behind them - have been slowed before, despite the fears not always being existential. Human gene editing, the deployment of nuclear power, "death dust" radiological weapons, geoengineering, GMOs in Europe. Publics have been worried about the impacts of these technologies, and often coalitions of concerns have been weaponised by particular constituencies of the public to get politicians or scientists to adopt attitudes that have slowed such technologies' development.

Something similar could happen with AI. If timelines/takeoff are somewhat longer, and so AI diffuses more into the economy before superintelligence, non-XRisk worries about AI may become widespread. The most obvious of these is labour automation - vast swathes of the public may fear unemployment, and indeed may become unemployed. This may cause a large-scale rise in the political salience of AI, with protests or other public calls to rein in AI companies and protect jobs. Whilst some such calls are occurring right now, once the groups with labour worries start to be a significant political constituency, the issue may rise high on the agenda. It's also possible that politicians with foresight move before (or even generate) some of this political salience.

There are other potential worries. Children's safety and worries around deepfakes have been fairly politically salient already. Privacy issues have thus far been somewhat muted, but one could imagine them rising more as AI is integrated into our lives in ways more profound than even social media. Other forms of "conventional" misuse, or low-level autonomous AI capabilities, such as cybercrime and fraud, may further turn the public away from AI. These, at least to an extent, seem to already be politically salient, and even if they don't rise much, they may prime the public to be continually sceptical of AI.

This model sees a pause coming about because general public distrust of AI continues to grow, and particular (likely labour) worries raise the political salience. Politicians use this to pass regulation that tries to slow or stop the development and deployment of more advanced AI systems, or simply onerous regulations that slow AI companies. We're already seeing hints of a world where this could happen - it's plausible that an honest (but very unlikely) interpretation of Trump's Woke AI Executive Order would mean no current model is "politically unbiased", and companies would need to devote lots of effort to trying to discover whether this is even possible.

In this scenario, XRisk worries plausibly play a small part in a coalition that shares a common objection to the deployment of AI systems at large. One thing that's not obvious, however, is whether this coalition would meaningfully object to the development of the technology as well as its deployment. Whilst any regulation will likely slow things counterfactually, the degree of slowing may be less if the focus is only on regulation that affects how the systems are deployed, rather than also factors related to their development.

I had previously called this scenario "non-securitised slowing", because it strikes me as important that securitisation isn't strong enough to overcome public pressure. Security logics have a tendency to override even very significant domestic concerns - see, for example, the trampling of civil liberties in response to the war on terror - and so if there is a strongly securitised race to superintelligence, this approach may be difficult.
Note, however, that attempted securitisations can't always overcome opposition rooted in non-security concerns - the attempted securitisation of COVID in some places, including in parts of the USA, failed because of economic concerns.

One response to this would suggest that securitisation is already occurring in Washington. Discussion abounds of the US-China race, the recent AI Action Plan was literally titled "Winning the Race", and there have been calls for a US Manhattan Project. My sense, however, is that this is mistaken. Mostly, the securitisation is around a more general "technology race", rather than an existential struggle for survival. There definitely are those who wish to push it towards greater securitisation, but the issue is nowhere near there yet. Other worries clearly get airtime, and narrow security logics do not trump all. There even seems to be fairly strong disagreement about what securitising the issue would mean, and how much pure economic dominance should matter - this is at the core of the debates around export controls, for example.

So, assuming the existential race framing never gets strong, and securitisation remains relatively weak, this pathway to slowing seems viable. Public outrage at a variety of harms forms an anti-AI coalition that successfully pushes for regulations that slow down the AI companies. Strong public shows of support for the AI industry become more politically costly as the harms and worries mount, and so the government supports the AI industry less. A combination of less support and more onerous regulation slows down AI development.

Existential Risk Slowing

This is often the form of slowing imagined by the AI Safety community. The basic story sees it become both much more obvious that AI poses an existential risk, and much more politically salient. Faced with this existential risk, the US government "wakes up" and realises slowing is needed. This brings ambitious slowing proposals, such as an international treaty with China or trying to establish a MAIM setup, suddenly into the Overton window. Domestic regulation clamps down on labs trying to build superintelligence. The sorts of policies that many in our community wish could be implemented get implemented.

There are less extreme versions of this, where an XRisk-focused faction gets significant amounts of power, so some policies can be implemented but others can't - in a sense, this is the story behind the establishment of the UK AISI, for example. We could imagine this scaled up by a few orders of magnitude, so ambitious policy can be carried out, but much of the policy is incoherent as national security factions and accelerationists compete with the safety community.

Strong national securitisation generally plays against this sort of slowing. By national securitisation, I mean a securitisation process (where something is constructed as an object of security by a particular type of speech act) that centres the survival (and often, by extension, power) of the nation narrowly as the most important thing. Often, this is in opposition to a particular threatening "Other" - in AI narratives, this "Other" is usually assumed to be China. Security logics - this logic of national survival above all else - justify the carrying out of exceptional measures, and supersede other interests and the normal rules of politics.

In such strong national securitisation narratives, the existential threat gets constructed as "China" rather than "the AI". Existential risk worries, and the need to show restraint, may be trumped by calls to accelerate.
Aschenbrenner's "Situational Awareness" is a good demonstration of this dynamic (and I have written a much more in-depth discussion of securitisation with reference to that piece here). Despite acknowledging the worry of existential risk, Aschenbrenner nonetheless advocates for fairly extreme acceleration in order to stay ahead. This is a dynamic that Nathan Sears noted across past case studies of attempts to combat perceived existential threats - national security logics, once established, tend to promote actions in the narrow national interest to combat the threatening "Other" rather than the existential risk. This may in fact make the issue worse, not better. The failure of the Baruch Plan is an excellent example, where, despite acknowledgement of the dangers of nuclear weapons, national security logics led to the collapse of international control of nuclear weapons. The legible threat of China, and the clear and obvious "way out" of racing, make the notions of restraint and slowing seem foreign - so if this strong securitisation narrative wins out, then slowing may be implausible.

It's even possible that the weaker "tech race" securitisation may be enough to make slowing very difficult. This is certainly true in scenarios of weaker awakeness to the existential risk from AI, although I do suspect that if the political salience of existential risk became high, this would be overcome. But at present, the general US-China race narrative makes people reluctant to discuss any chance of slowing in DC circles, meaning these ideas get pushed out of the Overton window. If a distinction is successfully made between the general "tech race" and the race to superintelligence, as some in DC are already starting to do, then slowing superintelligence on XRisk grounds may be feasible. However, such a memetic move will be hard to carry out, and will require some bravery - and risk-taking - from the XRisk community in DC. Currently, this feels very far from what is politically possible.

It is often said that AI Safety might get a very strong boost if there are warning shots. I agree. Given that slowing and strong AI Safety actions seem so far from the fore in Washington, an external "warning shot" could drastically raise the political salience of the issue, as nuclear disasters, for example, have done for the anti-nuclear movement.

Nonetheless, it is important to note that warning shots need to have a particular shape to them. Namely, they need to make it clear that the threat is the AI, and raise strong, existential fears of the AI systems themselves in the public consciousness. For a warning shot to be successful, we need the public - and thus politicians - to see the ever more advanced development of AI as the danger, and not for the worry to be "terrorists", "China", "bad actors" or even just "poor implementation". In many of these scenarios, warning shots make things worse, as actors that desire acceleration will push a narrative that "the only way to stop a bad guy with an AI is a good guy with an AI".

A second fact about warning shots is that they don't have to actually provide good evidence for XRisk threat models. The responses to nuclear disasters have often been wildly disproportionate to the risks they supposedly bring, and it's not clear that a similar response can't happen with AI. Rather, warning shots need to do enough to make people - some of whom already seem to believe in XRisk - viscerally believe in XRisk.
They need to make AI immediately associated with notions of catastrophe, or at the very least take the nascent negative feeling and make it explicit and salient. Politicians need to see slowing AI as at the very least a vote winner, if not a priority.

All of this requires a certain amount of priming before such a warning shot. Warning shots will not produce specific results by default. In their aftermath, there will be strong fights and debates over what they mean. The way the public is primed to respond will in part be due to the narratives floating around before the warning shot, and to whoever latches onto it with the most effective narrative in the days and weeks after the situation comes to light. This brief window looks to be the time when, even against a threat that is originally nationally securitised, the game board can flip. But this will require work, both before and after such an event occurs.

There is also another pathway by which existential risk slowing happens - important individuals simply get worried about the issue. Perhaps demos just become convincing enough that the danger is demonstrated. Perhaps it becomes obvious that superintelligence is near, and this causes the President to get deeply scared. This seems to be what happens in the AI 2027 scenario, and I don't think it should be ruled out. Yet again, however, much of this depends on the communication and narratives that gain prominence in Washington before this happens, and on positioning individuals well to guide the President or other senior figures at such a time. However, it would require a very motivated President to fight against sentiment in Washington. If Washington's attitudes to AI remain as they are today, a president worried about superintelligence may be able to get a lot of legislation through. But if Washington strongly converges on a race to superintelligence, or the lobbying power of AI companies increases, it may be difficult for effective slowing policy to be implemented. Saying this, DC under Trump is not a normal place, and Trump is not a normal president - maybe if he became strongly convinced of the dangers of superintelligence, he could push for ambitious slowing actions even against the will of the rest of DC, as he did with the tariffs, for example.

There are obviously gradations of this as well: the weaker the national security narrative, and the less ambitious the measures that need to be implemented, the less strongly convinced important figures in DC need to be about superintelligence. But what I wish to communicate is this trade-off.

National Security Slowing

The final version of slowing leans very strongly into the national security argument, and suggests that the logic of national security, covering both internal and external threats, points in the direction of slowing and safety. This narrative wants to see the race framing taken to an extreme. The "general tech race" frame cannot deliver slowing under this narrative - rather, the US government or national security establishment needs to fully understand the potential of superintelligence. Once that potential is understood, the national security logic may play out such that pausing is the only option.

I attempted to argue something like this in my paper "The Manhattan Trap" with Corin Katzke earlier this year, and the Superintelligence Strategy paper by Hendrycks, Schmidt and Wang makes a related attempt.
Both papers try to show how the logics of national security suggest that the development of superintelligent AI by anyone - including US domestic actors and even the government - fundamentally poses an existential threat to the security of the USA. Thus, actions to slow - such as a verified international treaty or a MAIM regime - are required in order to preserve US national security, interests and power.

This is a dangerous game to play, though - unlike with the other framings, the conclusion that we should slow seems fragile. Critics of MAIM have pointed out that if national technical means and espionage don't work to identify a project, then MAIM doesn't hold up. One may also think that China is unlikely to risk war over ASI, particularly if it is less situationally aware than a US national security establishment which (in this scenario) is strongly aware of the potential of superintelligence. The US may also be confident that its data centres are sufficiently protected from attack (e.g. underwater, or under a mountain) that it can build freely. Or, more worryingly, it may think that China has secret or hardened data centres, meaning China cannot be adequately restrained. If the national security establishment isn't convinced by the MAIM or Manhattan Trap arguments, or others like them, then slowing would be strongly disadvantaged. This makes the strategy very fragile. Of course, in some sense a similar argument applies to existential risk slowing, although slowing is a far more obvious conclusion if one is motivated by XRisk concerns than if one is motivated by national security concerns.

One important fact about national-security-focused slowing arguments is that they often rely on the concept of a decisive strategic advantage, which is what leads to the conflict-promoting dynamics and the strong risk of coups. However, this assumption is controversial, even in the AI Safety community. With strong national securitisation and a focus on "winning the race", but without the decisive strategic advantage framing, it seems hard to see how you get slowing.

There is perhaps a national security frame focused on misuse risks. Here, the risk of proliferation is so severe that it motivates the national security establishment to suppress further development and come to an agreement with China, despite being narrowly focused on national security. However, in such scenarios it seems hard to escape the pull of "the only way to stop a bad guy with an AI is a good guy with an AI".

Despite this negativity, it is possible that a national security focus does lead to technological restraint. The Anti-Ballistic Missile Treaty can be thought of as one such example from history. Missile defence provides some helpful analogies to today. When it was revisited with "Star Wars" in the 1980s, there was very clearly a narrative battle - one which was mostly won by those in favour of Star Wars - but such an outcome was not predetermined.

One important addendum - whilst I distinguish between the existential risk and national security framings (often seeing them as opposed to each other), it is possible that the national security establishment could favour the former frame over the latter. The mere involvement of national security does not necessarily mean that the national security narrative is favoured - for example, some of the discussions around the Biological Weapons Convention were dominated by more humanity-focused rather than national-security-style narratives.
Nonetheless, the national security frame does often dominate, sometimes leading to favouring positions which deny the existential risk frame. For example, in the debates around nuclear winter, the US national security establishment consistently supported the view that nuclear winter, and even less severe versions of it, were implausible. Indeed, I have heard anecdotally that one of the reasons the US doesn't have good post-nuclear-war food security plans is that it operates under the assumption that the climatic and global infrastructure effects of a nuclear war would be fairly small. Whilst many are expecting the national security establishment to be most favourable to AI XRisk messaging - and they have been so far - it is possible that, if strong national securitisation occurs, they become more resistant to views that may undermine their perceived imperative to beat China.

The AGI Awakeness Valley of Doom

Figure 1 (again): In case you forgot! Although this time the different slowing pathways are labelled.

Slowing will be hard - but plausibly necessary for many good futures. However, the strategy we will need to take will differ depending on how we hope to get slowing to happen. How we want to get slowing to happen will, in turn, be influenced by how we expect politics to play out - what we expect securitisation to do, what we expect the labour impacts to be before we hit points of no return, whether we expect to see warning shots. Some of these features we as a community can impact, and some we can't. But I think it's important to get a sense of what these critical parameters are.

I think there is one critical dynamic to focus on here - what I call "The AGI Awakeness Valley of Doom". When the government is not awake to the prospect of AGI and decisive strategic advantage, a "General Worries Slowing" can happen. It's also plausible that an "Existential Risk Slowing" can happen, if strong national securitisation of AI is avoided. As the government becomes more aware of AGI and decisive strategic advantage, it plays strongly into an all-or-nothing race framing - strong national securitisation is favoured. The only scenario in which this leads to slowing is if the national security logic plays out in a way that favours slowing. In this "valley of doom", only "National Security Slowing" can happen - and that's fragile. Finally, as the government becomes even more aware of AI, and fully understands the threat, it ends up leaving the valley of doom and moving to the best-case actions.

Notably, therefore, whether you want to wake the government up probably depends on two key factors - how likely you think it is that you can push beyond the valley of doom, and whether any of the pre-valley "moderate risk scenarios" are above the critical level of safety or not. This latter question depends on a bunch of important factors: the non-existential impacts of AI (labour effects are probably key, but so are others); the strength of national securitisation of the "general tech race" as opposed to the concerns of XRisk; and the sorts of government actions that are sufficient to make us safer.

One important question is what I mean by "awake". Admittedly, this is a confused concept, because it tries to reduce many factors into one.
But I roughly mean it as some combination of: being aware of the possibility of superintelligence; taking this possibility seriously enough to strongly prioritise it; being aware and appreciative of DSA; and seeing something related to ASI as an existential threat (the big question being whether that threat is "China" or "the AI"). This might help explain how you could be in the "Moderate Risk" zone and still get "Existential Risk Slowing": politicians can worry about existential risk, but not fully "feel the AGI" in a way that prioritises it. This, for example, is how the UK AISI and the AI Summit series got set up.

So what sorts of actions would I like to see the AI Safety community take? I might, at some point, write a more comprehensive piece laying out my views on slowing and what we should do, but as a non-comprehensive list:

1. Propagate a narrative trying to distinguish between a general US-China tech race and an ASI race

This is the default mode in which I think the AI Safety community should engage with US-China racing narratives. I think if this distinction isn't made, then it is pretty unlikely that either the "Existential Risk Slowing" or "National Security Slowing" pathways occur.

2. Prime and plan for warning shots

There is loads of discussion about warning shots, and a lot of what this community is doing is trying to position ourselves to take advantage of them. Nonetheless, I see far less of the "priming" - spreading memes that make it likely a warning shot will be reacted to beneficially - than I would like. I also don't see us gaining all of the skills, especially public communications skills, that are needed to "win the narrative war" in the immediate aftermath of a warning shot. Public communications work is generally low-status in our community, and often either fairly amateurish or highly detailed - neither of which is likely to be what wins this narrative war. Whilst there are attempts in PauseAI to create a mass movement, one bet on a mass movement isn't necessarily a great idea. The real thing I'd like to see is concerted, media-focused public communications work, leveraging the legitimacy of many members of the AI Safety community to propagate narratives that both help prime the public in a particular way, and give us name recognition and the skills to immediately dominate the airwaves after a warning shot. I don't think, at present, there is any member of the AI Safety community up to the task of the sort of public communications effort that would be needed in the days and weeks after a warning shot (I suspect Daniel Kokotajlo may be the closest we have, but we should have far more people with the skills, connections etc.).

3. Keep building the broad coalition for "General Worries Slowing" - and let the existential safety community be a useful part of this coalition

There is work on this coalition, as seen in the AI moratorium fight. But I think the coalition, particularly around labour automation, may end up being really important. Without warning shots, it's amongst the most likely ways we get concerted action. However, we do want to make sure we can convert this into the actions we want. Hence, we should actually be a useful member of the coalition opposing dangerous AI, supporting labour concerns in the expectation that we get something back as well. Because there's a real worry about scenarios where the labour movement gets momentum, but only on deployment and not on development, leading to huge gaps between internal and externally deployed systems.
4. We need to make sure the national security reasons for slowing are watertight if we are to pursue them

MAIM is an attractive proposal, but it is highly dependent on the technical details. For such proposals to be successful, we will have to convince the national security establishment of them. To do this, we need to move beyond the current high-level argumentation and into the realm of practical details. As I said, these national security slowing scenarios are on a knife-edge, so we need to be compelling. It's important to realise that national security doctrines are not just accepted - they have to be constructed and worked for. There is evidence to suggest that the nuclear taboo, for example, was brought about in part by the anti-nuclear movement, and that neo-conservative advocacy helped to bring about the Iraq war despite the protests of realists.

5. Lowering the cost of slowing makes this easier

Put briefly, working on verification should be an absolute priority of the community. This would likely allay some of the most hawkish worries around some form of slowing. It won't stop us having to do a bunch of memetic advocacy, but it can make it easier. Luckily, the community is moving in the direction of strongly prioritising verification.