Published on June 5, 2025 10:44 AM GMT

A thoughtful post by Anton Leicht, "Powerless Predictions," notes that forecasting may be underutilized by AI policy organizations. But how much is forecasting actually being used? Which organizations working on AI, particularly AI safety, already integrate forecasting into their decision-making, policy recommendations, and strategic planning? What forecasting work is being done, and how is it impacting the future of AI?

I set out to investigate this.

While I hope this post proves useful to others, I wrote it primarily for myself to gain an overview of forecasting work and its impacts. Feel free to skim; only read the details if you find them valuable.

Grantmaking

Open Philanthropy, which funds numerous AI safety initiatives, regularly makes predictions about grant outcomes. Evaluating these outcomes helps them improve their ability to predict the results of future grantmaking decisions. Javier Prieto at Open Philanthropy explains this process and evaluates prediction accuracy in this post.

They have also commissioned a long list of forecasting questions on AI from Arb Research. I couldn't find any publicly documented uses of this list, though Open Philanthropy may have used it for internal planning and strategic thinking.
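This kind of accuracy evaluation typically relies on a proper scoring rule. Here is a minimal sketch, with made-up predictions and a simple Brier score; it is illustrative only, not Open Philanthropy's actual evaluation pipeline:

```python
# Illustrative only: scoring hypothetical probabilistic grant predictions.
# The numbers are made up; Open Philanthropy's actual process is described
# in Javier Prieto's post.

def brier_score(predictions):
    """Mean squared error between forecast probabilities and outcomes (0/1).
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Each pair: (predicted probability a grant milestone is hit, actual outcome)
grant_predictions = [
    (0.8, 1), (0.6, 0), (0.9, 1), (0.3, 0), (0.7, 1),
]

print(f"Brier score: {brier_score(grant_predictions):.3f}")

# A crude calibration check: among predictions near 70%, did ~70% resolve true?
near_70 = [(p, o) for p, o in grant_predictions if 0.6 <= p <= 0.8]
if near_70:
    hit_rate = sum(o for _, o in near_70) / len(near_70)
    print(f"Hit rate for 60-80% predictions: {hit_rate:.0%}")
```

With enough resolved predictions, bucketing by forecast probability like this gives a calibration curve, which is the kind of feedback loop that makes grant forecasting improve over time.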
AI development

Anthropic extrapolates the frequency of rare AI behaviors to address the worry that concerning behaviors may be missed during evaluations.

In Anthropic's Responsible Scaling Policy (RSP), they incorporate forecasting for comprehensive AI assessment, making informal predictions about improvements in elicitation techniques and enhanced model performance between testing rounds. They aim to "improve these forecasts over time so that they can be relied upon for risk judgments."

While you could argue that frontier AI development would be irresponsible without forecasting future capabilities and risks, it's nevertheless encouraging to see this explicitly incorporated into their practices. DeepMind's Frontier Safety Framework doesn't appear to explicitly include forecasting, and OpenAI's Preparedness Framework only mentions it in passing.
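To see why extrapolating rare behaviors matters, here is a toy sketch of my own (not Anthropic's methodology, and all numbers are hypothetical): a behavior too rare to reliably show up in evaluation sampling can still be near-certain to occur at deployment scale.

```python
# Toy sketch (not Anthropic's actual method): a behavior with a tiny
# per-query rate is unlikely to appear in evaluation samples but almost
# guaranteed to appear at deployment volume. All numbers are made up.

def prob_at_least_once(per_query_rate, num_queries):
    """P(behavior occurs at least once in num_queries independent queries)."""
    return 1 - (1 - per_query_rate) ** num_queries

eval_queries = 100_000           # hypothetical evaluation sample size
deploy_queries = 1_000_000_000   # hypothetical deployment query volume
rate = 1e-7                      # hypothetical per-query rate of the behavior

print(f"P(seen during evals):    {prob_at_least_once(rate, eval_queries):.1%}")
print(f"P(occurs in deployment): {prob_at_least_once(rate, deploy_queries):.1%}")
# Roughly 1% during evals vs. essentially 100% in deployment, which is
# why extrapolating beyond the observed sample is useful.
```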
Affecting Policy Decisions

While policy recommendations implicitly depend on predictions about the future, explicit and well-researched forecasting doesn't appear to be the norm. However, much policy work may be informed by forecasts without being transparently based on forecast analyses, making it difficult to determine how extensively forecasts are used in practice.

It's also not always clear how organizations actually engage with policymakers. Some report specific engagements, like policy recommendations sent to particular institutions or congressional testimonies, while others simply report general strategies.

I compiled a list of organizations doing forecasting while trying to influence AI policy in a positive direction[1]:

Machine Intelligence Research Institute (MIRI): MIRI changed strategy in 2023 when AI timelines appeared too short for their alignment efforts to bear fruit, pivoting toward AI governance and communications. They developed a strategic landscape involving four high-level scenarios for cataloging important governance research questions. They consider the most promising objective to be building an "off switch": the legal and technical capability needed to shut down dangerous projects or impose moratoriums.

While they primarily want to reach policymakers, they also aim to reach policy advisors and the general public through a wide range of channels (see their communications strategy). MIRI CEO Malo Bourgon participated in the US Senate AI Insight Forum on December 6, 2023.

Convergence Analysis: Their Theory of Change outlines a very explicit forecasting → policy impact pipeline. They use scenario forecasting to inform governance research and recommendations, and share insights with the AI safety community, policymakers and thought leaders, as well as the general public.

They describe their strategy for informing policymakers and thought leaders: "we're actively writing policy briefs for key governance proposals, performing reach-outs to leading governance bodies, and advising several private organizations. In particular, we're advocating for a small number of critical & neglected interventions." They directly advise Lionheart Ventures, Mauhn, and Aligned AI.

Forethought: Their About page describes their approach: "We are building a small, focused team to tackle the most important questions we can find and share our findings, unrestricted by the current Overton window. We share our results with AI think tanks, companies, policy-makers who are focused on transformative AI, and the general public."

They explain that they focus on sharing research with "wonk-y folks thinking about AI in think tanks, companies, and government, rather than working directly with policymakers."

Epoch AI: Epoch is THE AI forecasting organization. They describe their approach: "Epoch AI is a multidisciplinary research institute investigating the trajectory of Artificial Intelligence (AI). We scrutinize the driving forces behind AI and forecast its ramifications on the economy and society." Epoch explores key trends and develops models (e.g. here and here) to investigate key questions about the future of AI.

In Epoch's 2023 impact report, they describe engagement with policymakers: "In 2023, we supported the work of the UK Department for Science, Innovation and Technology and the Joint Research Centre of the European Commission through consultations and collaborations. In particular, the UK DSIT discusses our work on Frontier AI: capabilities and risks. We also submitted evidence to a House of Lords inquiry on language models." Their policy contributions are less clear in the 2024 report.

Work by Epoch was referenced in the UK Government's discussion paper for the AI Safety Summit 2023, in the Q&A for General-Purpose AI Models in the EU AI Act (Epoch's research may have informed the training compute threshold for AIs with systemic risk), and in the Government-wide vision on generative AI of the Netherlands.

Institute for AI Policy and Strategy (IAPS): This US-based policy organization held a forecasting workshop on predicting the role of the US Government in building advanced AI, and investigated Chinese chip production to forecast China's future AI chip production capabilities.

They have responded to requests for public comment on AI policy from the Bureau of Industry and Security (here) and the Department of Defense (here).

RAND: RAND leads the RAND Forecasting Initiative, which includes the INFER project (INtegrated Forecasting and Estimates of Risk), though there's little information about how much of its focus is on AI. They also have a division focusing explicitly on global and emerging risks, which has published some limited AI forecasting research.

RAND researchers have testified before Congress (e.g. here), and their Congressional Relations division has the explicit goal of "making RAND's research and analysis relevant to congressional priorities and readily accessible to policymakers."

Center for Security and Emerging Technology (CSET): CSET has led a forecasting pilot project, Future Indices, which appears to have had a heavy AI focus, using forecast data collected through their Foretell crowd forecasting project (which later became a part of INFER).

CSET's director of strategy, Helen Toner, has "testified before the House Judiciary Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet on recommendations to bolster security and transparency around U.S.-developed frontier AI."

They describe their research communication methods: "We disseminate our research through a variety of means, including public events, social media, media outreach, congressional testimony and policy.ai, our newsletter."

Transformative Futures Institute (TFI): This organization was "formed to explore the use of underutilized foresight methods and tools in order to better anticipate societal-scale risks from transformative technologies," but from a brief check their published work appears to have limited reliance on forecasting so far.

They describe their communication strategy: "Disseminating our relevant research among key stakeholders is key to our impact. We do this by disseminating our work directly to organizations and policymakers in DC, as well as through academic publishing."

Public awareness and opening the Overton Window

The AI 2027 scenario leveraged multiple channels to reach a broad audience, including promotion through Astral Codex Ten, podcasts, social media, and a website with compelling visual design.

Leopold Aschenbrenner's Situational Awareness similarly demonstrated forecasting's potential for influencing public discourse through accessible language while investigating the future of AI.

Some policy organizations mentioned earlier aim to make their research accessible to the general public, often as an indirect approach to influencing policy: broadening the Overton Window and relying on public pressure to encourage sensible policymaker decisions.

There's also a collection of blogs investigating the future of AI and society, including the AI Futures Project blog, Sentinel Global Risks Watch, Foxy Scout, and my own blog Forecasting AI Futures.

Foundational Forecasting Research

Some AI forecasting work is key to building a foundation for further research and applied efforts like positive policy influence.

Notable examples include:

- Epoch AI's ML trends analysis
- Supporting research for the AI 2027 scenario
- METR's AI time horizon study (see the sketch after this list)
- Ajeya Cotra's biological anchors framework for estimating AI timelines based on computational requirements derived from biological systems
- Davidson's compute-centric model for AI takeoff speed

You could include AI benchmark work as foundational forecasting research even though it's not directly about forecasting, since benchmark performance trends are very helpful for predicting future capabilities.
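To illustrate how a trend result like METR's feeds into forecasts: METR reported that the task time horizon of frontier models has been doubling roughly every seven months. Here is a minimal extrapolation sketch; the anchor date and starting horizon below are made-up placeholders, not METR's measured values.

```python
from datetime import date

# Illustrative extrapolation of a METR-style "task time horizon" trend.
# The ~7-month doubling time comes from METR's study; the anchor value
# and date are hypothetical placeholders, not METR's data.

DOUBLING_MONTHS = 7.0
anchor_date = date(2025, 3, 1)   # hypothetical measurement date
anchor_horizon_min = 60.0        # hypothetical horizon: 1-hour tasks

def horizon_at(target: date) -> float:
    """Extrapolated time horizon in minutes, assuming steady doubling."""
    months_elapsed = (target - anchor_date).days / 30.44
    return anchor_horizon_min * 2 ** (months_elapsed / DOUBLING_MONTHS)

for year in (2026, 2027, 2028):
    h = horizon_at(date(year, 3, 1))
    print(f"March {year}: ~{h / 60:.1f} hour task horizon")

# Steady exponential growth is itself an assumption; the trend could bend
# in either direction, which is exactly what makes it worth forecasting.
```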
I worry that foundational work may struggle to reach the key decision-makers who could most benefit from the insights. With Claude Sonnet 4's help I found three explicit references to Epoch AI research in policy documents (mentioned earlier), which shows their work is reaching some policymakers even outside direct engagements, though this is still very limited. Admittedly, some policy work may be implicitly informed by forecasting research without explicit references.

Final words

I believe my own work through the Forecasting AI Futures blog falls mostly in the foundational category: the focus has been on gaining a better understanding of key dynamics and potential outcomes. While I want to make it accessible to a broader audience and share it with key people, these objectives haven't been the primary focus.

I hope to work toward forecasting with more direct applications. It seems too easy to fall into the trap of investigating things that seem somewhat important while hoping someone will notice and use the insights.

I still feel uncertain about what this investigation says regarding how I should proceed, but at least I have a better overview and a foundation for further strategizing. I should probably reach out to various organizations and people more.

Thank you for reading! If you found value in this post, consider subscribing!

[1] I would have liked to investigate examples where forecasting analyses have been explicitly referenced and used in recommendations and in policymaker engagement, but this would require much more time than I'm willing to spend. Instead, I investigated which organizations are doing both forecasting and influencing policy (with the exception of MIRI, which explicitly uses forecasting for prioritizing their efforts).