Artificial intelligence is transforming grant writing. A new wave of AI tools, known as agents, can now generate a research grant application, review it and submit it.

AI agents are large language models (LLMs) equipped with tools that let them search the web, read documents, write and execute code, and call external services, for example. Given a goal, rather than a single prompt, they respond by planning a sequence of steps that they execute, evaluate and iterate until the goal is met — usually with little or no human intervention.

Agents can be trained on a researcher's entire published body of work, on the grant criteria of the most relevant funding panel and on the texts of the most recently funded grants from that panel, all of which are often publicly available. They can produce tens of ideas, from which a researcher can select the best for the agent to work up into fully formatted applications. All this can be done in minutes, and with little work by the researcher.

From a productivity perspective, this might sound exciting. But it could result in problems — and even herald the collapse of the grant-funding system as we know it.

In our roles as leaders of research and innovation institutions, we've both heard anecdotally from the dozens of funders that we work with that the volume of grant applications they receive has risen sharply. Meanwhile, the quality of proposals seems to have improved, making it harder to discriminate between them. We suspect that one reason for this change is the increasing use of AI models and agents by researchers to aid them in writing applications.

As the use of AI agents becomes more widespread among researchers, the scale of this challenge is likely to increase.
Policymakers and funders will need to rethink how they allocate research funding before the system becomes unworkable.

Pump up the volume

To check whether this trend is real, we examined data on hundreds of thousands of grant applications from 12 multidisciplinary funders that work with the Research on Research Institute in London, which one of us (J.W.) heads. The funders are based in Australia, Belgium, Canada, China, Spain, the United Kingdom and the European Union — and include the Australian Research Council, the European Research Council and the biomedical funding charity Wellcome in London. The types of application varied depending on the funder, and ranged from postdoctoral fellowships to targeted calls for research in specific fields (see Supplementary information).

All these funders saw increases in application numbers between 2022 and 2025 (see 'Applications on the rise'), ranging from 14% for postdoctoral fellowship applications at the British Academy to 142% for EU Marie Skłodowska-Curie Actions fellowships. Local issues explain some of the changes — shifts in the national funding environment, for example, or differences in the number of funding calls put out. But we think that the use of AI is also likely to have played a part since the chatbot ChatGPT was released in November 2022.

[Chart: 'Applications on the rise'. Source: G. Rees & J. Wilsdon]

Evidence is mounting of a surge in the use of generative AI across science. For example, a 2025 survey by the publishing company Elsevier of 3,234 researchers across 113 countries found that 58% had used AI tools in their work (up from 37% in 2024). Of these, 61% were using AI to find and summarize the latest research; 41% were using it to help draft grant proposals.

One analysis of academic writing1 found evidence of an increase in the use of LLMs in scientific articles, beginning in late 2022.
The authors looked for patterns of text that might indicate an increased likelihood of LLM use in more than one million preprints and papers from January 2020 to September 2024 (this method is suggestive, not definitive). Computer-science papers showed the fastest growth, with up to 22% of sentences in abstracts estimated to be LLM-modified during 2024.

Evidence of similar trends in grant proposals is harder to pin down, because the text is often not public. But one study published as a preprint reported an upswing of 10–15% in text patterns suggestive of LLM use in grants since 2023 at the US National Institutes of Health (NIH) and the National Science Foundation2.

And a general rise in application quality was identified in an analysis of data from EU Marie Skłodowska-Curie Actions postdoctoral fellowships3. In 2025, just 5% of applications fell below the quality threshold for further consideration set by the European Commission, compared with 20% in 2018.

It's not yet possible to pinpoint exactly whether or how changes in application volume and quality are linked to AI use. But if these trends extend to other funders and continue, grant reviewers could soon face huge volumes of high-quality submissions. They will have to make largely arbitrary choices about what or whom to fund. And that's just with the current wave of AI tools.

A second wave of problems

Widening use and the development of agentic tools represent a fundamentally different problem from that of applicants who use LLMs to polish a draft.

An LLM can improve craft, but an agent optimizes for outcomes. When a researcher asks an agent to produce the strongest possible application for a specific funding call, the proposal that emerges is not the researcher's argument shaped by AI.
It is a fully AI-generated proposal optimized to the funder's brief, albeit shaped by the context that the researcher provides.

So far, research funders have responded to the changing AI landscape mostly by clamping down on the use of generative AI by applicants and reviewers. In July 2025, the NIH declared that applications that are substantially developed by AI tools (or contain sections substantially developed by them) would be ineligible for funding, owing to a lack of "original ideas". The funding agency UK Research and Innovation (UKRI) prohibits reviewers from uploading proposal content into generative AI tools, describing this as "a breach of confidentiality and integrity".

In our view, as well as being impossible to enforce, such bans are an inadequate response to the challenge at hand. A funding system that rewards well-crafted, cogently argued proposals creates an incentive to deploy whatever tools improve craft and cogency. Bans are likely to be widely ignored, because the incentive to use AI is strong and the probability of detection low. And prohibiting the use of AI tools can disadvantage people who do not speak English as a first language4.

Researchers using agentic AI to apply for funding might be maximizing their own chances of success, assuming that agents are deployed sensibly and with appropriate oversight. But the collective outcome is dysfunctional — a landscape in which all proposals meet the funding criteria and the differences between them are small.

A shift to agentic AI might affect peer review, too. A survey by the publishing company Frontiers (see go.nature.com/4swdoxt) suggests that more than half of researchers already use AI to assist with peer review, frequently against guidance.
Agents can read a proposal, retrieve relevant literature, assess methodological soundness and generate a structured evaluation before a person has finished reading the abstract.

When both proposals and reviews are mediated by agents that are trained on the same body of previously funded work, the system will no longer be evaluating the quality of ideas. It will be evaluating how well agents have learnt to simulate the ideas that funders have previously rewarded.

Rethinking funding

Proposals put forward for dealing with overburdened funding systems include the use of lotteries (allocating funding at random to grants that fall within a set quality band), and distributed models of peer review, in which researchers who submit grants are tasked with reviewing the applications of others. These measures tackle issues of volume, but they cannot help if measures of quality become unreliable. Nor can they prevent researchers from using agents to peer review applications.

In our opinion, funding bodies should consider shifting the emphasis of evaluation away from written proposals and towards the principal investigator, their research team and their previous and ongoing research programmes. Funders should invest in track-record verification to look at the research performance of an individual or group over a sustained period, along with interviews and portfolio-based assessments of sustained team performance.

Some funders are moving in this direction. In March, the UK Medical Research Council announced that it was bringing back interviews for all shortlisted grant applicants, a move designed to counteract AI augmentation of proposals.