As governments begin deploying Artificial Intelligence (AI) tools in public administration, national security and policymaking, questions about their safe use and accountability have taken centre stage. The issue came into focus in the U.S. after reports revealed a dispute between the Pentagon and the AI company Anthropic, which refused to remove safeguards put in place to prevent mass surveillance and the use of autonomous weapons. The incident underscored a deeper tension between governments seeking to deploy AI systems and the companies that control them. As states collaborate more closely with AI companies, who ultimately governs the systems that govern us? Isha Suri and Raman Jit Singh Chima discuss the question in a conversation moderated by Areena Arora. Edited excerpts:

What kinds of state capacity could AI actually strengthen? And where should governments be cautious about relying on it?

Raman Jit Singh Chima: It depends on the problem a government is trying to solve and who it is dealing with. AI can sometimes have transformative effects, such as improving access to data or enabling better analysis, but it is often deployed without clarity on the problem, the data available, or the costs involved. There are also high-risk areas, such as facial recognition, surveillance, and certain health applications, where misuse can lead to significant harm. In such cases, a ‘do no harm’ principle should apply, and in some contexts, outright prohibition may be justified.

Isha Suri: AI systems tend to work best in well-scoped use cases. For example, during the COVID-19 pandemic, image-processing tools could distinguish between different types of lung infections because the problem was clearly defined and the parameters were known.

Governments should begin by defining the objective of deploying AI. That clarity is often missing. Before adopting any system, they should ask whether AI is necessary, whether less intrusive alternatives exist, and what the risks are. A necessity and proportionality test is critical. AI should not be adopted simply because it exists.

If sharing more data with AI systems makes government services faster and more efficient, why should people worry about privacy? What harms are on the table?

Isha Suri: We need to question what ‘efficiency’ means and for whom. Evidence on productivity gains is still weak. In many cases, efficiency claims translate into labour substitution. There is also a lack of transparency: data collected for one purpose can be used elsewhere. Welfare data can be used for policing or for denying benefits.

The idea that citizens are comfortable sharing data assumes informed consent, which is often not the case. Many people do not fully understand how their data is used. In countries like India, what is often framed as cultural comfort with lower privacy is actually a function of low digital literacy.

So individuals may make decisions without full information, while the long-term consequences unfold. That is why the state must anticipate harms and build safeguards at the design stage, not after deployment.
Raman Jit Singh Chima: The idea that better AI requires more personal data is flawed. It benefits certain commercial actors, but it is not technically necessary. More data is not always better. It can be inefficient, risky, and beneficial mainly to companies that rely on large-scale data extraction and compute-heavy infrastructure. There are alternatives. Smaller models, for instance, use limited data and can produce clearer outputs. There are also on-device AI systems that do not require constant data transfer to large data centres. We should challenge the assumption that handing over data is the price of better services.

AI companies say they need access to large public datasets to build better systems. Should governments treat these datasets as strategic national assets, or can they be shared with private companies to accelerate development?

Raman Jit Singh Chima: We should be very wary of the idea that data is something to be monetised. Opening data up to private actors creates risks for privacy, security and sovereignty. It also repeats past mistakes in which public systems were handed over without adequate safeguards.

Isha Suri: Framing data as a national or economic asset is problematic because it shifts attention away from its nature as a fundamental right tied to privacy. There are also issues of consent: citizens may provide data for one purpose, but not for its use in commercial partnerships. We also need to question who is driving the demand for access to large datasets. Often, it is private companies with clear economic incentives.

Ultimately, what we risk is a situation where public data and public money enable private extraction of value, with limited accountability. That makes it critical for governments to step back and evaluate who benefits from such arrangements and whether they align with the public interest.

Governments have always worked with the private sector. Should we treat AI differently, or fear it more, in that partnership?

Raman Jit Singh Chima: We should learn from past digital infrastructure projects. Systems should not be deployed first and regulated later. There is also a risk that technology becomes an end in itself: governments may expand systems not because they improve outcomes, but to justify prior investments. Large partnerships can also lock governments into costly and inflexible arrangements.

Isha Suri: Projects like Aadhaar and DigiYatra should be seen as cautionary examples. Trade-offs in welfare delivery cannot be treated lightly; even small exclusion rates can have severe consequences. There are also accountability gaps when public infrastructure is run through hybrid or private entities.

If other governments begin adopting AI and it becomes inevitable globally, should India adopt it as well?

Raman Jit Singh Chima: Governments should not follow global trends blindly. AI should be used only where it advances the public interest and democratic values. Policymakers should focus on practical concerns such as data protection, procurement and market concentration.

Isha Suri: The idea of inevitability is overstated and often driven by industry narratives. Governments should define their own objectives instead of reacting to the fear of missing out.
If other governments adopt AI more fluently, does India risk falling behind?

Raman Jit Singh Chima: If we are concerned about artificial general intelligence, the focus should be on building foundational scientific capacity. The current industry narrative suggests that supporting large AI companies and their infrastructure is the way to get there. That may actually be a distraction from building real capabilities. If you look at India’s past, investments in core science enabled major programmes in space and nuclear development. A similar approach is needed here if we want sovereign technological capability.

Isha Suri: We also need to question what it means to “fall behind”. AI is not just an application; it is a full stack that includes compute, data and models. If domestic systems are built on infrastructure or partnerships controlled by large global companies, then we are not truly building indigenous capability. We are simply layering on top of existing dependencies. This creates risks of lock-in and long-term dependence on foreign technology monopolies. It also raises concerns about sovereignty and control. At the same time, much of the hype around artificial general intelligence and superintelligence may be diverting attention from present-day harms, including labour impacts, environmental costs and the concentration of power.

So where does that leave us?

Raman Jit Singh Chima: Be cautious about how and where AI is deployed in government. Focus on clear use cases, avoid unnecessary dependence on large private players, and prioritise public interest, security and long-term sustainability.

Isha Suri: Governments need to clearly define their objectives before adopting any technology. Ask whether AI is necessary, and whether the risks are proportional to the benefits.

Isha Suri is Research Lead at the Centre for Internet and Society. Raman Jit Singh Chima is Asia Pacific Policy Director and Senior International Counsel at Access Now, a nonprofit committed to defending digital rights.

Published - March 20, 2026 01:26 am IST