The Ministry of Electronics and Information Technology (MeitY) has unveiled governance guidelines for Artificial Intelligence (AI), which could serve as a blueprint for how India regulates the technology, balancing innovation with accountability and growth with safety.

The government had earlier signalled that it may not tighten the regulatory noose on AI just yet, as it believes the technology could help an innovation economy flourish in the country. As such, the guidelines recommend an India-specific risk assessment framework, a national AI incident database, and the use of voluntary frameworks and techno-legal measures, such as embedding privacy or fairness rules directly into system design.

The guidelines do, however, flag the need to carry out effective “content authentication”, as synthetically generated images, videos and audio flood the Internet. Here, the government has already proposed amendments to key legislation, which would require companies like YouTube and Instagram to add visible labels to AI-generated content.

The launch of the guidelines comes ahead of the India–AI Impact Summit 2026, which will be the first global AI summit hosted in the Global South.

Prof Ajay Kumar Sood, Principal Scientific Advisor to the Government of India, said at the launch, “The guiding principle that defines the spirit of the framework is… ‘Do No Harm’. We focus on creating sandboxes for innovation and on ensuring risk mitigation within a flexible, adaptive system.”

What the guidelines say

The report’s key recommendations are organised around six pillars: infrastructure, capacity building, policy and regulation, risk mitigation, accountability, and institutions.

Infrastructure: The report calls for expanding access to data and compute resources, including subsidised graphics processing units (GPUs) and India-specific datasets through platforms like AIKosh. It urges integration of AI with Digital Public Infrastructure (DPI) such as Aadhaar and the Unified Payments Interface (UPI). It also urges the government to incentivise private investment and adoption by MSMEs, with tax rebates and AI-linked loans.

Regulation: India’s approach will be agile and sector-specific, applying existing laws (such as the IT Act and the Digital Personal Data Protection Act) while plugging gaps through targeted amendments. The report rules out an immediate need for a standalone AI law, but calls for updates on classification, liability, and copyright, including consideration of a “text and data mining” exception. It also urges frameworks for content authentication to counter deepfakes and for international cooperation on AI standards.

S Krishnan, Secretary, MeitY, said at the launch, “Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people’s lives while addressing potential harms.”

Risk mitigation: As stated earlier, the report proposes an India-specific risk assessment framework to reflect local realities, along with the use of voluntary frameworks and techno-legal measures.

Accountability: A graded liability regime is proposed, with responsibility tied to function and risk level.
Organisations must adopt grievance redressal systems, transparency reporting, and self-certification mechanisms.

Institutions: The framework envisions a whole-of-government approach, led by an AI Governance Group (AIGG), supported by a Technology & Policy Expert Committee (TPEC), and technically backed by the AI Safety Institute (AISI).

Capacity building: The guidelines emphasise AI literacy and training for citizens, public servants, and law enforcement. They recommend scaling up existing skilling programmes to bridge gaps in smaller cities and enhance technical capacity across government institutions.

How the guidelines were prepared

The guidelines were drafted by a high-level committee of policy experts under the chairmanship of Prof Balaraman Ravindran of IIT Madras.

According to Abhishek Singh, Additional Secretary, MeitY, and CEO, IndiaAI, “The committee went through extensive deliberations and prepared a draft report, which was opened for public consultation. The inputs received are a clear sign of strong engagement across sectors. As AI continues to evolve rapidly, a second committee was formed to review these inputs and refine the final guidelines.”

Red flags over officials’ use of AI

Even as the government looks to encourage AI with little regulatory burden, there are internal red flags over data privacy and inference risks, especially when such systems are being used by key government officials.

What happens when a government officer uploads an internal note to an AI chatbot for a quick summary? When a police department asks an AI assistant to optimise CCTVs across a city? Or when a policymaker uses a conversational model to draft an inter-ministerial brief? Can the AI system analyse such prompts at scale, identify the user, infer their role, draw patterns across queries and predict strategic intent?

These questions are being debated in sections of the Union government, The Indian Express had earlier reported, amid growing concern about the rapid proliferation of generative AI (GenAI) platforms in India, especially those run by foreign firms and often bundled as free services with telecom subscriptions.

Two broad areas are under discussion. First, whether queries made by top functionaries — bureaucrats, policy advisers, scientists, corporate leaders and influential academics — could be mapped to identify priorities, timelines, or weaknesses. Second, whether anonymised mass usage data from millions of Indian users could help global firms.
One issue, sources said, is whether to “protect” official systems from foreign AI services.

Proposed AI content labelling

As per the draft amendments to the IT Rules, released last month, social media platforms would have to get users to declare whether uploaded content is synthetically generated; deploy “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations; and, where such a declaration or technical verification confirms that the content is synthetically generated, ensure that this information is clearly and prominently displayed with an appropriate label or notice.

If they fail to comply, platforms may lose the legal immunity they enjoy from liability for third-party content. Their responsibility would then extend to taking reasonable and proportionate technical measures to verify the correctness of user declarations and to ensuring that no synthetically generated information is published without such a declaration or label.

The action plan

- Empower the IndiaAI Mission, ministries, sectoral regulators and state governments to increase AI adoption, through initiatives on infrastructure development and increasing access to data and computing resources
- Adopt a graded liability system based on the function performed, the level of risk, and whether due diligence was observed
- Integrate AI with Digital Public Infrastructure (DPI) to promote scalability, interoperability and inclusivity
- Conduct safety testing and evaluations
- Increase data availability, sharing, and usability for AI development and adoption with robust data portability standards and data governance frameworks
- Encourage the use of locally relevant datasets to support the creation of culturally representative models