As India’s law enforcement agencies turn to AI, the potential benefits and risks


A predictive artificial intelligence (AI) tool called MahaCrimeOS AI, built with support from Microsoft, is assisting Maharashtra Police in investigations. The Delhi Police plans to double down on AI-assisted facial recognition technology. Deepfake detection software developed by a research and development body under the IT Ministry is being tested by law enforcement agencies. As generative AI takes off, India’s law enforcement architecture is keenly exploring the technology.

For police forces, the appeal of AI lies in its promise to process vast amounts of data faster than human investigators. AI systems can sift through call records, CCTV feeds, financial trails and digital evidence to spot patterns, link cases and flag suspects in real time, helping stretched forces manage rising workloads. In India, where cybercrime and online fraud are growing rapidly and police resources remain uneven across states, officials see AI as a way to boost efficiency, improve response times and modernise policing without large increases in manpower.

Critics, however, say AI-driven policing risks deepening existing biases in law enforcement. Because such systems rely heavily on historical police data, they can reinforce patterns of over-policing and lead to unfair targeting of certain communities. Concerns also persist around accuracy, transparency and the absence of clear legal safeguards. In India, broad exemptions for law enforcement under data protection laws further complicate accountability, even as AI-enabled tools like facial recognition become more widespread and intrusive in public spaces.

What is MahaCrimeOS AI?

MahaCrimeOS AI, unveiled earlier this month, is built on Microsoft Azure OpenAI Service and Microsoft Foundry, integrating AI assistants, automated workflows and cloud infrastructure. “With built-in access to India’s criminal laws through integrated AI RAG (Retrieval-Augmented Generation), and open-source intelligence, MahaCrimeOS AI helps investigators link cases, analyse digital evidence, and respond to threats faster and more effectively,” Microsoft said in a blog post. The Microsoft India Development Center (IDC) worked closely with Hyderabad-based CyberEye and MARVEL (Maharashtra Research and Vigilance for Enhanced Law Enforcement) to tailor the solution.

According to CyberEye’s website, its CrimeOS AI offers what it calls an end-to-end investigation workflow. This includes the ability to ingest PDFs, images, videos and handwritten notes across several regional languages to “auto-generate” cases and identify initial threat vectors. The AI also suggests investigation paths, along with standard operating procedures and the tactics, techniques and procedures to follow. It can also generate legal notices and analyse responses sent by telecom companies.

CrimeOS AI is also supposedly capable of carrying out “suspect profiling in real time”. Its engine “pivots” between internal case history and external data sources to construct “evolving” profiles of suspects.

In Maharashtra, the AI was deployed as a pilot across 23 police stations in Nagpur Rural, including the cybercrime police stations (CCPS), The Indian Express had earlier reported.

A senior official had explained to this paper that in complex cases such as narcotics, cybercrime, crimes against women or financial fraud, investigating officers often had to wait for senior officials to review files and provide instructions. With the AI copilot, an investigation plan is generated immediately, guiding officers on the next steps: which statements to record, which bank accounts to freeze and what social media profiles to examine.
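To illustrate the retrieval-augmented generation (RAG) approach Microsoft refers to, here is a minimal, hypothetical sketch: short statute snippets are indexed, the most relevant ones are retrieved for a query, and they are packed into a prompt so a language model answers from the retrieved text rather than from memory alone. The paraphrased section snippets, the word-overlap scoring and the stubbed model call below are illustrative assumptions; they are not part of MahaCrimeOS AI or Microsoft’s implementation.

```python
# Minimal, hypothetical sketch of retrieval-augmented generation (RAG).
# The paraphrased statute snippets, word-overlap scoring and stubbed
# model call are illustrative assumptions, not the MahaCrimeOS AI design.

# Toy corpus: short, paraphrased legal snippets keyed by section label.
CORPUS = {
    "BNS 318": "Cheating: dishonestly deceiving a person to deliver property or alter a valuable security.",
    "IT Act 66D": "Cheating by personation using a computer resource or communication device.",
    "BNS 336": "Forgery: making a false document or electronic record with intent to cause damage or injury.",
}


def score(query: str, passage: str) -> int:
    """Crude relevance score: how many query words appear in the passage."""
    query_words = set(query.lower().split())
    passage_words = set(passage.lower().split())
    return len(query_words & passage_words)


def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k highest-scoring (section, text) pairs for the query."""
    ranked = sorted(CORPUS.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]


def build_prompt(query: str) -> str:
    """Pack the retrieved sections into a prompt that grounds the model's answer."""
    context = "\n".join(f"[{section}] {text}" for section, text in retrieve(query))
    return (
        "Answer using only the sections provided below.\n"
        f"Sections:\n{context}\n"
        f"Question: {query}\n"
    )


if __name__ == "__main__":
    # A real system would send this prompt to a hosted model; here we only print it.
    print(build_prompt("Which sections may apply to cheating by personation using a computer resource?"))
```

In a production system, the crude word-overlap score would typically be replaced by vector embeddings, and the assembled prompt would be sent to a hosted model such as one served through Azure OpenAI.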
Predictive policing: new buzzword, old pitfalls?

As more law enforcement agencies adopt AI-based systems for investigative work, predictive policing has become a new buzzword. It refers to the use of artificial intelligence and data analytics by the police to anticipate where crimes may occur or who may be involved, based on patterns. Instead of reacting after an offence, the police use software to “predict” risks and deploy patrols or resources in advance.

These systems work by analysing large volumes of data such as past crime records, locations, times of incidents, CCTV feeds, call logs and, in some cases, social or behavioural data. Algorithms look for trends, such as neighbourhoods with repeated thefts at certain hours, and generate risk scores or heat maps. Police then use these insights to plan patrols, surveillance or preventive action.

However, predictive policing has raised serious concerns of unfair targeting, wrongful suspicion and increased surveillance of specific groups. There are also concerns around transparency, accuracy, data quality and the lack of clear laws governing how AI decisions are made or challenged.

For instance, more than a hundred individuals arrested in connection with the 2020 Delhi riots were identified by a facial recognition system. With AI, such tools have the potential to become more pervasive, and their outputs more readily treated as conclusive, posing challenges to citizens’ personal autonomy in public spaces.
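A toy example makes the “risk score” idea described above concrete. The sketch below simply counts past incidents per area and time-of-day bucket and flags high-count cells; the incident records and the threshold are invented for illustration, and real deployments use far richer data and models, which is precisely where the bias concerns arise.

```python
# Toy illustration of "risk scores" built from historical incident data.
# The incident records and the threshold are invented for illustration;
# real systems use far richer data and models.
from collections import Counter

# (area, hour of day) pairs for past reported incidents -- fabricated sample data.
incidents = [
    ("Ward A", 22), ("Ward A", 23), ("Ward A", 21),
    ("Ward B", 9), ("Ward C", 21), ("Ward A", 22),
]


def bucket(hour: int) -> str:
    """Group hours into coarse time-of-day buckets."""
    return "night" if hour >= 20 or hour < 6 else "day"


# Count incidents per (area, time bucket) cell -- a crude "heat map".
heat = Counter((area, bucket(hour)) for area, hour in incidents)

# Flag cells whose count crosses an arbitrary threshold as higher risk.
THRESHOLD = 3
for (area, period), count in heat.most_common():
    label = "HIGH" if count >= THRESHOLD else "low"
    print(f"{area} ({period}): {count} past incidents -> {label} risk")
```

Because such scores are derived only from where incidents were recorded in the past, areas that are already heavily policed tend to generate more records and therefore higher scores, which is the feedback loop critics point to.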
How are other agencies, govt departments using AI?

Last year, the IT Ministry revealed that as part of a research project titled ‘Design and Development of Software System for Detecting and Flagging Deepfake Videos and Images’, the Centre for Development of Advanced Computing (C-DAC) has developed software for detecting deepfakes, available via a web portal and a desktop application.

The desktop application, called ‘FakeCheck’, has been developed for users who need to detect deepfakes without access to the Internet. It has been provided to a few law enforcement agencies for testing and feedback, the ministry said in its 2024-25 annual report, without revealing the names of the agencies.

The Indian Express had earlier reported that Delhi Police was preparing to expand the use of AI-powered facial recognition technology (FRT) across the Capital, scaling up from pilot deployments in select districts. Under a proposed Integrated Command, Control, Communication and Computer Centre (C4I), AI systems will analyse live CCTV feeds to identify suspects, track missing persons and flag vehicles using automated number-plate recognition, alongside predictive analytics.

The integration of AI in FRT systems can potentially allow real-time scans of live environments while drawing inferences from several other databases at the same time. Privacy experts fear that such real-time analytics could allow law enforcement agencies to build profiles of people at scale.

The Bengaluru police are using a new AI-based system to spot firecracker use during festivals and big events. The technology watches live CCTV feeds from hundreds of cameras around the city and can detect flashes, smoke and unusual crowd activity linked to firecrackers. When the system spots a violation of the firecracker ban, it sends alerts with location and video to the control room and nearby patrol teams for quick action. The system was first used during Diwali, when officials say it helped address over 2,000 incidents, and it will be active again on New Year’s Eve.