From reading radiological images to predicting tuberculosis from cough sounds, and from disease mapping to diagnosing conditions like cancers and silent heart attacks, AI has already found its way into several health applications. The aim is to detect diseases more quickly, reach people who are unable to access care and bring down the cost of healthcare.

Yet there are several challenges to conducting trials and regulatory oversight of these products. The biggest challenge for regulators is understanding exactly how an AI arrives at a diagnosis or treatment recommendation, known as the black box problem. Without transparency, clinicians may find it difficult to trust the tool, and patients may not receive adequate explanations for AI-driven decisions, complicating informed consent. The pace of AI development also outstrips the speed at which regulatory bodies can update their frameworks. This can stifle innovation by creating uncertainty for developers, yet premature approval poses risks to patient safety. Then there are the issues of transmitting bias from the data that is fed in, ensuring data privacy and defining liability. But the most significant is real-world trialling.

A battle of timelines

“Models change very fast … and it takes about six months to design a good randomised controlled trial, two years to conduct the trial and around 18 months to publish it. If we start a randomised controlled trial today, we will get the results maybe by 2029. The policy cycles do not track with the publication cycle — an outbreak or a political issue may drive the deployment of an AI solution,” says Zameer Brey, deputy director of technology diffusion at the Gates Foundation, who spoke at the recently concluded AI Impact Summit in Delhi. He calls for a middle ground that does not compromise on evidence generation but at the same time gives quick policy indications.

He adds that models are better than clinicians at certain tasks, yet they do not hold up in field trials.
A major reason for this is the trust that clinicians have in the tool. Speaking on the implementation of a decision support algorithm, an AI tool that guides physicians in diagnosing patients, he says, “The adoption was stuck at only 4-6 per cent till another physician spoke about errors that were being made by humans, and that pushed up the adoption to over 60 per cent.”

But trust is important, say health experts, who insist models will always have to work under human supervision. “We are not talking about autonomous use in healthcare settings, only supervised use. There will always be a doctor or healthcare professional in the loop. So, the AI will help in reaching a diagnosis or spotting an abnormality on radiological imaging, but the doctor will take the final call. They will be signing the final report, so they are the ones responsible for it,” says Dr Harsh Mahajan, whose diagnostic chain, Mahajan Imaging, has been working on developing such models.

Most AI models are trained and then tested on available health data to check their accuracy. “There are several such studies that have been done and published. But a randomised controlled trial is difficult. Comparing the accuracy of the model to that of a physician is only necessary if we want an autonomous model — one that is designed to work reliably without a physician or health worker in the loop. And, to my knowledge, there is only one autonomous AI model in this field, developed by a company called Oxipit,” says Dr Mahajan.

The need for field studies

After testing on available health data, most algorithms are tested in the field, with real patients being diagnosed both by a physician and by the model, according to Dr Ashok Sharma, additional professor at AIIMS, who is working on developing an application to ease the diagnosis of certain cancers.
“Most AI tools are first trained and verified on available data and then deployed as a trial in the healthcare setting, which tests their capability in the field and improves diagnosis over time. Take our algorithm, for example. When we had trained it on the available patient data, its accuracy was only around 39-40 per cent. Now, with its use, the accuracy has gone up to 89 per cent. Yet, even if it gets to 99 per cent, a physician will always have to be involved,” he says.

When it comes to healthcare, a high level of precaution is needed because it can be a matter of life or death. “So, there will always have to be a physician involved, but AI models can aid them. The models may be able to detect abnormalities that are not easily observed by the naked eye. They learn from several doctors and combine learnings from all of them to give a diagnosis,” Dr Sharma adds.

What is more important, he says, is following a responsible framework. “We are working with real patient data, so we have to be extremely cautious that it does not get into the public domain. Neither should it be available to just anyone, only to those who have received proper ethics approval from their organisations,” he insists.

Trial platform and a regulatory framework

To solve this issue of data availability, the Health Ministry is now working with IIT Kanpur to create a federated patient dataset for the training and validation of healthcare AI models. Called the Benchmarking Open Data Platform for Health AI (BODH), the platform will hold anonymised data collected from healthcare facilities across the country.

Manindra Agarwal, director of IIT Kanpur, says the key challenge for his team was that the real data on which the AI had to be trained was very fragmented, spread across various health centres in small amounts. “The data also needs to be protected well because of privacy concerns. So, sharing it is not easy. BODH is a federated platform that will collect data and keep it secure.
Developers can send their model to train it on-site. The model developer does not get access to the data. The data holders will be incentivised to upload their data because they can get credit and use it further.”

India, at present, does not have a specific regulatory framework for AI in healthcare; regulation is still evolving the world over. The Health Ministry recently released a framework which said that such applications would have to be monitored over their lifetime. A life-cycle approach is needed for effective AI use, beginning with defining the problem, moving through the collection, storage and management of data, and verification and validation, and topped by real-world performance.
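The federated principle Agarwal describes — the model travels to the data, and only trained weights leave each site — can be illustrated with a minimal sketch. Everything here is a toy assumption for illustration (a linear model, three simulated hospital datasets, simple federated averaging); it is not BODH's actual design or API.

```python
# Toy illustration of federated training: each site trains locally on
# private data, and only model weights (never the data) are shared.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=50):
    """Train on one site's private data; only the weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three hypothetical hospitals, each holding its own private dataset
# drawn from the same underlying relationship.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    sites.append((X, y))

# Federated rounds: the server sends the current weights out to every
# site and averages the updated weights that come back.
w_global = np.zeros(2)
for _ in range(10):
    updates = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(updates, axis=0)

print(w_global)  # should end up close to true_w = [2.0, -1.0]
```

In a real deployment the averaging, access control and incentive mechanics would be far more involved, but the core property is the one shown: the raw patient records never leave the data holder.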