Artificial intelligence is knocking on the healthcare industry's doors. From computation-driven R&D for new drugs, to objective image analysis, to mass-scale disease screening and diagnosis, more and more applications of the technology are being pitched as solutions to some of the industry's most pressing challenges.
Regulators once stood as the primary hurdle for AI and associated technologies like machine learning and deep learning, but the dozens of algorithm clearances handed out by the FDA and equivalent bodies over the past few years suggest a shift in how these tools are perceived. To see widespread use in clinical settings, however, AI stakeholders still face the surmountable task of winning over the gatekeepers of patient care.
"It's fair to say to say that the regulatory barrier is ... no longer the current battle of AI in healthcare," Yann Fleureau, CEO and founder of diagnostic AI company Cardiologs, said during a HIMSS20 online seminar, "We have all these technologies that have been validated by the regulators because they have been considered to be safe enough to not create unnecessary patient harm, and the next step is adoption by the caregiver community."
Much like any other medical technology, AI needs to undergo extensive clinical validation before healthcare providers feel comfortable relying on it during regular care, Fleureau said. That means both proof-of-concept investigations from academics and real-world deployments headed by stakeholders.
"Most of the validation that has been done so far has been performed by teams that are not necessarily device manufacturers; [in other words] not teams that are going to bring the product to market," he said. "Proper clinical validation in the real world requires more time, and is the next thing for the AI in healthcare industry to tackle to yield widespread adoption."
But there are some components of patient care where even a final-stage, fully adopted algorithm would fall short, he continued. Chief among these is the decidedly human quality of empathy.
"Even if an AI reaches what I'd say is perfect performance [and] has a capacity to find all the biomarkers we think about, at the end of the day medicine is not just about technique," Fleureau said. "There is something way more important in the relationship between a patient and a doctor. And there's a very simple reason to that: it's because there are many decisions and questions in medicine for which there is no right or wrong answer."
As the clinical community weighs whether each AI tool passes muster, clinicians should remember that these technologies specialize in extracting information from large volumes of data. But not every case is typical, and care decisions are often influenced by a range of factors that extend beyond biomarkers or case histories.
In a not-too-distant future where AI clinical decision support tools are playing an active role in day-to-day care, Fleureau said, trained doctors will still be needed to ensure that individual patients don't get lost in the data.
"Doctors care about the individual, the person, and so it is fair to believe that in the future one of the key roles played by doctors ... will be to be the person in the healthcare organization to have transgression rights," he said. "[They will be] the only person in the room who will have the right to say, against all technology, against all omnipotent AI, 'I'm going against that decision, that score, that AI recommendation, because I am the person responsible for this patient.' And these two very human characteristics – empathy and transgression rights – will be, I am sure, two of the core principles of the future role of the doctors."