AI to Complement – Not Compete With – Physicians’ Diagnostic Skills
It was big news earlier this summer. As reported by Forbes, “This AI Just Beat Human Doctors On A Clinical Exam.” The story unfolded on a stage in London, where Babylon Health demonstrated its artificial intelligence software and announced that it had scored higher than the average pass mark for trainees on the MRCGP, an exam that measures trainee general practitioners’ ability to diagnose. According to Forbes, Ali Parsa, Babylon’s founder, called the moment “a world first and a major step towards his ambitious goal of putting accessible healthcare in the hands of everyone on the planet.”
The demonstration in London showcased Parsa’s vision for using the chatbot to prevent unnecessary doctor visits, free physicians from note-taking and diagnose common illnesses. It highlighted the AI’s diagnostic and transcription capabilities, along with its use of a “facial tracking system that told the doctor if she was feeling confused, worried or neutral, based on the movements of 117 muscles in her nose, lips or eyebrows.” It also illustrated how the system could work with a live physician, who – in this case – concurred with the chatbot’s diagnosis, asked the patient more questions and sent a prescription to her pharmacy. Here, AI made the doctor’s job easier; it did not replace the physician’s own clinical judgment.
Its use in the U.S. – which Parsa predicts will start in 2019 – depends on finding a “big-name American customer,” something he sees as increasingly likely as the chatbot gets “smarter and more reassuring.” But what will it take for patients to trust – and be reassured by – technology? We think the potential for a system like Babylon’s lies in how it works with physicians… not in how it stacks up against them in a standardized test.
It’s also important to note that Babylon’s AI assessment of the patient takes place via a series of automated questions asked during a video conference. While that environment (i.e., telemedicine) is well suited to certain ailments and patient concerns, other types of diagnoses require a physical exam.
But as artificial intelligence and other technologies arrive, integrating them into our work requires more than wanting to be sure they complement (rather than compete with) the skills of physicians. That’s why Stanford Presence has been working to bring together industry leaders to consider the possibilities and potential perils of AI. For example, an August 2018 symposium – “AI in Medicine: Inclusion and Equity (AiMIE)” – examines the risk of bias, and a previous symposium explored the kind of “purposeful foresight” needed so that as AI contributes to medicine, medicine remains “fundamentally an endeavor of humans caring for other humans.”
In every case – whether it’s preventing bias or aiding with a diagnosis – the smarter the AI the better… for both patients and physicians.