Artificial intelligence has advanced rapidly in many fields, including medicine. However, a recent study reveals that while AI models excel at professional medical exams, they struggle with a crucial part of being a physician: interacting with patients to gather medical information and reach accurate diagnoses.
Research conducted by Pranav Rajpurkar and Shreya Johri at Harvard University introduced a new evaluation benchmark called CRAFT-MD, which assesses clinical AI models’ reasoning abilities through simulated doctor-patient conversations. These conversations were based on 2000 medical cases from US medical board exams, replicating real-life scenarios where patients may not disclose crucial information unless prompted.
The study used OpenAI’s GPT-4 model as a “patient AI” that interacted with the clinical AI being tested. Results showed that leading AI models, including GPT-3.5, GPT-4, Meta’s Llama-2-7b, and Mistral AI’s Mistral-v2-7b, performed significantly worse when diagnosing through conversations than when given written case summaries.
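To make the setup concrete, the sketch below shows one way such a conversation-based evaluation could be scripted: a “patient” model is seeded with a hidden case vignette and answers only what it is asked, while a “doctor” model questions it and eventually commits to a diagnosis. The prompts, model names, turn limit, and stopping rule here are illustrative assumptions, not the study’s actual CRAFT-MD implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat(model, messages):
    """Send one turn to a model and return its reply text."""
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


def run_simulated_consultation(case_vignette, doctor_model="gpt-4",
                               patient_model="gpt-4", max_turns=6):
    """Alternate doctor questions and patient answers, then extract a diagnosis.

    Hypothetical protocol for illustration only; the real benchmark's prompts
    and evaluation rules are defined in the CRAFT-MD study itself.
    """
    doctor_msgs = [{"role": "system", "content": (
        "You are a physician. Ask the patient one focused question per turn. "
        "When you have enough information, reply with 'DIAGNOSIS:' followed by "
        "your most likely diagnosis.")}]
    patient_msgs = [{"role": "system", "content": (
        "You are a patient described by the case below. Answer the doctor's "
        "questions briefly and only reveal details you are directly asked "
        "about.\n\n" + case_vignette)}]

    for _ in range(max_turns):
        # Doctor asks a question or commits to a diagnosis.
        doctor_turn = chat(doctor_model, doctor_msgs)
        doctor_msgs.append({"role": "assistant", "content": doctor_turn})
        if "DIAGNOSIS:" in doctor_turn:
            return doctor_turn.split("DIAGNOSIS:", 1)[1].strip()

        # Patient answers only what was asked, based on the hidden vignette.
        patient_msgs.append({"role": "user", "content": doctor_turn})
        patient_turn = chat(patient_model, patient_msgs)
        patient_msgs.append({"role": "assistant", "content": patient_turn})
        doctor_msgs.append({"role": "user", "content": patient_turn})

    # Force a final answer if the turn budget runs out.
    doctor_msgs.append({"role": "user",
                        "content": "Please give your final DIAGNOSIS now."})
    return chat(doctor_model, doctor_msgs).split("DIAGNOSIS:", 1)[-1].strip()
```

The resulting diagnosis could then be compared against the case’s reference answer, just as the accuracy figures reported below compare conversation-based answers with those produced from structured summaries.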
For instance, GPT-4’s diagnostic accuracy dropped from 82% with structured case summaries to 26% in simulated patient conversations. Despite being the top performer, GPT-4 gathered a complete medical history in only 71% of simulated conversations and did not always provide accurate diagnoses even then.
Eric Topol of the Scripps Research Translational Institute noted that evaluating AI’s clinical reasoning through patient conversations is a more practical approach than traditional exams. However, Rajpurkar emphasized that success in simulated scenarios does not mean an AI outperforms human physicians, given the complexities of real-world medical practice.
While AI shows promise in supporting clinical work, it is not a substitute for the holistic judgement and experience of human doctors. The study underscores the importance of ongoing research to enhance AI’s capabilities while acknowledging the irreplaceable role of human healthcare providers.