Google’s AMIE Outperforms Primary Care Physicians in Diagnostic Accuracy and Empathy
A chatbot known as Articulate Medical Intelligence Explorer (AMIE), built on a large language model (LLM) developed by Google, has shown potential in conducting medical interviews and diagnosing conditions, according to a story on nature.com.
In the study, the AI system rivaled or even surpassed human doctors’ ability to converse with simulated patients and to list potential diagnoses based on their medical histories. The chatbot was notably more accurate than board-certified primary-care physicians in diagnosing respiratory and cardiovascular conditions, among others, and ranked higher on empathy.
AMIE, however, is still experimental and has only been tested on actors simulating patient conditions. The researchers at Google Health emphasize caution and humility in interpreting the results. While the chatbot is not ready for clinical use, it is believed it could eventually help democratize health care.
The development of AMIE faced challenges, including a lack of real-world medical conversations for training data. The solution was to have the chatbot train on its own ‘conversations’. The model was fine-tuned with real-world data sets and prompted to simulate the roles of a patient with a specific condition, an empathetic clinician, and a critic evaluating the interaction.
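The self-play setup described above can be sketched, very loosely, as a loop in which one model is prompted to play each role in turn and a critic filters the resulting transcripts into new training data. All function names, prompts, and the scoring logic below are hypothetical placeholders, not Google’s actual AMIE pipeline:

```python
import random

def patient_turn(condition: str, turn: int) -> str:
    # Stand-in for the LLM prompted to act as a patient with a given condition.
    return f"Patient describes symptom {turn} of {condition}"

def clinician_turn(history: list) -> str:
    # Stand-in for the LLM prompted to act as an empathetic clinician.
    return f"Clinician asks follow-up question {len(history)}"

def critic_score(dialogue: list) -> float:
    # Stand-in for the LLM prompted to act as a critic rating the
    # interaction; here just a random quality score in [0, 1].
    return random.random()

def self_play_dialogue(condition: str, turns: int = 3) -> list:
    # Alternate patient and clinician turns to produce one transcript.
    history = []
    for t in range(turns):
        history.append(patient_turn(condition, t))
        history.append(clinician_turn(history))
    return history

def generate_training_data(conditions: list, threshold: float = 0.5) -> list:
    # Keep only the self-play dialogues the critic rates above the
    # threshold; these would then feed further fine-tuning.
    kept = []
    for condition in conditions:
        dialogue = self_play_dialogue(condition)
        if critic_score(dialogue) >= threshold:
            kept.append((condition, dialogue))
    return kept

data = generate_training_data(["asthma", "angina", "migraine"])
for condition, dialogue in data:
    print(condition, len(dialogue))
```

The key design idea, as reported, is that the same model supplies all three roles, so the system can manufacture unlimited practice conversations despite the scarcity of real-world medical dialogue data.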
In testing, 20 people trained to imitate patients had online consultations with AMIE and with 20 clinicians. The AI system matched or surpassed the physicians’ diagnostic accuracy in all six medical specialties considered and outperformed them on 24 of 26 criteria for conversation quality. However, the researchers note that this does not necessarily mean the language model is better than doctors at taking a clinical history.
The next step for the research is to conduct more detailed studies to evaluate potential biases and ensure the system is fair across different populations, with considerations for user privacy and data storage.
read more at nature.com