Healthcare Professionals Preferred Algorithm’s Optimistic Approach to Medicine
Seeflection.com has published a long list of stories about the addition of AI to the medical field. The improvement in medical care brought by AI has been remarkable and verifiable: AI is reading X-rays and MRIs with more accuracy than humans, diagnosing ailments successfully, and recommending treatments.
AI is also becoming an in-office assistant to many doctors, cutting down on office wait times and reducing inaccurate medical records. Few medical fields have not already incorporated AI into their daily routines.
Now comes a study that says ChatGPT may actually have a better bedside manner than some human physicians. Perhaps a little more empathy was accidentally programmed in?
The team randomly sampled 195 exchanges from Reddit’s AskDocs forum in which a verified doctor had responded to a public question. The original questions were then posed to the AI language model ChatGPT, which was asked to respond.
A panel of three licensed healthcare professionals, who did not know whether the response came from a human physician or ChatGPT, rated the answers for quality and empathy.
Overall, the panel preferred ChatGPT’s responses to those given by a human 79% of the time. ChatGPT’s responses were rated good or very good quality 79% of the time, compared with 22% of doctors’ responses; and 45% of ChatGPT’s answers were rated empathic or very empathic, compared with just 5% of doctors’ replies.
Granted, this was something like a blind taste test, pitting human physicians against an algorithm that has no ability to develop emotions. But the algorithm can draw on data that makes it sound empathetic about someone’s medical issue.
Some noted that, given ChatGPT was specifically optimized to be likable, it was not surprising that it wrote text that came across as empathic. It also tended to provide longer, chattier answers than human doctors did, which could have played a role in its higher ratings.
Dr. Christopher Longhurst, of UC San Diego Health, said: “These results suggest that tools like ChatGPT can efficiently draft high-quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health.”
Others cautioned against relying on language models for factual information because of their tendency to generate made-up “facts” — a behavior researchers have labeled “hallucination.”
In medicine, making up facts is not acceptable. Indeed, it can be deadly.
Prof James Davenport, of the University of Bath, who was not involved in the research, said: “The paper does not say that ChatGPT can replace doctors, but does, quite legitimately, call for further research into whether and how ChatGPT can assist physicians in response generation.”
AI will vastly improve our individual healthcare and thereby increase our longevity in ways we can’t begin to imagine yet. Miracles in medicine could become an almost daily occurrence with AI.
read more at theguardian.com