Doctors Could Save Time by Using Generative AI in Medical Care, Despite Inherent Flaws

Robert Pearl, a Stanford medical school professor, says doctors need to start using ChatGPT in their practices now, according to a wired.com story. He says it's particularly useful for diagnosing certain cases.

“I think it will be more important to doctors than the stethoscope was in the past,” Pearl says. “No physician who practices high-quality medicine will do so without accessing ChatGPT or other forms of generative AI.”

Pearl, formerly CEO of Kaiser Permanente, a U.S. medical group with more than 12 million patients, compares ChatGPT's importance to that of the stethoscope, calling both essential tools. He holds that view despite major ethical concerns about using AI in medical care.

OpenAI’s ChatGPT challenges the supremacy of Google search and has the potential to displace many white-collar workers, such as programmers and lawyers. The medical profession, meanwhile, is looking into how it could help doctors do their jobs better.

“Medical professionals hope language models can unearth information in digital health records or supply patients with summaries of lengthy, technical notes, but there’s also fear they can fool doctors or provide inaccurate responses that lead to an incorrect diagnosis or treatment plan.”

One example of a possible problem with generative AI in medical care is the speed and confidence with which it issues information. One researcher who used the chatbot explained it this way:

Heather Mattie, a lecturer in public health at Harvard University who studies the impact of AI on health care, was impressed the first time she used ChatGPT. She asked for a summary of how modeling social connections has been used to study HIV, a topic she researches. Eventually the model touched on subjects outside of her knowledge, and she could no longer discern whether it was factual. She found herself wondering how ChatGPT reconciles two completely different or opposing conclusions from medical papers, and who determines whether an answer is suitable or harmful.

Mattie is less pessimistic now, viewing ChatGPT as a useful tool for tasks like summarizing text, as long as the user knows the bot may not be 100 percent accurate and can generate biased results. She particularly worries about how ChatGPT handles diagnostic tools for cardiovascular disease and intensive care injury scoring, which are known for race and gender bias. And she remains cautious about ChatGPT in a clinical setting, because it sometimes fabricates facts and may draw on outdated information that has been superseded by newer research.

Medical Care Is Human First

AI is not flawless by any means. Generative AI may be faster and store more knowledge, but it is still fallible.

And we think most people will agree that the best medical care still requires human-to-human contact. Using AI as an assistant will certainly be beneficial beyond most doctors' wildest dreams in med school, or over their objections today.

Some bioethicists are concerned that doctors will turn to the bot for advice when they encounter a tough ethical decision, such as whether surgery is the right choice for a patient with a low likelihood of survival or recovery.

“You can’t outsource or automate that kind of process to a generative AI model,” says Jamie Webb, a bioethicist at the Center for Technomoral Futures at the University of Edinburgh.

The article includes other information about integrating ChatGPT into medical care and some of the cautions doctors are already well aware of. But Dr. Pearl remains a firm believer in the merging of AI and medical care, as long as it is done with proper oversight and training for those employing it in medicine.

read more at wired.com