JAMA: Institutional Biases Plague AI Research, Medicine Development
Medical advances derived from AI range from earlier breast cancer detection to spotting potential cancer signs during a colonoscopy. According to a story on ScientificAmerican.com, however, built-in biases can create new problems.
“At a time when the country is grappling with systemic bias in core societal institutions, we need technology to reduce health disparities, not exacerbate them. We’ve long known that AI algorithms that were trained with data that do not represent the whole population often perform worse for underrepresented groups. For example, algorithms trained with gender-imbalanced data do worse at reading chest x-rays for an underrepresented gender, and researchers are already concerned that skin-cancer detection algorithms, many of which are trained primarily on light-skinned individuals, do worse at detecting skin cancer affecting darker skin.”
A recent study published in the Journal of the American Medical Association (JAMA) finds that the disparities are only growing, despite awareness of the problem. Most of the data used to train algorithms came from only three states: California, New York and Massachusetts.
Limited medical data sharing is the crux of the problem, driven partly by fears of competition but also by privacy and technology-sharing concerns.
The story also points out that even now, after Congress passed a law in 1993 requiring diverse participation in research, companies developing COVID-19 vaccines were struggling to recruit diverse pools of trial participants. As the magazine noted in a separate story on the new vaccines, if some of the thousands of human volunteers needed to test coronavirus vaccines had been replaced by digital replicas, it could have sped the process tremendously, and potentially saved lives. Virtual patients may be the next big thing in medical research.
Even if they have diverse data, companies may still find it difficult to eliminate bias, according to the story.
“Bias in AI is a complex issue; simply providing diverse training data does not guarantee elimination of bias. Several other concerns have been raised—for example, lack of diversity among developers and funders of AI tools; framing of problems from the perspective of majority groups; implicitly biased assumptions about data; and use of outputs of AI tools to perpetuate biases, either inadvertently or explicitly. Because obtaining high-quality data is challenging, researchers are building algorithms that try to do more with less.”
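The subgroup performance gap described above is typically surfaced by evaluating a model's accuracy separately for each group rather than in aggregate. The sketch below is purely illustrative and not from the JAMA study or the Scientific American story: it trains a classifier on synthetic, group-imbalanced data (hypothetical groups "A" and "B" with slightly different underlying patterns) and reports accuracy per group.

```python
# Illustrative only: a disaggregated evaluation on synthetic,
# group-imbalanced data. The group names, sizes, and model are
# assumptions for demonstration, not details from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature binary task; `shift` tilts the class boundary so the
    # optimal decision rule differs between the two groups.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group "A" dominates the training data; group "B" is underrepresented.
X_a, y_a = make_group(5000, shift=0.2)
X_b, y_b = make_group(250, shift=1.5)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * len(y_a) + ["B"] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group
)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# A single overall number can look fine while hiding a subgroup gap.
print(f"overall: {accuracy_score(y_te, pred):.3f}")
for g in ("A", "B"):
    mask = g_te == g
    print(f"group {g}: {accuracy_score(y_te[mask], pred[mask]):.3f}")
```

The point of the sketch is that the aggregate score alone would mask the disparity; only reporting performance per group makes the underrepresented group's worse results visible.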
Read more at scientificamerican.com