Epic Systems' sepsis alert algorithm appears to be riddled with problems.

Alert Fatigue And A Flawed Algorithm

Algorithms are not infallible, and some are far more problematic than others. A frightening new report reveals big mistakes being made by an algorithm in Michigan.

The system from Epic Systems reviews hospitalized patients who may have sepsis or may be susceptible to it. Sepsis, a complication of infection, is the number one killer in U.S. hospitals. So it's not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of U.S. electronic health records. The system sends alerts based on a proprietary formula that watches for signs of the condition in a patient's test results.

But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic’s system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice and frequently issued false alarms.

Karandeep Singh, an assistant professor at the University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care.

“They’re very widely used, and yet there’s very little published on these models,” Singh says. “To me that’s shocking.”

The study was published Monday in JAMA Internal Medicine. An Epic spokesperson disputed the study’s conclusions, saying the company’s system has “helped clinicians save thousands of lives.”

Epic's is not the first widely used health algorithm found to be failing at its mission to improve health care, or even to be actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to underestimate the needs of Black patients compared with white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients.

Racism rears its ugly head even in the digital universe of algorithms.


Nothing but the Company’s Promise

The researchers say their results suggest Epic's system wouldn't make a hospital much better at catching sepsis and could burden staff with unnecessary alerts. The company's algorithm missed two-thirds of the roughly 2,500 sepsis cases in the Michigan data. It would have alerted staff to 183 patients who developed sepsis but had not received timely treatment.
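The detection rate implied by those figures can be sanity-checked with simple arithmetic. This is an illustrative sketch only: the counts below are rounded from the article's reporting (roughly 2,500 cases, two-thirds missed), not taken from the underlying JAMA dataset.

```python
# Illustrative only: rounded figures from the article, not the study's raw data.
total_cases = 2500              # approximate sepsis cases in the Michigan data
missed = total_cases * 2 // 3   # cases the algorithm did not flag
detected = total_cases - missed

sensitivity = detected / total_cases
print(f"Detected {detected} of {total_cases} cases (sensitivity ~ {sensitivity:.0%})")
```

On these rounded numbers the system catches only about one case in three, which is what "missed two-thirds of sepsis cases" means in screening terms: a sensitivity near 33 percent.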

The success figures Epic Systems reported were far from the numbers the study turned up.

The Epic Systems spokesperson pointed to a conference abstract published in January by Prisma Health of South Carolina, based on a smaller sample of 11,500 patients, which found that Epic's system was associated with a 4 percent reduction in mortality among sepsis patients. Singh notes that the study used billing codes to define sepsis, not the clinical criteria medical researchers typically use.

Epic also says the Michigan study set a low threshold for sepsis alerts, which would be expected to produce a higher number of false positives; Singh says the threshold was chosen based on guidance from Epic.
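The dispute over thresholds reflects a general property of any risk-score alert system: the lower the alert threshold, the more patients are flagged, including patients who will never develop the condition. The sketch below is a generic illustration of that trade-off using randomly generated scores; it does not model Epic's proprietary formula.

```python
import random

random.seed(0)
# Hypothetical risk scores for 10,000 patients who never develop sepsis.
# Generic illustration of the threshold trade-off, not Epic's model.
scores = [random.random() for _ in range(10_000)]

for threshold in (0.9, 0.6, 0.3):
    false_alarms = sum(s >= threshold for s in scores)
    print(f"threshold {threshold}: {false_alarms} false alarms")
```

Lowering the threshold from 0.9 to 0.3 multiplies the false-alarm count severalfold in this toy setup, which is why the choice of threshold (and who recommended it) matters so much when judging the alert burden a system imposes on staff.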

The use of AI in the medical field must be tested and retested as it is applied to human care. With researchers like Dr. Singh keeping an eye on things, there is reason for cautious optimism over the long term.