By Stephanie Lee
Sepsis is one of the biggest hospital hazards you may never have heard of. When the body overreacts to an infection, it can trigger widespread inflammation that in turn causes tissue damage and organ failure. It contributes to one-third to one-half of all deaths in US hospitals.
But because sepsis’s symptoms, like fever and difficulty breathing, can look a lot like those of other illnesses, it can be hard to detect, especially in the early stages. So a team at Banner Health, a hospital system in Phoenix, Arizona, turned to computer science for a solution. Maybe they could develop an algorithm that constantly monitored electronic health records and warned hospital staff in real time when patients were at high risk for sepsis.
It didn’t work. At least, not in the way Banner had hoped for.
Five years after Banner put the alert in place, it turns out not to have done a very good job of detecting sepsis. But the team behind it, led by Dr. Hargobind Khurana, discovered it had an unexpected upside: It was good at identifying patients who were much sicker than average, even if they didn’t have sepsis. Although the alert mostly failed at its main goal, it ended up serving a different, perhaps even more powerful purpose: steering clinicians toward their most vulnerable patients.
Algorithms have infiltrated almost every part of our lives, quietly yet deftly shaping both the mundane — calendar alerts, Facebook ads, Google predictions — and the vital. One of the most critical roles algorithms play is in electronic medical record software, which hospitals and doctors’ offices use to track and manage patients’ health and illnesses. Algorithm-based alerts are supposed to point out important information hidden in mountains of data — things like when someone’s medication needs to be refilled, or when a patient has an unusually high heart rate.
At their best, these alerts save busy doctors and nurses precious decision-making energy and draw attention to dangers that would otherwise go unnoticed. Too often, however, they dilute their usefulness and urgency by beeping, buzzing, and flashing tens of thousands of times a day, frequently without good reason.
Banner Health’s experiment demonstrates some of the core challenges of merging health care with 21st-century digital automation. It’s a continuing struggle, even though the US government has spent the past few decades pouring billions of dollars into digitizing medical records in hopes of making health care safer.
“It’s hard to create a good alert. And it’s hard to get buy-in from doctors and nurses because it’s ‘just another thing’ to do,” said Khurana, Banner’s director of health management. “How do we keep that balance of not just expecting them to do more and more work, but how do we make sure the patient is taken care of? … How good do the alerts need to be? … Everybody in the health field is trying to figure out the answer to this.”
Banner Health started working on the alert in 2009; Khurana joined two years later. At first, they looked at the common criteria for sepsis and organ dysfunction, like elevated breathing and heart rates, unusually high or low body temperature, and abnormal chemical levels in the blood and organs. Then they used these criteria to design an alert that continuously analyzed electronic medical record data from medical device sensors and other sources. The alert would fire whenever a patient showed at least two of four sepsis criteria and at least one of 14 signs of organ dysfunction, as long as both occurred within eight hours of each other.
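In rough terms, the alert’s logic boils down to a rule over time-stamped findings pulled from the medical record: fire when at least two sepsis criteria and at least one organ-dysfunction criterion appear within a single eight-hour window. Here is a minimal sketch of that rule in Python; the criterion names, data format, and windowing details are illustrative assumptions, not Banner’s actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical criterion names, stand-ins for Banner's actual lists.
SEPSIS_CRITERIA = {"high_heart_rate", "high_respiratory_rate",
                   "abnormal_temperature", "abnormal_white_cell_count"}
ORGAN_DYSFUNCTION_CRITERIA = {"elevated_creatinine", "low_platelets",
                              "elevated_bilirubin", "low_blood_pressure"}  # 14 in the real alert

WINDOW = timedelta(hours=8)

def should_fire(findings):
    """Return True if >= 2 sepsis criteria and >= 1 organ-dysfunction criterion
    occur within any single eight-hour window.

    findings: list of (timestamp, criterion_name) tuples from the record.
    """
    findings = sorted(findings)
    for i, (start, _) in enumerate(findings):
        sepsis_hits, organ_hits = set(), set()
        for t, name in findings[i:]:
            if t - start > WINDOW:
                break
            if name in SEPSIS_CRITERIA:
                sepsis_hits.add(name)
            elif name in ORGAN_DYSFUNCTION_CRITERIA:
                organ_hits.add(name)
        if len(sepsis_hits) >= 2 and organ_hits:
            return True
    return False

# Example: two sepsis criteria plus one organ-dysfunction sign within eight hours.
t0 = datetime(2012, 3, 1, 9, 0)
print(should_fire([(t0, "high_heart_rate"),
                   (t0 + timedelta(hours=3), "abnormal_temperature"),
                   (t0 + timedelta(hours=6), "elevated_creatinine")]))  # True
```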
Khurana added the alert to Banner Health’s Cerner electronic medical record software, which, like other programs, comes with its own built-in alerts (but did not at the time have a sepsis alert). From April 2011 to June 2013, the sepsis algorithm monitored more than 312,000 patients across the emergency, inpatient, and intensive care units of Banner Health’s 24 hospitals.
Not everyone was thrilled, Khurana recalls. Some nurses and doctors complained that not every patient flagged by the algorithm actually had sepsis, yet the caregivers still had to evaluate those patients, override the alerts, and document their decisions. Those steps may take just a few minutes, but the many false alarms made some staff members doubt whether the algorithm was working at all.
A colleague who helped develop the alert, Dr. Nidhi Nikhanj, recalls similar sentiments. “There certainly was a lot of skepticism, especially from those who had to actually answer the alerts, because of the extra workload it would bring on our shoulders,” he said.
These clinicians were grappling with a widespread phenomenon in health care dubbed “alarm fatigue.” In a 2013 report, the Joint Commission, a health care accreditation nonprofit, found that several hundred alerts can fire per patient per day, which amounts to tens of thousands of buzzes or beeps throughout an entire hospital every day. But 85% to 99% of these warnings don’t actually require clinicians to intervene, often because the alerts’ settings are too narrow or broad to correctly identify patients in need. Weary, overworked staff are then prone to ignore even alerts that point out signs of danger.
Alerts are best when they “continually tell physicians what they’re really not aware of,” said Lorraine Possanza, a risk management analyst at the ECRI Institute, a nonprofit that studies patient safety issues. “If you’re continuing to give them info they already know, the likelihood of them bypassing that alert, or ignoring the alert, or becoming overwhelmed by the number of alerts, is just much more likely.”
This May, nearly five years after the experiment started, Khurana’s team crunched the data. His colleagues’ complaints had been partly accurate: Not every patient the alert flagged had sepsis. In fact, only about one-quarter of flagged patients had the condition.
The patients identified by the alert did turn out, however, to be much sicker than average. This correlation wasn’t completely surprising, given that sepsis symptoms are known to overlap with those of other severe illnesses.
But Khurana was taken aback by just how sick this group was by virtually every measure. The algorithm identified a small minority of patients, about one-fifth, who accounted for nearly 90% of all deaths in the hospital. Compared with patients who didn’t set off the alert, those who triggered it had four times the chance of dying the next day. They were also more likely to suffer from chronic medical conditions, such as chronic kidney disease and chronic obstructive pulmonary disease, and they stayed in the hospital twice as long.
“We expected it would be sicker patients, and the rates would be higher, but not this high,” Khurana said. In other words, the data showed that the alert had the potential to bring sick, in-need patients to clinicians’ attention, just not quite the patients the Banner Health team had first set out to find.
Since the initial data analysis of the alert in early 2014, clinicians at Banner Health have come to perceive the algorithm in a new light, Khurana said. The question it used to prompt, as he put it, was: “‘Does a patient have sepsis?’ If not, move on.”
Now, he said, the alert inspires clinicians to take a second look and ask themselves, “Is the patient sicker than what I expected? Is there anything I can do to look at a patient’s care plan and do things differently?” Khurana said those things include moving a patient to an intensive care unit, checking in on them more frequently, and re-evaluating their diagnosis and treatment.
The team hasn’t yet crunched the numbers to know definitively whether, or how, these interventions are improving patient health. But after seeing the first set of results, staff members are more willing to embrace the algorithm’s potential. “Because of a new enthusiasm and renewed interest in this, we were able to get a lot more buy-in,” Khurana said.
While his team still wants to create a fully functioning sepsis alert, its main focus at the moment is refining the original algorithm to better identify sicker-than-average patients. One insight from the first round, for example, was that patients who triggered the alerts and had elevated lactic acid levels were likelier to die than alert-triggering patients with normal levels. (High levels can mean that the body’s tissues are not getting enough blood and oxygen.)
Taking this into account, the revamped alert doesn’t fire if a patient has normal lactic acid levels and generally stable vital signs. It’s too early to know whether the tweak has made the algorithm more accurate or helped save more lives; future studies will answer those questions. But there are promising signs so far. “This has helped us filter out a lot of the false positives,” Nikhanj said.
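In spirit, the refinement amounts to a suppression step placed in front of the original rule: if the patient’s lactic acid is in the normal range and their vital signs look stable, the alert stays quiet. Below is a hedged sketch that reuses the earlier `should_fire` function; the lactate cutoff and the stability check are illustrative stand-ins, not Banner’s real thresholds.

```python
NORMAL_LACTATE_MAX = 2.0  # mmol/L; illustrative cutoff, not Banner's actual threshold

def refined_should_fire(findings, lactate_mmol_per_l, vitals_stable):
    """Suppress the alert for patients with normal lactate and stable vitals;
    otherwise fall back to the original two-plus-one rule."""
    if (lactate_mmol_per_l is not None
            and lactate_mmol_per_l <= NORMAL_LACTATE_MAX
            and vitals_stable):
        return False
    return should_fire(findings)
```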
What Banner learned is that electronic health record alerts are near-perpetual works in progress — as unnerving as that may be for patients to hear. It’s likely that no one will ever come up with a set of algorithms that saves patients’ lives 100% of the time, but clinicians and programmers can’t stop trying to get there.
Depending entirely on algorithms was never the point, anyway. The goal, says John Gresham, a vice president at Cerner, the company that makes Banner Health’s electronic health record software, is to “guide the clinicians to make a different decision or to intervene more quickly. Not [to] take care out of the hands of the physician, but guide them to make a better clinical outcome.”