
BMC Neurologist Uses AI to Turn ICU Data into Life-Saving Insights for Patients

August 1, 2025

By Caitlin White

Image: Vital sign and pupillometry monitoring machines in the foreground, with a patient blurred in an ICU bed in the background. (Getty Images)

By mining millions of ICU data points, Dr. Ong’s AI-driven approach helps clinicians detect neurological decline earlier and personalize patient care in life-or-death moments.

“There are about four million data points per day generated during a single ICU admission,” says Dr. Charlene Ong, a neurologist and researcher. Some of that data is structured, like vital signs and lab work, or continuous, like waveforms. Some of it is unstructured, like doctors’ notes or radiology and echocardiogram reports. Not all of it, Ong says, is properly documented, and it can even contradict itself.

For most, this ocean of numbers and notes would seem unmanageable, even overwhelming. For Dr. Ong, it’s a treasure trove of data that could help clinicians make decisions about patient care—if you know where to look.

“One of the jokes I make about my research is that I dig through the trash of what other researchers don’t want,” Ong says. It’s how she finds the biomarkers she’s interested in exploring further: the measurable signs in the body that tell doctors about a patient’s health.

Pupillometry, the measurement of the size and reactivity of the pupil, is one of those biomarkers that fascinated her.

“Even one of my early mentors, who is wonderful, said, ‘Why are you doing this?’” Dr. Ong says, laughing. “Typically as we’re doing our analysis, we delete eye movements detected on the EEG because they’re treated as artifact. But I was still interested, and I went along with it.”

Charlene Ong, MD, MPHS

This year, Dr. Ong has published a series of journal articles that relate non-invasive, de-identified data, including pupillometry, electro-oculography, and unstructured notes, to outcomes after acute brain injuries such as ischemic stroke, traumatic brain injury, and hypoxic injury from cardiac arrest. Her research seeks to answer: “How much data are we leaving on the table and not actually leveraging when these are life-or-death questions and situations?”

But this research, exploring massive data sets in life-or-death situations, wouldn’t be possible without the technological advances of machine learning and artificial intelligence (AI) in healthcare. 

Detecting the undetectable using new technologies

In neurology, the stakes are rarely higher than in the neurocritical care unit, where patients with acute brain injuries teeter between recovery and irreversible decline. Traditionally, clinicians have relied on risk prediction models—statistical tools that use early data to estimate the likelihood of outcomes like swelling in the brain or the need for emergency surgery.  

Machine learning—a subset of artificial intelligence—thrives on complexity. It can sift through millions of data points, searching for patterns invisible to the human eye. In Dr. Ong’s world, this means not just tracking a patient’s vital signs or lab results, but also subtle changes in things like eye movement, as captured by continuous waveform monitors. All of these models are used in a HIPAA-compliant manner: they work strictly with de-identified data, such as clinical waveforms, that cannot be traced back to the patient.

“We’ve shown that we can see subclinical changes in pupillometry over time—changes that are not detectable by clinicians, whether because the patient’s eyes are closed or because the changes aren’t visible to the human eye. These are changes that happen before the thresholds considered abnormal by manufacturers are met,” Ong explains. This means clinicians can track and flag subtle signs of deterioration before they become catastrophic. In one case, Dr. Ong noticed a patient’s neurologic pupil index—a composite score of how the eye reacts to light—was dropping, even though it hadn’t yet reached the “abnormal” range. She used that data to confer with the neurosurgeons on the team. Combined with imaging, the patient’s risk factors, and the clinicians’ expertise, they were able to intervene before the patient’s brain function deteriorated.
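Her published models aren’t detailed in this article, but the core idea she describes (flagging a sustained downward trend in the neurologic pupil index before any single reading crosses the manufacturer’s “abnormal” cutoff) can be illustrated with a minimal sketch in Python. The cutoff, window length, and slope threshold below are illustrative assumptions, not values from her research.

```python
import numpy as np

# Illustrative thresholds -- assumptions for this sketch, not values from Dr. Ong's models.
NPI_ABNORMAL = 3.0      # manufacturer-style "abnormal" cutoff (assumed)
SLOPE_ALERT = -0.05     # NPi units lost per hour that triggers a trend alert (assumed)
WINDOW_HOURS = 12       # how much recent history to fit the trend over (assumed)

def npi_trend_alert(times_h, npi_values):
    """Flag a sustained downward NPi trend even while every reading is still 'normal'.

    times_h    -- measurement times in hours since admission
    npi_values -- neurologic pupil index readings at those times
    Returns (alert: bool, slope in NPi units per hour).
    """
    times_h = np.asarray(times_h, dtype=float)
    npi_values = np.asarray(npi_values, dtype=float)

    # Keep only the most recent window of readings.
    recent = times_h >= times_h[-1] - WINDOW_HOURS
    t, y = times_h[recent], npi_values[recent]
    if len(t) < 3:
        return False, 0.0  # not enough data to estimate a trend

    # Fit a straight line; the slope is the NPi change per hour.
    slope = np.polyfit(t, y, 1)[0]

    # Alert on a steep decline even if no single value is below the cutoff yet.
    still_normal = np.all(y >= NPI_ABNORMAL)
    return bool(slope <= SLOPE_ALERT and still_normal), float(slope)

# Synthetic example: readings drift from 4.6 toward 3.4 over 12 hours.
alert, slope = npi_trend_alert([0, 2, 4, 6, 8, 10, 12],
                               [4.6, 4.5, 4.3, 4.1, 3.9, 3.6, 3.4])
print(alert, round(slope, 3))
```

In this toy example every reading is still in the “normal” range, yet the fitted slope already signals a decline worth a closer look, mirroring the kind of early warning she describes.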

“If we can identify impending deterioration, we’re not leaving patients with hours of compression of vital brain structures,” Dr. Ong says. “That can change everything about their ability to recover.” 

Dynamic risk prediction with a human touch

The traditional approach to risk in medicine is hypothesis-driven: clinicians look for what they expect to find, based on experience and training. AI allows for a more empirical approach, letting the data itself reveal patterns and possibilities.

“There’s this question in medicine: Do you think about things in a hypothesis-driven manner, or do you let the data tell you what’s going on?” Dr. Ong says. “You need a combination of both. We use our clinical experience to guide us, but machine learning can uncover interactions between variables that we’d never think to look for.” 

“While I’ve found that we think AI can remove a lot of human effort, you absolutely require a hybrid approach and a really thoughtful approach to get the best outcomes, the best of both worlds.”

Charlene Ong, MD, MPHS, Neurologist at Boston Medical Center

For patients, this means risk prediction is no longer a static calculation done at admission. It becomes a living, breathing process—one that adapts as new data streams come in and clinicians use their expertise, judgment, and experience to interpret it.
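To make the contrast concrete: a static model scores the patient once at admission, while a dynamic approach re-runs the model whenever new observations arrive. The sketch below uses a hypothetical logistic scoring function with made-up weights and feature names; it is not Dr. Ong’s model, only an illustration of re-scoring as the data updates.

```python
import math

def risk_score(features):
    """Hypothetical logistic model; the weights and features are illustrative only."""
    weights = {"age": 0.03, "gcs": -0.25, "npi": -0.60, "midline_shift_mm": 0.30}
    z = -1.0 + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Admission snapshot vs. the same (synthetic) patient 18 hours later.
at_admission = {"age": 62, "gcs": 13, "npi": 4.4, "midline_shift_mm": 2.0}
hours_later  = {"age": 62, "gcs": 10, "npi": 3.5, "midline_shift_mm": 5.0}

# The estimate rises as the exam and imaging worsen -- the "living" part of the prediction.
print(f"admission risk: {risk_score(at_admission):.2f}")
print(f"updated risk:   {risk_score(hours_later):.2f}")
```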

“While I’ve found that we think AI can remove a lot of human effort, you absolutely require a hybrid approach and a really thoughtful approach to get the best outcomes, the best of both worlds,” Dr. Ong says. 

She likens the supposed “black box” of machine learning to the intuition clinicians develop over years of practice. “When they train you as a young doctor, nobody tells you to put an equation in your head. You learn from your lived experiences, which are uninterpretable. That’s a black box, too.” 

The personal stories in the data points

For Dr. Ong, the impact of AI-driven risk prediction isn’t abstract—it’s deeply personal. She pulls a complicated graph up on screen, saying, “I see personal stories in the data.” She recalls one patient whose pupil measurements she tracked obsessively, scan after scan, every 12 hours. She remembers the room they were in.

“She had a waxing and waning mental status. She was 40 years old. You can see where she died, where she herniated,” she says, showing a graph of pupillometry data from a patient she cared for many years ago. “All these measurements—none of them alone would have told us what was coming. But when you look at the trend over two days, the pattern is different. Is this someone we could have made a difference for with this new technology? That’s huge to me.”

These aren’t just numbers—they’re stories, each data point a clue in the narrative of a patient’s life and recovery.

Integrating AI into future care for providers and patients 

The future Dr. Ong envisions is one where AI doesn’t replace clinicians but empowers them. She and her colleagues are developing large language models that can extract critical information—like NIH Stroke Scale scores—from the unstructured text of clinical notes and diagnostic reports. 
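The article doesn’t specify her team’s models or tooling, so the sketch below illustrates only the extraction task itself, using a plain regular expression that catches NIH Stroke Scale scores written in a fairly standard form. The value of a large language model here is handling the many free-text phrasings a rigid pattern like this would miss. The note text is synthetic and the pattern is an assumption for illustration.

```python
import re

# A deliberately simple, rule-based stand-in for the extraction task.
# The large language models described in the article are meant to handle the
# messy phrasings (scores buried mid-sentence, abbreviations, typos) that a
# rigid pattern like this one misses.
NIHSS_PATTERN = re.compile(r"\bNIH(?:SS| Stroke Scale)[^0-9]{0,20}(\d{1,2})\b", re.IGNORECASE)

def extract_nihss(note_text):
    """Return all NIHSS scores mentioned in a clinical note, as integers."""
    return [int(match.group(1)) for match in NIHSS_PATTERN.finditer(note_text)]

# Hypothetical, fully synthetic note text -- not real patient data.
note = ("62F with R MCA ischemic stroke. NIHSS on arrival 14, "
        "repeat NIH Stroke Scale this AM: 9 after thrombectomy.")
print(extract_nihss(note))  # -> [14, 9]
```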

“I get really excited by things like this,” she says. “Can I apply this everywhere and have a screening system for when people decline? Can we use these models to identify the alarms—something’s going to go wrong, let’s fix it preemptively?” 

There are still questions to answer: How do these models perform in real time? How do they fit into clinical workflows? Will clinicians trust them? “We need to determine the right implementation strategy,” Dr. Ong says. “Do they like it? What’s the sensitivity and specificity in practice?” 

“We use our clinical experience to guide us, but machine learning can uncover interactions between variables that we’d never think to look for.” 

Charlene Ong, MD, MPHS, Neurologist at Boston Medical Center

Ultimately, the promise of AI in healthcare and neurology is about giving patients and families more than just predictions—it’s about offering time, options, and hope. In the neurocritical care unit, the decision to continue or withdraw care often hinges on a clinician’s best estimate of the likelihood of recovery.

“My ability to tell a family what’s the likelihood of somebody waking up determines what they decide,” Dr. Ong says. “Do we continue care, or do we withdraw it?” 

By harnessing the power of AI to make those guesses more accurate, and more timely, clinicians can give patients the best possible chance at recovery, and families the confidence that every decision is grounded in the fullest understanding of what’s possible. 

In the end, it’s not just about the four million daily data points in an ICU admission. It’s about the one life that hangs in the balance—and the promise that, with the right tools, we can tip the scales toward positive outcomes and recovery. 
