
The Use Cases and Benefits of AI in Healthcare — Issue #15


Plus: Potential pitfalls and implications for hospital and emergency medicine

Greetings from Atlanta everyone!

The more we all learn about AI, the more we realize just how wide open the future is, and how many possibilities there are for transformation. I can tell you that the more we talk about it internally here at Core Clinical Partners, the more use cases we come up with.

Although I am very bullish on AI's capacity to start making a meaningful contribution to healthcare, there are also ways in which I think it could actually exacerbate pre-existing problems, including within hospitals. It wouldn't be the first time a new technology promised us the world and then failed to deliver.

So, how do we sort the hype from the substance? What impact will AI have on the business of healthcare vs. clinical practice? How could AI change emergency medicine and hospital medicine in the acute setting? Those are only some of the questions I’m going to attempt to tackle in this edition.

So, let’s jump in!


Introduction — What’s so different about ChatGPT?

For anyone paying attention over the past few months, I think it’s become clear that AI has suddenly taken a huge leap forward. That leap is embodied by the release of ChatGPT.

Here in healthcare, we’ve been hearing buzz about the impact of AI for a long time. A lot of us have even become desensitized to the hype. We keep hearing all the buzzwords at all the conferences and in the marketing packets, and yet our business goes on pretty much the same way it has been.

But there are two things that I think make ChatGPT so interesting and different from what’s come before.

The first is its extreme user-friendliness. For the first time, we have a tool that is open to the public, one that anyone can use and see for themselves.

The second thing is your ability to actually engage in a back-and-forth. It's one thing to ask a question and get a good answer; that's just an improved version of Google. It's another thing entirely to be able to ask follow-up questions, engage in dialogue, and have the AI remember and adjust based on your input. Learning how to engage in those follow-up conversations is a big part of how early adopters are unlocking the initial potential of ChatGPT.

What this new, extremely user-friendly tool has taught us is that the “Large Language Models” (LLMs) that ChatGPT is based on are a massive step-change forward in terms of AI’s ability to interact with humans in a meaningful and productive way.

In order to understand why this is so powerful (and also what its pitfalls are), it’s important to understand how the technologies work in the first place. So let’s cover that briefly here.

How does AI work?

AI works by training a program on millions, billions, or even trillions of bits of data, and then telling that program what you’re trying to do with the data—what the end goal is.

So in the case of a Large Language Model, the data are trillions of words—all the words on the Internet, basically. The goal is to analyze all the patterns of all those trillions of words and then predict which words are most likely to come next in any given context.

In some sense, that’s all AI is: a gigantic pattern recognition and prediction engine.
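To make that concrete, here is a deliberately tiny sketch of next-word prediction in Python. Real LLMs use neural networks trained on trillions of words; this toy version just counts which word tends to follow which in a small sample of text. But the core objective, predicting the most likely next word given what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy training data; a real model ingests a large slice of the Internet.
corpus = (
    "the patient reports chest pain . "
    "the patient reports shortness of breath . "
    "the patient denies chest pain ."
).split()

# Count which word follows which (a simple "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training data."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("patient"))  # -> "reports" (seen twice, vs. "denies" once)
print(predict_next("chest"))    # -> "pain"
```

Scale that idea up by many orders of magnitude in data and model size, and you get something that can hold the kinds of conversations described above.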

As others have pointed out, pattern recognition and prediction are mostly what humans are good at too.

That’s why, when you have enough computing power trained on enough data, the result looks very much like what a human would do. In a lot of healthcare contexts, it looks almost alarmingly like what a clinician would do, too.

Whether you call that intelligence or magic is up to you. I call it a really interesting new tool, one that will change healthcare. Let’s talk about how, and then down at the bottom, I’ll talk about the important implications of these changes for hospital medicine, emergency medicine, and physician group services in general.

As I said, the more we think about use cases for this technology, the more we come up with. But so far, I see them fitting into three broad categories:

  1. Obvious places AI will displace human judgments soon
  2. Use cases where progress is likely a little further away
  3. Difficult problems I don’t see AI solving (or, areas where AI could actually hurt)

1. Obvious use cases

There are a lot of places where it's clear AI is coming soon, whether to take over an actual human task or to replace an existing process with a better one.

For example:

Billing and Coding

On the business operations side, the biggest target is likely the medical billing and coding industry. First these jobs were shipped overseas to India; now it looks likely that AI will be able to displace even those jobs in the near future.

I know one large physician services group that is already moving to full AI medical coding. In other words, a chart comes in, a program reads it, assigns the code, and sends the claim to the insurance provider, all without human involvement.

How quickly physician groups and medical providers move to these tools will depend on their tolerance for risk and how they handle audits, but it is coming.
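For a sense of what "a program reads the chart and assigns the code" might look like under the hood, here is a minimal sketch that treats coding as supervised text classification, trained on historical charts and the codes they were billed under. The example charts, codes, and model choice are purely illustrative assumptions, not a description of any vendor's actual system.

```python
# Minimal sketch of automated coding as text classification,
# assuming historical charts labeled with their billed codes.
# The chart snippets and codes below are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

historical_charts = [
    "brief visit, ankle sprain, discharged with ibuprofen",
    "chest pain, abnormal EKG, serial troponins, admitted",
    "laceration repair, simple closure, discharged",
    "septic shock, intubated, transferred to ICU",
]
billed_codes = ["99282", "99285", "99283", "99285"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression())
coder.fit(historical_charts, billed_codes)

new_chart = "chest pain with elevated troponin, admitted to observation"
predicted_code = coder.predict([new_chart])[0]
print(predicted_code)  # the claim would then be routed to the payor
```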

Accounts Receivable & Claims Denials

Similar to the above, it's likely that AI will have a meaningful role in helping physicians chase down unpaid bills. These programs could alert us to which claims are likely to be denied by various commercial payors and even handle rejected claims. There is already software that aims to do this, but its approach is very different: an AI tool could be trained on millions of previous claims denials and get much better at predicting ahead of time which ones are likely to be denied.

Similarly, an AI could be trained to handle accounts receivable directly from patients, predicting an individual's propensity to pay based on certain characteristics and recommending the best means of follow-up, and on what timeline. The AI collections bot could then handle everything from automated texts and calls to triggering the delivery of a paper statement.
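As a sketch of how that kind of denial prediction might work, assume you had historical claims labeled as paid or denied. A simple classifier could then score each outgoing claim and flag the risky ones for a human look before submission. The features and the threshold below are illustrative assumptions.

```python
# Sketch: score outgoing claims for denial risk using historical outcomes.
# Features and the 0.5 threshold are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.DataFrame({
    "billed_amount":  [850, 1200, 400, 2300, 950, 3100],
    "prior_auth":     [1, 0, 1, 0, 1, 0],   # prior authorization on file?
    "out_of_network": [0, 1, 0, 1, 0, 1],
    "denied":         [0, 1, 0, 1, 0, 1],   # historical outcome
})

model = GradientBoostingClassifier().fit(
    history.drop(columns="denied"), history["denied"]
)

outgoing = pd.DataFrame({
    "billed_amount":  [1800],
    "prior_auth":     [0],
    "out_of_network": [1],
})
denial_risk = model.predict_proba(outgoing)[0, 1]
if denial_risk > 0.5:
    print(f"Flag for review before submission (denial risk ~ {denial_risk:.0%})")
```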

Real-time Patient Volume Predictions

Predicting volume is one of the most fundamental tasks for a physician services group, whether in the emergency room or on the inpatient side. It’s a problem we’ve been tackling for decades—but the new AI has the capacity to improve on the modeling systems that we already have.

In the ER, we started by trying to predict how many patients would come in on each day of the week, building our staffing plans around those predictions. The next step was being able to predict volume hour by hour, and also by acuity. Most cutting-edge predictive analytics are already doing this today.

AI could do more than just predict the future: it could give us real-time monitoring for that specific day, with alerts on how to adjust as the day unfolds.

So, for example, our AI-enabled monitoring tools of the future will be able to see that an emergency department carrying a certain number of admission holds by a certain time in the morning is likely headed for an all-hands-on-deck overcrowding situation four hours later. The system can then alert the group to prepare ahead of time and avoid a worse situation later in the day.
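That kind of early warning doesn't require anything exotic once the underlying prediction exists. Here is a minimal sketch, assuming a hypothetical learned rule that a certain number of admission holds by mid-morning has historically preceded overcrowding about four hours later; the specific numbers are invented for illustration.

```python
from datetime import datetime, timedelta

# Illustrative rule: this many admission holds by this hour has historically
# preceded an overcrowding event about four hours later. In practice these
# numbers would be learned from the hospital's own data.
HOLD_THRESHOLD = 8
CUTOFF_HOUR = 10

def check_boarding_alert(admission_holds: int, now: datetime):
    """Return an alert message if current boarding predicts later overcrowding."""
    if now.hour <= CUTOFF_HOUR and admission_holds >= HOLD_THRESHOLD:
        crunch_time = now + timedelta(hours=4)
        return (f"{admission_holds} admission holds at {now:%H:%M}: "
                f"overcrowding likely by {crunch_time:%H:%M}; "
                f"consider surge staffing and expedited discharges now.")
    return None

print(check_boarding_alert(9, datetime(2023, 5, 1, 9, 30)))
```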

Radiology

This one comes as no surprise—but it’s very instructive as an example of what AI can and can’t do when it comes to patient care.

First, we’ve known for a long time that AI was coming for radiology. It’s almost a perfect use case. The AI can be trained on millions of previous scans that either did or did not point to a serious health problem, and then predict better than a human (yes, even a radiologist) whether the current scan is likely to show, for example, a cancerous tumor.

Radiology is a good example because there are only so many pixels in an image, and the AI can learn to read every single one of them. As with billing and coding, there is less art involved here and more exact science: put under the microscope, a given chart really does have a “correct” way to read it, and the same is (mostly) true for the scans in radiology.
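For the curious, the basic shape of these systems is well understood: a convolutional neural network trained on labeled scans that outputs the probability a finding is present. The skeleton below is untrained and uses made-up input sizes; it is meant only to show the structure, not to be a clinical tool.

```python
# Skeleton of an image classifier of the kind used for radiology findings.
# Untrained, with made-up sizes; a real system is trained on millions of
# labeled scans and rigorously validated before touching patient care.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 64 * 64, 1)  # assumes 256x256 input scans

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.head(x.flatten(1)))  # P(finding present)

scan = torch.randn(1, 1, 256, 256)           # one grayscale scan (batch of 1)
probability = ScanClassifier()(scan).item()  # meaningless until trained
print(f"P(suspicious finding) = {probability:.2f}")
```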

But this brings us to what I think is the more interesting part of the AI: its ability to support other kinds of clinical decision-making.

2. Use cases a little further off

Chatbot in Triage (or at the bedside)

Most in emergency medicine are familiar with process improvements that put a provider in triage. The idea is to implement some medical decision-making closer to the front end of the process, rather than have a patient wait.

As an emergency medicine physician myself, I think part of the beauty of being a good doctor is being able to walk into a room and know right away whether that person is sick or not. It’s hard for me to imagine an AI reaching that level of judgment any time soon.

On the other hand, right now when I walk into the room as an ER physician, I usually have the patient’s age, a chief complaint such as “chest pain,” a list of medical conditions, and maybe one line from the nurse, such as “pain for two days.” Then my job is to walk in and take my own history.

But it’s not hard to see how chatbots could get pretty good at taking that same medical history. I was curious about what a hypothetical chatbot doctor might say to such a patient, so I gave the situation to ChatGPT:

[Screenshot: ChatGPT conversation, part 1]

And, as I mentioned above, the true power of ChatGPT is being able to engage in conversation. So I gave it the basics of an answer as I imagine a patient might:

[Screenshot: ChatGPT conversation, part 2]

I have to admit—this is almost exactly what I would have done. If a patient with chest pain had given me that history, I would have known immediately, ok this patient is getting admitted. Which is exactly what the chatbot told the patient.

There’s no reason a chatbot can’t be trained to elicit histories based on chief complaints. If that history were then condensed into a short paragraph (ex-smoker, history of coronary artery disease, two-day history of pain that feels like a weight), I wouldn’t even have to walk into the room anymore. I would know the patient is getting admitted, and I could probably start and finish the workup from anywhere, whether that’s my desk in the emergency department or a home office.
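To show the shape of the idea, here is a sketch of how a chief-complaint-driven history-taking bot might be wired together. The call_llm function is a hypothetical stand-in for whatever language-model API you would actually use, and the prompt and stopping rule are illustrative. The point is the loop of asking one focused question at a time, listening, and following up, ending with a condensed summary handed to the physician.

```python
# Sketch of a chief-complaint-driven history-taking bot.
# `call_llm` is a hypothetical stand-in for a real language-model API call;
# the prompt and stopping rule are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a triage assistant. Given a chief complaint, ask one focused "
    "history question at a time (onset, character, risk factors, red flags). "
    "When you have enough, reply with SUMMARY: followed by a 2-3 sentence "
    "history for the physician."
)

def call_llm(messages: list) -> str:
    """Stand-in for an LLM API call (wire up your model of choice here)."""
    raise NotImplementedError

def take_history(chief_complaint: str, get_patient_reply) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Chief complaint: {chief_complaint}"},
    ]
    while True:
        question = call_llm(messages)
        if question.startswith("SUMMARY:"):
            return question  # condensed history, e.g. "ex-smoker, known CAD, ..."
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": get_patient_reply(question)})
```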

The possibilities are pretty exciting.

The hospitalist chatbot (and other clinical decision-making tools)

Not to oversimplify, but I think the essential problem for AI to support medical decision-making is that the amount of variability is almost infinite. The human body is extraordinarily complex, more than we understand. Many disease processes and interactions are still not fully understood, and won’t be for a long time. As opposed to a radiology scan with a fixed number of pixels, the science of most complex diseases is itself evolving and iterative. We are still learning and improving.

So it’s not as though we could simply gather all the data points of human health and wellness into one database and then predict what works. Or could we?

One thing we’ve already written about is something like a hospitalist chatbot that would sit in on multidisciplinary rounds. The idea is that an AI could be trained on various disease processes and their likely lengths of stay depending on certain criteria. Then, a chatbot could sit in on rounds and make suggestions to the hospitalist on who might be able to go home and when. If the hospitalist feels the AI is making a mistake, the doctor could tell it, and that feedback would help the AI learn to make better recommendations in the future.

I can see this helping, but there are two problems. First, it is difficult to see a chatbot being able to take into account each and every complex and interrelated factor involved in whether to send someone home or not.

Second, since AI is only as good as the data we train it on, how do we ensure that the doctors who give it feedback aren’t just asking the AI to replicate human errors? Or, if an AI-enabled clinical decision-making tool is learning from a few million different patient encounters, how do we keep it from learning the bad habits of all those doctors?

One answer might be not to try to fix everything all at once. Instead, we could focus on identifying the key drivers of bad outcomes or long lengths of stay and help doctors avoid those. We want to drag the median higher.

Take pneumonia as an example. Some patients hospitalized for pneumonia stay much longer than others, some stay only a short time, and many fall in between. An AI can look at the factors driving long lengths of stay and alert clinical teams so they can fix or avoid them. There are a lot of caveats, again, because there is just so much variability in medicine. Still, it’s not unreasonable to expect that these tools will be integrated into clinical practice at some point.
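Here is a sketch of that “find the key drivers” approach, assuming you had historical pneumonia admissions with their lengths of stay and a handful of candidate factors. The features and toy data are invented for illustration; a real analysis would use thousands of admissions and far richer variables.

```python
# Sketch: identify the factors most associated with long pneumonia stays.
# Features and toy data are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

admissions = pd.DataFrame({
    "age":                             [54, 81, 67, 73, 45, 88],
    "on_oxygen_day2":                  [0, 1, 1, 0, 0, 1],
    "discharge_planning_started_day1": [1, 0, 0, 1, 1, 0],
    "length_of_stay_days":             [3, 9, 7, 4, 2, 11],
})

X = admissions.drop(columns="length_of_stay_days")
y = admissions["length_of_stay_days"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Which factors drive long stays? With real data, this is where the
# actionable alerts to the clinical team would come from.
drivers = pd.Series(model.feature_importances_, index=X.columns)
print(drivers.sort_values(ascending=False))
```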

A real-time, 360-degree view of hospital capacity management

Emergency physician groups have a lot of experience with analytics to predict volumes, but the real utopia is the entire hospital integrated into a single, real-time, 360-degree view that supports hospital capacity management.

The system would monitor the ER for surges and deliver data on likely inpatient admissions throughout the day, every day. On the floor, it would track in real time which patients are likely to be discharged and which are deteriorating and may need to be sent to the ICU, so ICU teams would know that even though they have a discharge coming up, the bed is likely to be filled. All departments and all physician services would be integrated, each with a view into how other parts of the hospital may impact its own work.
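As a rough sketch of what that integrated view might look like in data terms, here is a minimal “hospital snapshot” structure that rolls unit-level signals into one picture. The field names and the simple bed arithmetic are illustrative assumptions; the hard part is getting every department to feed it reliable, timely data.

```python
# Sketch: one shared, real-time snapshot that every service line can read.
# Field names and the simple bed arithmetic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HospitalSnapshot:
    ed_boarders: int                    # admitted patients held in the ED
    predicted_admits_next_4h: int       # from the ED volume/acuity model
    floor_discharges_likely_today: int
    floor_patients_deteriorating: int   # likely ICU transfers
    icu_beds_open: int

    def expected_bed_pressure(self) -> int:
        """Beds needed minus beds likely to free up over the next few hours."""
        demand = (self.ed_boarders + self.predicted_admits_next_4h
                  + self.floor_patients_deteriorating)
        supply = self.floor_discharges_likely_today + self.icu_beds_open
        return demand - supply

snapshot = HospitalSnapshot(6, 10, 9, 2, 1)
if snapshot.expected_bed_pressure() > 0:
    print(f"Capacity alert: short {snapshot.expected_bed_pressure()} beds; start surge plan")
```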

Of course, achieving that kind of coordination is hard because coordination in general is hard. It’s already a well-known issue in a hospital setting when one group isn’t aware of the situation somewhere else. AI could help us predict everything that might happen in real time, but we’ll have to solve some human problems before that day arrives.

As I see it, the potential for AI to exacerbate pre-existing human problems is the main thing we have to look out for.

3. Pitfalls & problems it may be hard for AI to solve

The new AI might be more capable of decision-making, judgment, and insight than anything that came before, but the problem of communicating those insights to the humans charged with making good use of them remains.

For example, doctors are already suffering from alert fatigue, so what good is a tool that just adds alerts? We already know AI can come to mistaken conclusions, and that’s fine, because correcting those conclusions is part of how it learns and improves. But what happens when a busy hospitalist comes to expect that, say, 30 percent of the time, the AI tool has just given him or her a wrong judgment or a superfluous insight?

It’s like when I’m at the computer inside the EHR and I get a pop-up warning me about something I’ve already considered and taken action on, such as using a different antibiotic based on an allergy. All that’s happened is the EHR has interrupted my flow and wasted my time.

Alert fatigue is a huge problem, and we have to be careful that AI doesn’t actually exacerbate it. Just because we have a very powerful prediction engine capable of giving us a lot of insight doesn’t mean that tool is going to fit into the culture and pre-existing challenges of any given hospital environment.

The more information we have access to, the more likely we are to start tuning it out. Or we could misuse or misinterpret it. It wouldn’t be the first time that better, more accessible data failed to solve intractable problems in healthcare.

The truth is, as with any new technology, it’s incredibly hard to predict how it will ultimately impact highly complex environments. Remember that when email was first introduced to office environments, researchers thought it would merely displace the current system for asynchronous intra-office memos—they had no idea it would lead to something like the opposite: an “always-on” office culture and an expectation of near-instantaneous responses, leading to high levels of burnout.

It is just really hard to predict these things. Anyone who has worked in healthcare for a long time has to bring some humility to any attempt to predict the future.

Still, I think there are two main implications of the new AI for physician services in EM and HM in particular.

Implications for hospitals & physician services

One crucial point about the new AI tools is that other companies are almost certainly going to be the ones building them. Since AI depends on having as many data points as possible, the best tool is going to be the one that leverages data industry-wide.

For example, it just will not make sense for a physician services group to build its own AI coding program in-house, because then you’d only be training it on your own charts, and it wouldn’t be as good. What you want instead is to buy the AI coding tool that is trained on all the charts. But of course, every other group will be able to buy that tool as well.

This is a total inversion of the competitive advantages we used to think came with being a very large physician group. Until very recently, these large groups claimed that their size and resources allowed them to build better tools than smaller or regional groups. But in the AI-enabled future, we are all going to be using basically the same tools, we are all going to have access to the same data, and those tools are going to be built and managed by someone else.

What remains after all this disruption? Execution.

No matter how good our predictive analytics or our chatbots get, no matter how expansive our data library, quality patient care will still depend on how those insights are used and who’s analyzing them. We will still need operational leaders with the time and capacity to look at and take action based on those insights, not to mention the leadership skills to sustainably implement them.

No matter how intelligent our artificial intelligence gets, we will still have clinicians at the pointy end of the process, in charge of actually delivering patient care.

For hospitals, the new AI is just one more reason why going with the largest, “most resourced” physician group isn’t necessarily the wise decision it once seemed. You still have to ask who the clinical and operational teams are on the other end, and whether those teams are capable of getting the job done.