AI and Machine Learning in Cardiovascular Care: A Call to Action for Nurse Engagement

July 16, 2024
Guests: Osama Dasa, MD, MPH, PhD; Eileen Handberg, PhD, ANP-BC, FACC; Yvonne Commodore-Mensah, PhD, MHS, RN, FAAN, FAHA, FPCNA; and Heidi Salisbury, RN, MSN, CNS-BC, ACGN

AI and machine learning are already impacting healthcare. What are the benefits and cautionary considerations of their use, and how can patients benefit from nurses' involvement right now? Listen to pro and con perspectives from guests Osama Dasa, MD, MPH, PhD; Eileen Handberg, PhD, ANP-BC, FACC; Yvonne Commodore-Mensah, PhD, MHS, RN, FAAN, FAHA, FPCNA; and Heidi Salisbury, RN, MSN, CNS-BC, ACGN.

Episode Resources

Welcome to Heart to Heart Nurses, brought to you by the Preventive Cardiovascular Nurses Association. PCNA's mission is to promote nurses as leaders in cardiovascular disease prevention and management.  

Geralyn Warfield (host): We are so excited today on the episode where we're going to be discussing AI and machine learning, and how that might look in cardiovascular care and in healthcare overall. I have my first two guests—this is actually a special episode where I have four guests, but we're going to take two at a time. The first two, I'm going to have you introduce yourselves so that our audience knows to whom they're listening or watching. 

Osama Dasa (guest): I'm Osama Dasa. I'm a Cardiovascular Disease Fellow at the University of Florida.  

Yvonne Commodore-Mensah (guest): Hi, I am Yvonne Commodore-Mensah. I'm an Associate Professor at Johns Hopkins School of Nursing and the Bloomberg School of Public Health.  

Geralyn Warfield (host): Excellent. Well, we're really excited to start talking with the two of [00:01:00] you, and then we'll introduce our next guest in the next segment of this episode. 

Osama, could you just level set for us and help our audience just remember what the difference is between AI and machine learning?  

Osama Dasa (guest): Fair enough. In brief, AI is just the umbrella term that encompasses any computer system that tries to mimic human behavior, human intelligence, or even surpass it. 

And then, underneath that is machine learning. It is a subset of AI that is interested in creating algorithms that use statistics, regular statistics, to solve problems and predict outcomes. However, the machine part of this, the machine learning part, is that this algorithm can learn from itself and improve on itself without human intervention. 

I think this is kind of the biggest differentiation between AI and machine learning. 

Geralyn Warfield (host): And I know that there's a lot more to the application of technology and healthcare, but we're really going to stay specific [00:02:00] to one topic today. 

And for the two of you, Yvonne and Osama, I'd like you to consider and share with our audience some broad benefits of using AI in healthcare. And I'm not sure who would like to start with that part of the conversation.  

Yvonne Commodore-Mensah (guest): I can get started. One of the benefits, I would say, that AI presents is the ability to better diagnose cardiovascular diseases. So, we know that it's important to detect cardiovascular diseases early, and machine learning models are able to do a better job, frankly, than humans in predicting cardiovascular disease. 

But also using that information to guide clinical care and early intervention. And in many cardiovascular conditions, we know that time is of the essence. And so, the promise of AI is that it may help us to [00:03:00] pull information from different sources to diagnose cardiovascular disease in a timely manner. 

Geralyn Warfield (host): So, beyond diagnosis, what other applications do you see, Osama?  

Osama Dasa (guest): I can pivot off of that. So, on top of the efficiency and prediction, because of the multimodal plethora of data that we have, we can add on top of that using AI in regular day-to-day tasks in hospitals, in nursing, in the outpatient clinic.  

Where we can use that to help us prep notes, pre-chart about patients, talk to patients before they even come into the clinic, pre-populate a note after the patient leaves, pre-populate specific instructions for that specific patient when they go home, instead of having to do all these tasks manually. So, that would be one extra application of efficiency using machine learning and AI applications in [00:04:00] healthcare. 

Geralyn Warfield (host): What other considerations do you have for using AI in healthcare? What are some other benefits that you might point towards?  

Yvonne Commodore-Mensah (guest): Another benefit would be the opportunity for personalized medicine. So, we know the term ‘precision medicine’ is used, but we also have precision population health.  

So, one of the benefits of AI is the ability to process billions of data points from different sources. So, if we are able to harness all of this information, we can use the information to identify—in the context of population health—subpopulations that need earlier intervention, that need resources, to address the burden of cardiovascular disease. 

When it comes to the individual level, AI allows us to use data or information about a person. [00:05:00] And you also use different sources of data, and I think that's one of the unique aspects of AI. So, whether it's that you are wearing a smart watch, right? Or we have information from your medical records, other sources of data, for instance, your genetic information as well. How do we pool or combine all of these different sources of data to treat you as the individual that you are, and not come up with standardized treatment plans that may not consider your unique needs?  

Geralyn Warfield (host): This may be throwing you both for a loop. And I was thinking as you were talking, Yvonne, about how guidelines for medicine might change with this application of very individualized medicine. Do you have any thoughts about that?  

Osama Dasa (guest): I think it would be interesting and unique. It's still a work in progress, but I think we'll be empowered even better than our [00:06:00] traditional classic models, in the next hopefully five to 10 years. 

We'll have a patient that comes in that you can, from your prior knowledge, discuss his risks and all of that, but then you'll be empowered with this AI ability that is tailored to that patient. And you can discuss his risks in the future, specifically tailor a plan for him or her. I think it will be very interesting to see how that's applied in real life.  

Yvonne Commodore-Mensah (guest): To build on that too, one tangible application of AI in the context of guidelines is that we know that there are different guidelines for different conditions, and unfortunately clinicians don't apply the guidelines uniformly. So, that would be another opportunity to integrate guidelines in our medical records.  

So, to also inform clinical decision support tools. So, if a patient is supposed to be on a statin because their cholesterol is [00:07:00] high and the clinician is not acting accordingly, perhaps AI can inform an alert that pops up, that allows them to opt out, rather than, you know, go through all of the multiple steps. So, that’s one tangible way to ensure that we are really taking advantage of the guidelines that exist to improve the quality of care.  

I think the underlying point is improving the quality of care. Because we know how to manage cardiovascular conditions. There's an abundance of evidence that exists. But unfortunately, the application is where we fall short. So, if we can think of ways to harness AI to help us to become better clinicians, I think it's a win-win for everyone.  

Geralyn Warfield (host): Are there any other ideas that you would like to share with the audience in terms of why AI in cardiovascular care is a great idea? 

Osama Dasa (guest): I think I'd probably just summarize what we just [00:08:00] said. We have an explosion of data. We have data from EHRs, from wearables, the wearable technology, wireless data being transmitted, lots of lab data, genomic data that a regular physician just cannot deal with or handle, or even extract the patterns and things that would predict outcomes. 

And definitely we need something to help us, like AI.  

Yvonne Commodore-Mensah (guest): And the most important thing, I would say from the nursing perspective, is that nurses need to be at the table.  

I recall that I was invited to join a workshop that was organized by the National Heart, Lung and Blood Institute on the issue of AI in hypertension management. And my first reaction was, “I know nothing about AI. I know a lot more about hypertension.” And so, I hesitated. 

But then I just remember that, in many of these opportunities, [00:09:00] nurses may not be well-represented. So, even though I felt like I didn't have enough expertise in the AI space, it was an opportunity to learn. An opportunity to bring the nursing perspective. 

So, I think the other consideration is that as we think about the use of AI, we ensure that nurses are also receiving the education—especially at the pre-licensure level, too—and so that when they go in the field, they're well prepared for this new reality because AI is here to stay.  

And so, how do we ensure that nurses are at the table, and that the data and information we are collecting are also represented in these data sets as well? 

Geralyn Warfield (host): I would really like to thank our first two guests on today's episode, Osama and Yvonne, thank you so much for enlightening us about AI and machine learning and how it might be applied in clinical practice.  

We're going to take a quick break and we will be right back. [00:10:00]  

Osama Dasa (guest): Thank you.  

Yvonne Commodore-Mensah (guest): Thank you. 

 

Geralyn Warfield (host): Welcome back to the second part of this important episode, talking about artificial intelligence.  

I have two new guests across the table from me, and I'm going to let them introduce themselves to you.  

Eileen Handberg (guest): I am Eileen Handberg, and I am a Professor of Medicine at the University of Florida.  

Geralyn Warfield (host): Excellent. 

Heidi Salisbury (guest):  And I'm Heidi Salisbury. I'm a Clinical Nurse Specialist at the Stanford Center for Inherited Cardiovascular Disease. 

Geralyn Warfield (host): Well, we're really excited to have you both at the table. And I know that you were kind of listening into the previous part of the podcast episode, so, you know kind of what was discussed by our two previous guests.  

But I'd like us to pivot to something that's kind of counterpoint to what they discussed. And that, really, a discussion of the limitations and some of the concerns about using AI in cardiovascular and in overall healthcare.  

So, I don't know if one of you wants to start us off on the conversation about maybe things that we need to be keeping in mind, even though it's pervasive and there's great opportunity, what else do we need to keep in mind? 

Eileen Handberg (guest): I'll [00:11:00] start, because I think Heidi will definitely bring in some real valid concerns, probably from the CV nursing perspective. And I think these are concerns that people have, whether they're pro or con. So, I think that's got to be clear.  

But there are a lot of concerns about jobs, and this always happens when new technology disrupts us, right? The jobs we have now: it's been estimated that 300 million jobs will be lost because of AI, because of the efficiencies and those kinds of things. So, that's a concern.  

How AI generates content is a concern for some people, because it takes existing knowledge. And the knowledge that we have is pretty curated. And if you think about history books, there are large parts of history that were [00:12:00] purposely not reported because of who was in charge at the time, who was leading the country, who was leading the world, and there are a lot of biases attached to how that history gets presented.  

So, if AI is pulling from the existing literature and it is skewed in one direction or another, the bias goes along with the new knowledge development. And so, I think that's a really significant concern, and we see this in EHR data pulls. Social determinants of health are not standard in EHR data collection at this point. They're evolving. They're coming. But they totally affect how people are impacted by healthcare and their opportunities to take advantage of healthcare. 

And so, if that data doesn't exist, how does that inform AI? [00:13:00]  

And the bias of who's in charge of all of this. I mean, I'm not a big conspiracy theorist, but who is in charge of AI? And right now, the general comment is, ‘Well, it comes from the existing literature.’  

But really, what's the algorithm and what are you pulling from?  

Because if you look at social media and any kind of media today, it's very curated. And your bias gets perpetuated by the first thing you check on your social media feed. Because once you declare yourself as X, what you get fed by all of those algorithms perpetuates your own bias. It does not give you side A and side B and lets you make an informed decision. It says, ‘Oh, you're a B. We are going to feed you B, and we're going to perpetuate your bias.’  

And so, I worry a lot about [00:14:00] AI in terms of, you know, job security. I can understand some of the efficiencies, and jobs are going to change.  

But if you think about these algorithms right now, 90% of the clinical trial data is on Caucasian white males. That's how we dispense our drugs. And there is no other data out there, or there's not a lot.  

And maybe AI will help because it will pull all the data and there might be a better representation. But again, it perpetuates a bias. And who's the ethical police here? Who is going to maintain the ethics of where this happens? 

I mean, big business does what big business does, right? They control a lot of what we see and do. And who controls them? Nobody. They do whatever they want. And [00:15:00] so, you know, I'm not a doomsayer, but on the other hand, there seems to be a need for something to control this. 

And I think this is what people are afraid of about AI, because it's another example of somebody controlling a lot of stuff, and I think it's a valid concern.  

Geralyn Warfield (host): So, in many ways, Eileen, what you've described to me is analogous in my mind—and you can certainly disagree with me—but what the guidelines writing committees do. They become the arbiters of what is accurate, what's the best overall choices for our patients, but they are the guiding force, if you will, in some circumstances for what we're going to do.  

And what you've described in terms of what AI’s possibilities are, is there is no limiting factor. There is no guiding light. There is no [00:16:00] way for us to discern who's making those decisions. And so, that really is a concern.  

Eileen Handberg (guest): Yeah, I mean, I think that until there is transparency around where is this data actually coming from?  

Because think about it: if you ask ChatGPT something, whatever it is, and it comes out with what you are comfortable getting, you're like, “Oh, great.” You wouldn't think about the source of that information. Now, if it goes against your beliefs, are you just going to dismiss it and say, “Hmm, that's not right”? 

I mean, there's just lots of questions here, and I think we have to be educated and have really frank conversations about this, and not just take it as the standard without questioning [00:17:00] and trying to understand it and making people accountable. Because, you know, bad people do bad things. 

And there's a lot of power in knowledge and, you know, the fake news. I mean, the fake news in COVID, I mean, resulted in people dying because they heard fake news that went along their pathway, and they elected not to be treated and they came to the hospital, and they died. 

And that's fact. That isn't made up. It's not drama. But they listen to the rhetoric that was not fact-based.  

And so, if you have…I mean, let's just be doomsayers. If you had a leader of the country who had control, or a business conglomerate that had control, that had [00:18:00] not good intentions, and they fed this, they could perpetuate a lot of negative behaviors. They could perpetuate a lot of negative and incorrect knowledge. They could reframe the history. I mean, if our kids are only going to learn by AI and it is curated and fed, that's what they're going to learn, right? They may not pull out a dusty textbook to realize that there was a Civil War and what was behind it, because the people in power currently don't support that view. 

And we certainly have had books burned because people didn't like the view. And so, again, there could be a lot of negative impact. And so, who's the ethics police in all of this? And I think this is a lot of conversation that's going on. And again, not to be a doomsayer, but I think if you don't [00:19:00] have some rational thought about this and you just go along like sheep. 

We may be walking ourselves to wherever you take sheep, if you know what I mean? So, yeah. Crazy.  

Heidi Salisbury (guest):  Yeah.  

Geralyn Warfield (host): So, I recognize that this possibility is fraught with complexity. You've brought up some great points, Eileen, in terms of who is the lead for making sure the information is accurate, that we are including multiple points of view. 

You know, if you have ever searched for an item, let's say ‘boots,’ on your phone or on any other device, and then all the ads that populate are for boots, even though you may have already made that choice. We understand how decisions that we make can follow us. Whether it's intentional or not, there's a lot that goes into that. 

So, Heidi, why don't you share your [00:20:00] perspective in terms of what some considerations are that we need to think about when it comes to AI. 

Heidi Salisbury (guest):  And I, I'm listening and I'm thinking about the Stephen Hawking quote which is, “AI can be the greatest or the worst thing that ever happened to humanity, but we just don't know yet.” 

So, I think this is a call to action to nursing as a profession. To show up, to pay attention, and to remain kind of positioned as the guardrails that we need as we move in this new direction, inevitably move in this new direction, we are moving in this new direction. So, to do so in our positions, at our posts, as the most trusted healthcare professionals. 

Yeah, I really think this is a, a call to action. 

And two things come to mind in terms of clinical practice, which are critical thinking—you know, really being mindful of integrating AI in a way that doesn't obliterate critical thinking for nurses—and knowledge acquisition. So, [00:21:00] case in point would be, you know, automated blood pressure machines. AI technologies are better and more accurate at getting blood pressure readings. It's hard to dispute that.  

But in terms of problem solving, when there is an error with the blood pressure machine and/or if the patient is becoming emotional or physically unwell, the nurse still needs to have the knowledge of what a blood pressure means, and hemodynamics, and how to triage, and integrate this data into the patient's care plan. 

So, I really think it's kind of like asking someone to all of a sudden ride a bike, but they don't know how to pedal or, you know, you're able to have a skill or execute something, but you don't know any of the steps to get there. And I think that it's really important that we prioritize critical thinking skills, knowledge acquisition, and set it as part of the standards when we integrate AI. [00:22:00]  

Second point would be empathy. And empathy is such an important part of being human: the human connection, providing care to individuals, that therapeutic connection. And, you know, it is unknown how empathy plays out when we integrate AI. We need to learn more. 

But again, that call to action to be part of the conversation. True story: I'm sitting in a busy cardiac clinic. I'm in the back conference room with my team, and we just hear the news that Google shared about the empathy study, which showed that patients rated a chatbot as more empathetic than their physician. 

And we were discussing this among a multidisciplinary team that included nurses, and advanced practice nurses, and physicians, and we were, you know, appalled by this information. ‘No, this could not be.’ And we were really having an emotional reaction to it.  

And then—this [00:23:00] is a true story—we walked down the hallway, my physician partner and I. We went into a room, opened the door, went in to see a patient. A patient we've known for a long time. A very difficult patient. A patient that is, maybe, frustrating.  

And we, you know, found ourselves a little frustrated with the interaction, and it went on, and it didn't feel that it was productive. We left the room, we looked at each other and said, “I didn't feel very empathetic there, did you?” 

“No, I didn't feel very empathetic either.” 

And it was just a moment where, you know, just for the thought exercise, I thought, “Well, in this situation AI might have offered more empathy to the patient, because I was having an understandably hard time with the patient.” 

So, I think there's opportunity to frame shift, and support, you know, support our empathy in practice. There's empathy fatigue. That's [00:24:00] also, you know, not disputed.  

And so, just finding ways, again, to integrate AI responsibly, and mindfully, with intention. Where empathy, and that therapeutic connection, and critical thinking, are still standards. Those are things that come to mind. 

Eileen Handberg (guest): But it's an interesting point because nursing is a lot about empathy, right?  

Heidi Salisbury (guest):  Yes.  

Eileen Handberg (guest): We spend a lot of time at the bedside. And you can imagine that AI helps because of efficiency, and helps you manage patients better.  

But you know, there's also this concern that, you know, are the nurses of the future people who can manage AI and technology better, and not the people who hold the hand and provide the sounding board for someone going through a stressful health care [00:25:00] situation?  

And so, you know, I think there's a lot of issues. I mean, we heard that same data at the conference today, and everybody laughed because they said, “Well, if you'd have put a nurse against ChatGPT or the bot, the nurse would've won hands down.” And, I mean, maybe that's probably true, because we do have that essence about, you know, our practice.  

But it is of concern because then, you know, you have all this stuff, you spend all this time managing data. Do you actually take the time? We assume that this technology is going to make us so efficient that we have more time to be people to people.

But I would argue, if you look at social media, and how involved we are on our phones, our children are losing the ability to communicate directly with people and have social interactions [00:26:00] because they don't have to do it in person anymore.  

And you can say anything you want on social media and there's really no repercussion. I mean, there is, because you get blowback because you said something that people find offensive, and they will blow right back.  

But you can say any negative thing and it's fairly easy, right? If I'm standing next to you and I want to say you're a so and so, it's very hard. On my phone, I can type it out all day long.  

And so, we have a generation who is growing up with less social interaction. We have more social isolation. We have more depression, more suicide. People don't know how to get along. People just think if you just scream out your opinion, it's okay. The time of civility has passed. 

You know, everybody gets frustrated and it's okay to scream and yell. I mean, that wouldn't have happened 30 years ago in a store. You wouldn't have the [00:27:00] term ‘Karen’ 30 years ago, right? People have to interact and so, our whole social construct is changing and social media has sort of pushed us there, which has a lot of AI behind it.  

And now AI's going to come and it's going to really impact healthcare in a lot of positive ways, don't get me wrong. But again, I do think we have to really be involved as providers to advocate. Because the patients are going to get lost further in the hustle.  

Heidi Salisbury (guest):  I think that what you're saying is absolutely true. And that we've had this collective social experience, especially amplified by the pandemic, where we've moved towards interacting socially, virtually. 

I think there's an opportunity there too because we've known, we know, that this has led to social isolation. We know this has led to anger and fear and fake news. And we are [00:28:00] watching together the negative impacts of that. 

Eileen Handberg (guest):  Right. 

Heidi Salisbury (guest): So, there’s an opportunity to say, “No. Empathy matters. Human connection matters.” And that voice is going to come from nursing. 

And I think it has to be loud. Now.  

Eileen Handberg (guest):  Yeah.  

Heidi Salisbury (guest): Or, you know, to use a medical term, this can become malignant and hard to stop.  

Eileen Handberg (guest):  I think it's a true call to action to nurses. I mean, if anything comes out of—and you should have lots of debates about this, right? You really should in your practice, in your work environment. 

But it really says to nurses, ‘You need to stand up and be counted, because you advocate for so many others.’ And that you need to say, “I will be on that committee that talks about how we integrate AI in our clinic, in our health system, in our national society, in our guideline writing,” and all [00:29:00] of that stuff. 

And that voice is essential. And I think, whether you feel one way or another, somebody presented a picture in one of the talks earlier of the light bulb, the World Wide Web, and Facebook, about how they've influenced us, right? And so, these things are here, they're going to happen. AI's here, it's going to happen. 

It's a matter of whether we choose to be active participants in the conversation about how it's going to impact us in healthcare, and social constructs.  

Because if we don't, if we as 20 million nurses across the world don't step up for our patients, I'm not sure who else will. Because historically we are that. And so, I think there's a lot to be learned, and I think I'm sort of glad I'm closer to retirement than the beginning of my career. 

It's going to [00:30:00] get complicated.  

But I hope that we have future leaders who are going to step to the plate, like Heidi, who's going to, you know, jump on every committee that exists and make a difference this way.  

Geralyn Warfield (host): We really hope you've enjoyed this particular episode of Heart to Heart Nurses. There is a lot to think about, and a lot to do, on this particular topic. And do not wait! 

We are so grateful to all four of our guests for joining us today. This is your host, Geralyn Warfield, and we will see you next time. 

Thank you for listening to Heart to Heart Nurses. We invite you to visit pcna.net for clinical resources, continuing education, and much more. 

Subscribe Today

Don't miss an episode! Listen to the Heart to Heart Nurses podcast on your favorite podcast listening service.