Health care is a fertile proving ground for AI for a few key reasons. It’s huge, it’s complex, and it touches the lives and futures of millions, so there’s endless opportunity – but there is also less margin for error than in other fields.
To help professionals and decision-makers grasp how AI is evolving, where it's headed in health care and what milestones it's hitting along the way, the National Institute for Health Care Management Foundation recently hosted a webinar on the topic. Panelists Dr. Michael Matheny of Vanderbilt University Medical Center, Svetlana Bender of GuideWell and Florida Blue, and I. Glenn Cohen of Harvard University joined NIHCM's Kathryn Santoro for a discussion of the challenges and opportunities presented by this fast-moving technology.
Setting the table for AI
Matheny pointed out that health care can benefit from AI precisely because of the vast volume of data the field generates. Clinicians face a “deluge of scientific information,” he said, and AI can be deployed to streamline processes around clinical trial analysis and practice guidelines. “We need help in managing all of this information,” Matheny said, noting that AI also has applications in areas such as clinician note creation and clinical decision support.
Matheny cited the recent discovery that pulse oximeters were giving inaccurate readings for some patients of color. The training data used for these devices came largely from a Caucasian population, so it wasn't representative of people from other racial groups, Matheny said. It's one example of where AI – if built on representative data – might be deployed to improve accuracy and reduce bias.
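To make the representativeness point concrete, a data team might audit a training set's demographics against a reference population before a model or device calibration ships. The sketch below is a minimal, hypothetical Python example; the race_ethnicity column, the reference shares and the 5-point threshold are illustrative assumptions, not anything described in the webinar.

```python
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str,
                       reference: dict[str, float]) -> pd.DataFrame:
    """Compare each group's share of the training data with its share of a
    reference population, reporting the gap in percentage points."""
    observed = df[column].value_counts(normalize=True)  # share of each group in the data
    rows = []
    for group, expected in reference.items():
        actual = float(observed.get(group, 0.0))  # 0.0 if the group is absent entirely
        rows.append({
            "group": group,
            "share_in_data": round(actual, 3),
            "share_in_population": expected,
            "gap_pct_points": round((actual - expected) * 100, 1),
        })
    return pd.DataFrame(rows).sort_values("gap_pct_points")

# Hypothetical usage: flag groups underrepresented by more than 5 points.
# audit = representation_gap(training_df, "race_ethnicity",
#                            {"White": 0.60, "Black": 0.13,
#                             "Hispanic": 0.19, "Asian": 0.06})
# print(audit[audit["gap_pct_points"] < -5])
```

A gap report like this won't fix a skewed dataset by itself, but it makes underrepresentation visible before a model is trained on it.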
“It’s really critical to understand what you want to change or what needs to be fixed, and then evaluate that in the context of the workflow, the stakeholders, the end users that are going to be affected – the patients, the caregivers,” Matheny said, adding that in some cases AI may not be the answer.
Overcoming obstacles
Bender noted that AI can be seen as a bridging of minds and machines, and it’s being used in ways people may not even be aware of, from Google Maps finding the quickest driving route to Netflix suggesting a movie or show. “It’s been around for decades, but it wasn’t until recently … that it captured the public’s imagination,” Bender said.
Using AI in health care has the potential to generate $200 billion to $360 billion in annual savings, Bender said. Wearable health devices, symptom checkers, medical imaging, the acceleration of drug discovery, medical claim approval and Medicaid fraud detection are areas where AI can be useful. Still, adoption is very low in the health sector, and that’s “a huge problem,” Bender said. “In fact, less than 5% of health care organizations were using AI as of 2022, and that number is really lagging behind a lot of other industries.”
The slow adoption rate is likely tied to three significant behavioral factors, said Bender, who suggested ways to address these issues:
- Fear of change can be mitigated by engaging people from all parts of a health care organization, including legal, information technology, marketing and business operations.
- Fear of the unknown can be eased by upskilling staff, asking for their input and explaining that AI applications are intended to augment their work, not necessarily to replace jobs.
- Fear of algorithms can be overcome by focusing on good governance, proper controls, transparency, human oversight and freedom from bias.
Bender said GuideWell has used AI technology to personalize patient care, streamline prior authorization approvals and create a chat tool for staff members, among other applications. “[S]tructure, training, ethics, technology and partnerships” are key pillars for adopting AI, according to Bender.
Making AI fair and effective
Cohen offered insights into the legal and ethical considerations surrounding AI in health care. Issues can arise at any stage of the life cycle when organizations are building a model, Cohen said, so it’s important to understand the process.
- In the first phase of development, organizations should consider factors like where the data came from, whether personal identifiers have been removed and how representative the data is of the patient population.
- The second phase involves building and validating the model. Key questions include what standards apply, how reliable the validators are, and how to balance intellectual property protection against the need for transparency.
- The third phase involves testing in a real-world setting, understanding liability, consent and regulatory control, and deciding how patients should be informed about the use of AI in their health care journeys.
- In the fourth phase, a functional model is disseminated and used for the benefit of patients, and it’s monitored for equitable use and commercial viability; one way that kind of subgroup monitoring might look in practice is sketched below.
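As a concrete illustration of that fourth-phase monitoring, a team might routinely compare a deployed model's performance across patient subgroups and trigger human review when the gap widens. The Python sketch below is purely illustrative; the column names, the accuracy metric and the 5-point threshold are assumptions, not a description of any panelist's system.

```python
import pandas as pd

def subgroup_accuracy(scored: pd.DataFrame,
                      group_col: str = "patient_group") -> pd.Series:
    """Per-subgroup accuracy, given 'y_true' and 'y_pred' label columns."""
    correct = scored["y_true"] == scored["y_pred"]  # True where prediction matched outcome
    return correct.groupby(scored[group_col]).mean()

def equity_alert(per_group: pd.Series, max_spread: float = 0.05) -> bool:
    """True when the best- and worst-served groups differ by more than
    max_spread (here, 5 accuracy points), prompting human review."""
    return float(per_group.max() - per_group.min()) > max_spread

# Hypothetical usage on a log of scored cases:
# per_group = subgroup_accuracy(scored_cases_df)
# if equity_alert(per_group):
#     print("Equity gap detected:\n", per_group)
```

The design choice worth noting is the explicit threshold: rather than leaving "equitable use" as an aspiration, it turns the fourth-phase monitoring Cohen describes into a check that can run on every batch of scored cases.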
Cohen pointed out that while medical AI can make mistakes, be biased or fail to explain its own output in some circumstances, skeptics should remember that the same is true of the clinicians who provide care – and AI models, unlike individuals, can be refined over time. In terms of liability, AI can serve as a “confirmatory tool” to support existing decision-making methods, Cohen said, for example by flagging for clinicians when a particular type of care is inappropriate.
It’s possible that AI may actually become the standard of care one day, Cohen said, and it may end up being riskier not to use it in health care settings. We haven’t arrived there yet, Cohen said – but it may be the path we are on.