The first day a Ventura, Calif.-based doctor used a new artificial intelligence (AI)-assisted tool to record patient conversations and update electronic medical records was also the first day in a long while that she made it home for dinner. Thanks to the algorithm, she cried tears of relief.
That's what Dr. Jesse Ehrenfeld, president of the American Medical Association, heard in early January.
Anecdotes like this abound in the healthcare industry. Doctors now finish their visits on time, see more patients, and spend more time talking with patients during each visit, all thanks to AI. “Anything that allows me to focus my time and attention on patients is a gift,” Ehrenfeld told Medscape Medical News.
AI has the potential to do just that: to make healthcare more efficient, affordable, accurate, and equitable. It is already reshaping medical practice. As of October 2023, the U.S. Food and Drug Administration (FDA) had approved nearly 700 AI- and machine learning-enabled medical devices. New companies are emerging with the promise of software that will transform everything from billing and administration to diagnostics and drug discovery.
But whatever its potential, experts agree that AI cannot have free rein. Without oversight, the benefits of AI in healthcare could easily be outweighed by its harms. These algorithms, many of which digest vast amounts of data and can modify and adapt themselves, must be kept in check. But who will build the necessary guardrails for this budding technology, and how they will be enforced, are questions no one can yet answer.
Risk: Changing Medical Devices
Currently, most of the algorithms approved by the FDA are “locked,” Lisa Dwyer, a partner at King & Spalding and a former senior policy adviser to the agency, told Medscape Medical News. Many future algorithms, however, will be adaptive, adjusting their behavior as they continue to learn from new inputs.
“What do we do with FDA products that are ever-changing?” It's a question Dwyer posed directly to FDA Commissioner Robert M. Califf in a January interview.
In that interview, Califf acknowledged that while there are many unknowns regarding adaptive AI, postmarket evaluation and reporting back to the agency will be essential.
“However, that is a difficult job, [requiring] resources the FDA doesn't necessarily have,” Dwyer said.
Risk: Bias
AI is only as unbiased as the data used to train it. Police algorithms that use past arrest data to predict crime have reinforced racial profiling. Google's online ads showed high-paying jobs to men more often than to women. Computer-assisted diagnostic systems have proved less accurate for Black patients than for White patients.
“If you're not very intentional, two things are going to happen,” Ehrenfeld said. “One, it would exacerbate existing health inequalities, and two, in certain circumstances, it would unintentionally and unknowingly harm patients.”
To avoid dangerous biases, regulators will need to evaluate more than just the algorithms themselves; they will have to consider the “setting and workflow” in which [the AI] is applied to patient care, said Alison Callahan, PhD, a clinical data scientist on the Stanford Health Care data science team.
Callahan is part of a team that simulates how different AI tools would perform within a specific healthcare system. They test algorithms' effectiveness across a variety of use cases and examine outcomes in specific patient populations to see whether the tools actually benefit patients in the real world. The team are “strong believers in the importance of a more holistic evaluation of not just the model, but how it will be used, before it is deployed,” Callahan said.
Risk: Hacking and Surveillance
Sophisticated algorithms' need for ever more data can put them inherently at odds with patient security and privacy, said Eric Sutherland, senior health economist and AI expert at the Organisation for Economic Co-operation and Development.
AI runs on data, and more data generally makes algorithms more accurate, but it also brings risks for patients. The large datasets that power AI tools are prime targets for hackers, Sutherland said. To best protect patients, regulations must govern how health data are stored and who has access to them.
Because of its ability to identify complex patterns, AI also has an uncanny capacity to infer information that patients never intended to share. An algorithm can guess where a photo was taken. An AI-powered chatbot can infer personal details from what you type in a chat. AI can even predict, from the tone of your voice, whether you are planning to break up with your partner. This ability to surface sensitive information puts that information at risk of unauthorized sharing and surveillance.
“There is a human right to privacy and a human right to benefit from science,” Sutherland said. A key question for regulators, he said, is how to get the most out of algorithms while minimizing harm to patients.
Risk: Accuracy and Responsibility
No existing test or treatment is perfect, and neither are any AI-powered tools. But what error rate are we willing to accept from an algorithm?
False positives can waste medical resources, and false negatives can cost patients their lives, Dwyer said. Regulations will need to establish acceptable error rates and determine how algorithms are monitored, both for dirty (defective) data and for mistakes the algorithm itself makes.
Regulators also need to decide who is liable when an error occurs, Sutherland said. If an algorithm misdiagnoses a patient, who is responsible for that mistake: the software developer, the health system that purchased the AI, or the doctor who used it?
Uncharted Waters
In October 2023, President Biden issued the Safe, Secure, and Trustworthy AI executive order. It called on developers to share safety data and critical findings with the U.S. government and urged Congress to pass data privacy legislation.
“This is an incredibly dynamic technology,” Michelle Mello, a professor of health policy and law at Stanford University in California, told Medscape Medical News. “That makes it difficult for Congress to sit down and pass laws.” For regulation to be effective, she said, it must be “very nimble.”
Anna Newsom, chief legal officer at Providence, a West Coast-based health system, said many existing regulations aimed at protecting patients also apply to AI. “For example, a large language model might utilize protected health information, which would implicate HIPAA,” she added.
The FDA is already evaluating algorithms that are considered medical devices—algorithms intended to treat, cure, prevent, mitigate, or diagnose human disease.
The agency is also exploring different regulatory paradigms for reviewing software-based medical devices. From 2019 to 2022, the FDA piloted a precertification program that evaluated organizations rather than individual products.
Companies that were precertified became eligible for a less onerous premarket review. The downside, Ehrenfeld said, is that this approach “relies solely on post-market surveillance.”
“From a practical standpoint, the FDA probably cannot hire enough reviewers to review every product,” Ehrenfeld added. And as for postmarket surveillance of adaptive algorithms, “the U.S. doesn't have the infrastructure to do that at scale. It doesn't exist,” he said.
The reality is that the FDA is going to need help.
AI oversight could follow a traditional regulatory model, Mello said, in which Congress passes laws and government agencies issue rules on AI safety. Or AI could be treated like the quality of a doctor's care, left to third parties with little government intervention. A third option sits somewhere in between: the government is involved, but not as heavily as in the first approach, Mello said.
Califf and other experts agree that public-private partnerships are the likeliest solution. A “community of actors” is needed, Califf said, to evaluate algorithms and prove that they are beneficial and do no harm, both before and after implementation.
However, it is not yet clear who those entities will be. A recently published article in JAMA proposed a nationwide network of health AI assurance labs to monitor AI. In this scenario, the government would fund designated centers of excellence to vet, certify, and monitor algorithms used in healthcare.
Whatever the strategy, meaningful pieces of a U.S. regulatory framework are expected to take shape within the next year or two. “I don't think it's going to be one big piece of legislation,” Mello said. Some of the processes outlined in the executive order have 6-month or 1-year deadlines, so those will be carried out. And some of the assurance labs will be up and running within the next few years, she said.
As for physicians, whether they are excited or concerned about AI, “you're not alone,” Ehrenfeld said. According to recent American Medical Association data, 41% of physicians surveyed reported being equally excited and concerned. And Medscape's “Physicians and AI Report 2023” found that 58% of physicians are still not enthusiastic about AI in medical settings.
“There's a lot of possibility. What we want is for [AI] to solve problems in medicine, but it's good to be cautious because patients' lives are at stake,” Ehrenfeld said.
Donavyn Coffey is a Kentucky-based journalist who reports on healthcare, the environment, and everything that affects the way we eat. She has a master's degree from the Arthur L. Carter Journalism Institute at New York University and a master's degree in molecular nutrition from Aarhus University in Denmark. More of her work can be found in Wired, Teen Vogue, Scientific American, and elsewhere.