In a recent review published in natural medicineA group of authors investigated the regulatory gaps and potential health risks of artificial intelligence (AI)-driven wellness apps, particularly those that address mental health crises without sufficient oversight.
study: Health risks of generative AI-based wellness apps. Image credit: NicoElNino/Shutterstock.com
background
Rapid advances in AI chatbots such as Chat Generative Pre-trained Transformer (ChatGPT), Claude, and Character AI are transforming human-computer interaction by enabling fluid and free-flowing conversations.
Projected to grow to a $1.3 trillion market by 2032, these chatbots provide personalized advice, entertainment, and emotional support. Healthcare, particularly in the field of mental health, provides cost-effective, non-stigmatizing support and helps bridge the accessibility and awareness gap.
Advances in natural language processing will enable these “generative” chatbots to provide complex responses, enhancing mental health support.
Its popularity is evidenced by the fact that millions of people use AI “companion” apps for various social interactions. Further research is essential to assess risks, ethics, and effectiveness.
Regulation of Generative AI-Based Wellness Apps in the United States (US)
Generative AI-based applications, such as companion AI, occupy a regulatory gray area in the United States because they are not explicitly designed as mental health tools but are often used for such purposes.
These apps are governed by the Food and Drug Administration's (FDA) distinction between “medical devices” and “general health devices.” Medical devices require strict oversight by the FDA and are intended to diagnose, treat, or prevent disease.
In contrast, common wellness devices are not subject to strict FDA regulations because they promote a healthy lifestyle without directly addressing a medical condition.
Most generated AI apps are classified as general wellness products, make broad health-related claims without promising relief from any specific disease, and fall outside the strict regulatory requirements for medical devices. Masu.
As a result, many apps that use generated AI for mental health purposes are sold without FDA oversight, highlighting important areas of the regulatory framework that may need to be reevaluated as technology advances. I am.
General health risks of wellness apps that utilize generative AI
The FDA's current regulatory framework distinguishes between general wellness products and medical devices, but this distinction does not adequately address the complexity of generative AI.
The technology features machine learning and natural language processing and operates autonomously and intelligently, making its behavior difficult to predict in unexpected scenarios and edge cases.
This unpredictability, combined with the opaque nature of AI systems, raises concerns about potential misuse and unintended consequences in wellness apps marketed for mental health benefits, and the latest regulations This highlights the need for a better approach.
The need for empirical evidence in AI chatbot research
Empirical research on mental health chatbots is still in its infancy and has largely focused on rule-based systems within medical devices rather than conversational AI in wellness apps.
Research has shown that while scripted chatbots are safe and somewhat effective, they lack the individual adaptability of human therapists.
Furthermore, most studies have investigated the technical limitations of generative AI, such as inaccurate outputs and the opacity of “black box” models, rather than user interaction.
There is a critical lack of understanding about how users engage with AI chatbots in the context of wellness. The researchers suggest analyzing real user-chatbot interactions to identify risky behavior and testing how apps react to simulated crisis scenarios.
This two-step approach involves direct analysis of user data and “app audits,” but is often hampered by data access restrictions imposed by app companies.
Research has shown that AI chatbots frequently respond incorrectly to mental health crises, highlighting the need for improved response mechanisms.
Regulatory challenges for generative AI in non-medical applications
Generative AI applications that are not intended for mental health may still pose risks and require broader regulatory oversight beyond the current FDA framework focused on intended use.
Regulators may need to force up-front risk assessments by developers, especially for common wellness AI applications.
Additionally, potential health risks associated with AI apps require clearer oversight and guidance. Alternative approaches could include tort liability for failure to manage health-related scenarios, such as detecting and addressing suicidal thoughts in users.
These regulatory actions are important to balance innovation and consumer safety in the evolving landscape of AI technologies.
Strategic risk management in generative AI wellness applications
App managers in the wellness industry that utilize generative AI must proactively manage safety risks to avoid potential liability, brand damage, and loss of user trust.
Administrators must evaluate whether they need the full capabilities of advanced generative AI, or whether a more limited but scripted AI solution will suffice.
Scripted solutions offer more control and are suitable for sectors that require tight oversight such as health and education, and offer built-in guardrails, but can limit user engagement and future growth. There is a possibility that
Conversely, more autonomous generative AI can enhance user engagement through dynamic, human-like interactions, but increases the risk of unexpected problems.
Generate AI to make wellness apps more secure
Administrators of AI-based wellness applications can prioritize user safety by notifying users that they are interacting with an AI rather than a human, equipping them with self-help tools, and optimizing the app’s safety profile. need to do it.
While basic steps include informing and equipping users, the ideal approach would improve user well-being, proactively reduce risk, and protect both consumers and brands. Includes all three actions to protect.