ARTICLE SUMMARY:
AI large language models recently entered the realm of regulated medical devices. Market Pathways spoke with the Germany-based company Prof. Valmed about its experience gaining the first CE mark in the space for its clinical decision support system, and we explore the oversight challenges ahead for these rapidly advancing technologies.
Large language models (LLMs) in the mold of ChatGPT have quickly become ubiquitous in society, including in many workplaces. That, to be sure, includes hospitals and healthcare practices, where clinicians are increasingly turning to their friendly chatbot not only to summarize and craft clinical notes, but also to support diagnoses, look up drug interactions, and handle other clinical tasks.
On a parallel but much slower path, medtech innovators and regulators are working to establish medically validated versions of these generative AI tools that could be integrated into healthcare systems with greater clinical accuracy and stronger patient privacy protections. Major questions about the transparency, reliability, and consistency of generative AI, whose outputs aren’t always readily explainable and can include inaccurate “hallucinations,” make the prospect of validating long-term safety and effectiveness difficult. These barriers were discussed in detail during a November 2024 FDA advisory committee meeting weighing the regulatory considerations of generative AI-enabled devices. The US agency, despite having authorized more than 1,000 AI/machine learning-enabled devices, has yet to authorize an LLM-based or any other generative AI device.
But the first LLMs have, in fact, already passed through regulatory gauntlets. In 2023, a mental health chatbot, Limbic Access (from the UK firm Limbic), was certified as a medical device in the UK. And earlier this year, Prof. Valmed (from the Germany-based company of the same name) became the first general-purpose, LLM-based clinical decision support tool for clinicians to gain a CE mark in the EU.