Regulatory Rapporteur
June 2025 | Volume 22 | No. 6
Until the early 20th century, more people died during infancy than at any other stage of life.[1][2] Innovations in public health and medicine then sparked a longevity revolution: rapid and significant increases in life expectancy at birth, driven first by reductions in early-age mortality and later by mortality improvements at middle and older ages.[1][2]
This longevity revolution is a treasured achievement, but one with important implications for societal, health and economic policy. A large increase in the global centenarian population is predicted by 2042, the 100-year anniversary of the beginning of the post-World War II baby boom.[2] As people age, they accumulate medical conditions, which can require complex treatments and increase demand for healthcare services at a time of ever-shrinking budgets.[3] However, healthier, longer lives also have potential economic benefits, and it is possible to offset the negative economic effects of an ageing society.[4]
Health promotion and disease prevention, both at the individual level and through population-based interventions, are the way forward to counterbalance the impact of ageing and increased human longevity.[4] This is where artificial intelligence (AI)-driven big data capabilities, built on personal electronic health records and delivered as digital healthcare tools, can transform the future of the European health landscape.
Through predictive risk assessments and personalised chronic disease management, AI tools can analyse and interpret health data to assess individual health risks, identify potential problems, and provide personalised recommendations and interventions. For example, the PRECISE randomised clinical trial evaluated an AI-guided approach to improve decision-making in patients with chest pain and suspected coronary artery disease (CAD).[5] This strategy used the PROMISE minimal risk score to identify individuals at very low risk, allowing clinicians to safely defer further testing in those cases, while patients classified as intermediate or high risk received coronary computed tomography angiography (CCTA), followed by additional tests as necessary.[5]
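To make the triage logic concrete, the minimal sketch below (in Python) illustrates a risk-score-gated testing pathway of the kind PRECISE evaluated. The score semantics, the threshold and the routing labels are illustrative assumptions, not the published PROMISE minimal risk score specification.

```python
# A minimal sketch of the risk-gated testing pathway described above.
# The score range, cut-off and routing labels are illustrative placeholders,
# not the published PROMISE minimal risk score.

from dataclasses import dataclass

@dataclass
class ChestPainPatient:
    minimal_risk_score: float  # hypothetical 0-1 output; higher = more likely minimal risk

LOW_RISK_CUTOFF = 0.5  # assumed threshold, for illustration only

def triage(patient: ChestPainPatient) -> str:
    """Route a stable chest-pain patient based on a pretrained minimal-risk score."""
    if patient.minimal_risk_score >= LOW_RISK_CUTOFF:
        # Very-low-risk patients: testing can be safely deferred.
        return "defer testing; clinical follow-up"
    # All other patients proceed to non-invasive imaging first.
    return "coronary CT angiography, then further testing as needed"

if __name__ == "__main__":
    print(triage(ChestPainPatient(minimal_risk_score=0.8)))  # deferred-testing arm
    print(triage(ChestPainPatient(minimal_risk_score=0.2)))  # CCTA pathway
```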
Participants in this trial were randomly assigned to either the AI-guided precision strategy or the standard testing approach.[5] The AI strategy, which incorporated risk assessment through the PROMISE score, improved efficiency by reducing unnecessary invasive procedures (such as cardiac catheterisations) in patients without significant coronary blockages.[5] Importantly, the approach maintained patient safety, with no significant difference in serious outcomes, such as death or nonfatal heart attack, at one, three, five and seven years compared with standard care.[5][6]
In addition, AI-enabled electrocardiogram (ECG) software that detects low ejection fraction, a sign of asymptomatic left ventricular systolic dysfunction and the basis of an early heart failure clinical decision-support tool, has been shown to be cost-effective in routine clinical practice.[7] The study results suggested that AI-ECGs could be implemented system-wide, with priority given to younger patients and those in outpatient settings, to improve patients' long-term quality of life (QoL).[7]
In summary, these and other studies suggest that AI has the potential to enhance health promotion and disease prevention by enabling earlier, more personalised and cost-effective interventions.
Trust is essential
However, implementing AI tools for disease prevention and health promotion requires a good degree of trust from all stakeholders, from developers through to healthcare workers, and particularly from European citizens and patients.[8][9] Clearly articulated clinical indications, well-defined risk-based clinical testing processes and evidence generation, and continuous monitoring linked to these indications are needed to earn sufficient trust among clinicians and patients; without that trust, AI's adoption and impact on health will be limited.[10]
At the 2024 European Health Forum Gastein, it was evident that the European Commission (EC) has placed significant emphasis on earning the trust of patients and citizens as the basis of policies such as the AI Act, the Medical Devices Regulation (EU MDR), the In Vitro Diagnostic Medical Devices Regulation (EU IVDR) and the European Health Data Space (EHDS). Through the actions of 'Digital EU4Health and Health system modernization' within the European Commission's Directorate-General for Health and Food Safety, there is an emphasis on the 'trustful reuse of anonymized and pseudo-anonymized health data for specific use categories, where privacy-by-design of AI-enabled medical devices is a core value.'[11]
Regulations should enforce quality control systems to ensure that standards are upheld and that there are sanctions for failures. In addition, through a risk management transparency approach, bias in datasets needs to be minimised and reported so that these tools can be used across the full spectrum of the European population. An environment conducive to innovation should also be transparent and promote patient safety through forward-looking regulations, especially when dealing with frontier technologies such as generative AI.
Regulation as a safety lever on innovation
Although European AI regulations for medical products may currently seem burdensome, they could prove to be a key advantage for Europe's future innovation and healthcare quality, especially in a system where patients face a knowledge gap and rely on the trustworthiness of healthcare providers and the tools that they use.[9] Patients need to trust that new models of care will prioritise their needs and expectations, and be reassured that the services they are offered are evidence-based and safe.
There are concerns that regulatory burdens could push small and mid-sized companies to look for markets outside Europe, potentially resulting in a loss of innovation within Europe, but this need not be the case. Maintaining rigorous safety and performance standards, so that AI applications benefit patients without compromising their safety, is of the utmost importance, because the broader acceptance of AI in healthcare depends on public trust.
Global alignment on AI guardrails and trust
Globally, governments are implementing novel regulations to mitigate the risks associated with AI. These regulations prioritise proactive, risk assessment-driven safeguards applicable across the entire AI supply chain and throughout the AI lifecycle. For example, the Australian government's consultations on safe and responsible AI showed that its regulatory system was not fit for purpose to respond to the distinct risks that AI poses, and ten guardrails were introduced in September 2024.[12] It was acknowledged that, to unlock innovative uses of AI, a modern and effective regulatory system was needed.
Similar thinking is converging in the UK, where leaving the EU presented an opportunity to update regulations for its internal market. The current UK Medical Devices Regulations use a risk-based classification system, ranging from Class I (low risk) to Class III (highest risk) devices, with greater scrutiny required for the higher risk classes. This system will continue in the reformed regulations; however, many AI products currently sit in the lowest risk classification, meaning they can be placed on the market without an independent conformity assessment, and these will now be up-classified. The objective is to protect users and patients in the UK through greater scrutiny throughout the AI product's lifecycle; a simplified sketch of this up-classification rule follows.
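As a rough illustration (in Python), the rule described above can be thought of as a mapping from risk class to the required level of conformity assessment. The class labels follow the UK scheme, but the scrutiny descriptions and the target class for up-classified AI products are assumptions for illustration, not the text of the reformed regulations.

```python
# Illustrative sketch of the UK risk-based classification and the proposed
# up-classification of AI products. The target class and scrutiny wording
# are simplifying assumptions, not the legal text.

from enum import Enum

class RiskClass(Enum):
    I = "low risk: manufacturer may self-declare conformity"
    IIA = "medium risk: independent conformity assessment required"
    IIB = "higher risk: deeper conformity assessment"
    III = "highest risk: fullest scrutiny of design and manufacture"

def classify(current: RiskClass, is_ai_product: bool) -> RiskClass:
    """Up-classify lowest-risk AI products so they require independent assessment."""
    if is_ai_product and current is RiskClass.I:
        return RiskClass.IIA  # assumed target class, for illustration only
    return current

if __name__ == "__main__":
    print(classify(RiskClass.I, is_ai_product=True).value)   # up-classified
    print(classify(RiskClass.I, is_ai_product=False).value)  # unchanged
```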
At the 2024 European Health Forum Gastein, the commitment to exploring the regulatory differences between the UK and the EU on AI in healthcare was evident, and the divergence between the UK's lighter-touch approach and the newly enforced EU AI Act was debated at length.[13] The incorporation of fundamental rights impact assessments and human oversight was perceived as a positive aspect of the AI Act, but participants also emphasised the growing demand for clarification of the specific measures the UK will employ to address these aspects, whether in alignment with the EU AI Act or not. The advantages and disadvantages of both approaches were keenly debated, with the conclusion that fostering greater international cooperation and alignment on AI regulation is paramount to ensuring the seamless functioning of these tools at both an internal and a global scale.
AI regulatory sandboxes: A tool for global sharing of best practices and enhanced trust
One of the tools that can support the global sharing of best practices is the AI 'regulatory sandbox'.[14][15] Essentially, AI sandboxes foster innovation and facilitate the development, training, testing and validation of innovative AI systems for a limited time before they are placed on the market, pursuant to a specific sandbox plan agreed between the provider and the competent authority. Sandboxes may play different roles, from fostering business learning on the development and testing of innovative AI systems in a real-world environment, to supporting regulatory learning on how to formulate legal frameworks that guide and support businesses and their innovative AI systems under the supervision of a regulatory authority.[16]
In the EU, under the AI Act, each Member State must ensure that its competent authorities establish at least one national AI regulatory sandbox by August 2026. The European Data Protection Supervisor may also establish AI regulatory sandboxes for Union institutions, bodies, offices and agencies.[14][15]
In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) AI Airlock pilot has moved at a significant pace and has already run its first regulatory sandbox cohort.[17] Its objective is to explore the regulatory challenges of innovative AI systems by working directly with real-world products and prototypes, and to use these insights to provide evidence-based recommendations to the MHRA and other key stakeholders. The initiative facilitates collaboration between innovators and the MHRA, enabling them to pinpoint where products require additional evidence to support safety and efficacy assessment, and ultimately to resolve identified issues. It follows a robust process so that, at a later phase, manufacturers of AI medical devices can deliver what is required to demonstrate the real-world viability of these devices within a limited study period.
The AI Airlock pilot cohort includes projects such as:
- An AI-driven clinical decision-support system exploring TAG-RAG technology, a novel approach to overcoming regulatory concerns associated with the use of large language models (LLMs) in healthcare by minimising hallucinations, improving knowledge retrieval and ensuring more predictable responses (a generic sketch of the underlying retrieval-augmented generation idea follows this list).
- Radiology imaging software that automatically generates report impressions, aiming to validate AI-generated radiology reports using synthetic data.
- A real-time monitoring system to identify performance variations and safety issues in an AI-enabled medical device as part of post-market surveillance (PMS).[18]
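The first project above builds on retrieval-augmented generation (RAG), in which an LLM is constrained to answer from retrieved, vetted sources rather than from its own parametric memory. The minimal Python sketch below shows the general idea only; the corpus, the naive keyword-overlap retriever and the prompt wording are illustrative assumptions and say nothing about how TAG-RAG itself is implemented.

```python
# A minimal, self-contained sketch of retrieval-augmented generation (RAG).
# Corpus, retriever and prompt format are toy assumptions; a real clinical
# system would use vetted knowledge sources and a validated retriever.

from typing import List

KNOWLEDGE_BASE: List[str] = [
    "Aspirin is contraindicated in patients with active gastrointestinal bleeding.",
    "Beta-blockers reduce mortality after myocardial infarction.",
    "Low ejection fraction can indicate left ventricular systolic dysfunction.",
]

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Rank passages by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_terms & set(p.lower().split())), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Constrain an LLM to answer only from retrieved passages, curbing hallucinations."""
    context = "\n".join(f"- {p}" for p in retrieve(query, KNOWLEDGE_BASE))
    return (
        "Answer using ONLY the sources below; otherwise reply 'insufficient evidence'.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The prompt would be sent to an LLM; the model call itself is omitted here.
    print(build_grounded_prompt("What can a low ejection fraction indicate?"))
```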
The UK MHRA AI Airlock testing phase was completed in April 2025. The next step is to draft a comprehensive report on the AI Airlock programme based on the findings from the simulations and virtual testing, incorporating lessons learned from each pilot project as well as key insights from the overall programme evaluation.[18] A public webinar is also planned for the summer of 2025 to mark the conclusion of the pilot programme and share the final outcomes. This is an opportunity for the MHRA to share the knowledge acquired and to further promote global alignment on AI regulation.[18]
While regulatory sandboxes offer significant potential for advancing medical AI technologies, they are not without inherent risks. Although they offer a protected space for testing, they operate at limited scale.[19][20][21] As such, there can be inconsistencies with large-scale deployment; for example, AI products tested in a regulatory sandbox may not perform as well when deployed at a larger scale or in a different context.[20][21] Nevertheless, the objective of the AI Airlock pilot was not to fully test the technologies, but to guide regulatory alignment and help define the regulatory pathway for innovative AI technologies.
A key risk in the use of regulatory sandboxes is the potential for inconsistent oversight and regulatory fragmentation.[19] Different jurisdictions may have varying requirements for sandbox operations, leading to discrepancies in how medical AI is tested and approved across regions. In the EU specifically, sandboxing blurs jurisdictional boundaries and raises concerns about legality and equal treatment, creating liability risks for innovators and jeopardising informed consent from testing subjects.[22] Despite these risks, however, AI regulatory sandboxes can drive the development of trustworthy AI tools for health promotion and disease prevention.
Conclusion
The concept of regulatory sandboxes is still emerging in the public health sector, but it can support the rapid, large-scale adoption of novel AI technologies for health promotion and disease prevention. By facilitating collaboration between developers and regulators, sandboxes bridge the gap between innovation and compliance, enabling the responsible deployment of AI technologies aimed at disease prevention and health promotion. Developing responsible AI tools ensures healthcare solutions that are safe, effective and aligned with collective values and principles, helping to earn patients' trust.
The author would like to thank Célia Cruz, managing partner and Chief Regulatory Affairs Officer at Complear, for her expert peer review and constructive input into this article.
References
[1] Riley JC (2001) 'Rising Life Expectancy: A Global History'. Cambridge University Press.
[2] Olshansky SJ, Willcox BJ, Demetrius L, Beltrán-Sánchez H (2024) 'Implausibility of radical life extension in humans in the twenty-first century'. Nature Aging 4(11) pp.1635-42. doi: 10.1038/s43587-024-00702-3.
[3] Pillutla V, Landman AB, Singh JP (2024) ‘Digital technology and new care pathways will redefine the cardiovascular workforce’. The Lancet Digital Health; 6(10) e674-6. doi: 10.1016/S2589-7500(24)00193-6.
[4] Scott AJ (2021) ‘The longevity economy’. The Lancet Healthy Longevity; 2(12) e828-35. doi: 10.1016/S2666-7568(21)00250-6.
[5] Douglas PS, Nanna MG, Kelsey MD et al (2023) ‘Comparison of an Initial Risk-Based Testing Strategy vs Usual Testing in Stable Symptomatic Patients With Suspected Coronary Artery Disease: The PRECISE Randomized Clinical Trial’. JAMA Cardiology 8(10) pp.904-14. doi: 10.1001/jamacardio.2023.2595.
[6] Fairbairn TA, Mullen L, Nicol E et al (2025) ‘Implementation of a national AI technology program on cardiovascular outcomes and the health system’. Nature Medicine. doi: 10.1038/s41591-025-03620-y.
[7] Thao V, Zhu Y, Tseng AS et al (2024) 'Cost-Effectiveness of Artificial Intelligence-Enabled Electrocardiograms for Early Detection of Low Ejection Fraction: A Secondary Analysis of the Electrocardiogram Artificial Intelligence-Guided Screening for Low Ejection Fraction Trial'. Mayo Clinic Proceedings: Digital Health 2(4) pp.620-31. doi: 10.1016/j.mcpdig.2024.10.001.
[8] de Ruijter A, Hervey T, Prainsack B (2024) ‘Solidarity and trust in European Union health governance: three ways forward’. The Lancet Regional Health – Europe 46. (Accessed: 29 May 2025).
[9] McKee M, van Schalkwyk MCI, Greenley R, Permanand G (2024) 'Trust: The foundation of health systems'. European Observatory on Health Systems and Policies. (Accessed: 29 May 2025).
[10] Patel MR, Balu S and Pencina MJ (2024) ‘Translating AI for the Clinician’. JAMA 332(20) pp. 1701-2. doi: 10.1001/jama.2024.21772.
[11] European Health Forum Gastein (2024) ‘Is the AI Act a gamechanger for healthcare? The AI Act and applications in clinical practice’. (Accessed: 29 May 2025).
[12] Australian Government 'The 10 guardrails'. (Accessed: 29 May 2025).
[13] European Health Forum Gastein (EHFG) (2024) ‘Outcomes report: Shifting sands of health: Democracy, demographics, digitalisation’. (Accessed: 29 May 2025).
[14] EU Artificial Intelligence Act (2024) 'Article 57: AI Regulatory Sandboxes'. (Accessed: 29 May 2025).
[15] Official Journal of the European Union (2024) 'Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)'. (Accessed: 29 May 2025).
[16] Madiega T (2022) 'Artificial intelligence act and regulatory sandboxes'. European Union. (Accessed: 29 May 2025).
[17] GOV.UK (2024) ‘Guidance: AI Airlock pilot cohort’. (Accessed: 29 May 2025).
[18] GOV.UK (2025) 'Exploring AI in Healthcare: Insights from the AI Airlock Pilot'. (Accessed: 29 May 2025).
[19] Qiu Y, Yao H, Ren P, Tian X and You M (2025) ‘Regulatory sandbox expansion: Exploring the leap from fintech to medical artificial intelligence’. Intelligent Oncology 1(2) pp. 120-7. doi: 10.1016/j.intonc.2025.03.001.
[20] Longhurst CA, Singh K, Chopra A, Atreja A and Brownstein JS (2024) ‘A Call for Artificial Intelligence Implementation Science Centers to Evaluate Clinical Effectiveness’. NEJM AI 1(8). doi: 10.1056/AIp2400223.
[21] Domalpally A and Channa R (2021) ‘Real-world validation of artificial intelligence algorithms for ophthalmic imaging’. The Lancet Digital Health 3(8) e463-e4. Available at: https://pubmed.ncbi.nlm.nih.gov/34325850/ (Accessed: 29 May 2025).
[22] Buocz T, Sebastian P and Eisenberger I (2023) 'Regulatory sandboxes in the AI Act: reconciling innovation and safety?' Law, Innovation and Technology 15(2) pp.357-89. doi: 10.1080/17579961.2023.2245678.
Topics
- AI Airlock
- Artificial intelligence (AI)
- Digital health
- EU Artificial Intelligence Act (AIA)
- European Health Data Space (EHDS)
- In Vitro Diagnostic Medical Devices Regulation (IVDR)
- Medical Devices Regulation (MDR)
- Medicines and Healthcare Products Regulatory Agency (MHRA)
- regulatory sandboxes
- trust