The AI regulatory landscape is evolving rapidly, and there is a growing need for regulatory guidance tailored to the healthcare and biomed industry.
Below are the main regulations and some key takeaways to consider.
The European Union Artificial Intelligence Act (the EU AI Act)
Check whether your AI system falls under the scope of the recently published EU AI Act. Note that even if you are not located in the EU, you might fall under the scope of the EU AI Act if your AI output will be used in EU Member States or if you are developing or distributing in the EU.
The EU AI Act exempts AI systems, and output thereof, that are developed and put into service for the sole purpose of scientific research and development. However, the scope of this exemption is not yet entirely clear.
If you fall under the scope of the EU AI Act, check which risk category your product falls into:
- Ensure your product does not involve one of the prohibited practices, which include (but are not limited to): social scoring, deceptive and manipulative techniques, exploitation of vulnerabilities of a certain person or group of people, and creation of a facial recognition database through untargeted scraping of facial images from the internet or CCTV footage.
Certain activities, such as emotion recognition, may be permitted for medical use (while forbidden in education or the workplace) and will be categorized as high risk.
- Check if your product meets the definition of a “high risk AI system”. Your product might be a high risk AI system if your AI-enabled software is regulated under various legal frameworks, including the EU MDR (for medical devices) or the EU IVDR (for in vitro diagnostic devices), and accordingly requires a conformity assessment. As a high risk AI system, your product will be subject to a list of mandatory requirements.
- If you are in the drug discovery or digital health field and your product does not fall into the categories above, it might be a “low risk AI system”, which means you will be subject to transparency requirements.
- If you are using a general purpose AI to interact with customers, make sure you adhere to the transparency requirements, particularly informing users that they are interacting with an AI system.
US Federal Guidance
President Biden issued Executive Order 14110 in October 2023, followed by guidance from the Office of Management and Budget (OMB). The OMB guidance covers AI that is developed, used, or procured by or on behalf of covered federal agencies that impact safety or rights. Medical-related AI products used or purchased by such agencies may be presumed to impact safety, requiring these agencies to implement minimum practices to mitigate the associated risks. As per these strategic documents, future regulation of AI in health is anticipated.
Recently, and following the OMB guidance, the US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and the Centers for Medicare and Medicaid Services (CMS) published their final rule prohibiting algorithmic discrimination. Accordingly, health care providers, insurers, grantees, and others covered by the rule are required to mitigate the risk of discrimination when using AI decision support tools that affect patients’ care.
Additionally, several US agencies have issued a joint statement on enforcement efforts regarding discrimination and bias in automated systems, affirming their commitment to enforcing existing legal authorities.
The Federal Trade Commission (FTC) has also issued notices declaring its intention to enforce the existing FTC Act prohibiting unfair and deceptive practices when AI tools violate it, emphasizing fairness, privacy, and anti-discrimination as key issues.
Although there is still no federal AI law in the US, several US states have proposed or enacted specific laws addressing certain aspects of AI. See also the US Senate AI Working Group’s recently released AI roadmap.
Regulatory Authorities’ Guidance
Especially in an uncertain regulatory environment, it is advisable to be aware of the professional guidance of the relevant regulatory authorities. Below are some examples:
- FDA’s policy on AI and medical products
- FDA draft guidance: marketing submission recommendations for a predetermined change control plan for AI-enabled device software functions
- FDA, Health Canada & MHRA – Good Machine Learning Practices for medical devices
- FDA – Using AI in developing drugs and biological products (discussion paper)
- EMA Draft reflection paper on the use of artificial intelligence in the lifecycle of medicines
The Israeli Perspective
If your product is an AI-based medical device, it requires approval from the Israeli Ministry of Health (MoH) medical device division (AMAR), and having CE or FDA approval can help expedite receipt of such approval. If you are testing your device in medical research, the study must receive the required regulatory approvals, such as Helsinki committee approval and the medical institution director’s approval (and, in certain cases, MoH approval as well).
However, the Israeli regulatory bodies have not yet issued concrete guidance regarding AI, other than the strategic document on Principles of Policy, Regulation and Ethics in the Artificial Intelligence Field.
As the Israeli health regulator often relies on the policies of the FDA and the EU regulatory bodies, and as those policies are aligned with the responsible AI approach set out in the above-mentioned Israeli strategic document, it is advisable to monitor recent and future developments published by the FDA and EU health regulatory bodies.
Privacy
Using AI tools requires extra caution regarding the use of personal information. Make sure you comply with the applicable regulation – HIPAA and state law in the USA, the GDPR and Member State laws in the EU, and the Privacy Protection Law and Regulations in Israel. Also, make sure your prompt or input to the AI tool does not include personal or sensitive information unless you have verified that you are authorized to include it and that doing so complies with applicable laws.
Legal considerations
When developing AI-based products, including the use of generative AI, there are many more legal considerations regarding ownership of the product, input, and output, protection of your data, and commercial agreements. For a general review of such considerations, see here and here.
AI Risk Management Policy
As regulatory frameworks are being established, it is advisable to formulate your own AI risk management policy. Start by mapping out the use of AI and its impact on each stage of your product’s development. Our AI team would be happy to assist you in doing so.
This publication is provided as a service to our clients and colleagues, with explicit clarification that each specific case requires individual examination and discussion in writing.
The information presented here is of a general nature and is not intended to address the unique circumstances of any individual or entity. Although we strive to provide accurate and up-to-date information, we cannot guarantee that the information is accurate on the day it is received, or that it will continue to be accurate in the future. Do not act on the information presented without appropriate professional advice following a comprehensive and thorough examination of the specific situation.