AI Quarterly Update 2024 and Outlook for 2025
In 2024, the AI field developed significantly. On the legislative side, this culminated in the historic enactment of the EU AI Act. This groundbreaking legislation established the first comprehensive regulatory framework for AI, setting a standard not only in Europe but also influencing global efforts to oversee and manage the rapid progress of AI technology. Below please find updates relevant to Q4 2024:
Israel
Following Israel's accession to the Council of Europe's Framework Convention on Artificial Intelligence, an inter-ministerial team, including the Ministries of Justice and the Treasury as well as all of the financial regulators, released an interim report, open for public consultation, addressing the major challenges anticipated from the use of AI in the financial sector. The report evaluates these challenges and offers recommendations for areas such as investment advice, portfolio management, credit activities, and insurance. Key recommendations emphasize the need for explainability, human oversight, notification obligations, liability, and overall AI governance in financial-sector organizations. With respect to privacy, the report confirms that existing privacy laws apply to the use of data in AI systems, while recommending the implementation of heightened consent requirements.
We anticipate that Israel will not issue comprehensive AI regulation, but will instead continue to rely on reports and guidance issued by governmental authorities.
Europe
In addition to the EU AI Act, which entered into force in August 2024, the new Product Liability Directive, effective December 2024, extends no-fault liability for defective products to software and AI systems, recognizing them as products and addressing issues such as software updates and the establishment of causal links.
European authorities also continued to provide guidance on various aspects of AI. The European Data Protection Board issued an opinion addressing significant challenges arising from AI models' use of personal data, offering guidance on issues such as when personal data used by AI models can be considered "anonymized", reliance on the legal basis of legitimate interest when processing personal data with an AI model, and the treatment of an AI model developed through unlawful data processing practices. Additionally, the European Commission published a draft Code of Practice for general-purpose AI models, emphasizing transparency, copyright obligations, and risk management. The Code aims to assist general-purpose AI developers in aligning with the EU AI Act and is expected to receive final approval in May 2025.
Since the launch of ChatGPT, OpenAI has been continuously monitored by the Italian Data Protection Authority, the Garante, which has previously taken several enforcement actions against it. Recently, the Garante imposed a €15 million fine on OpenAI for failing to notify the relevant parties of a data breach that occurred in March 2023, for processing users' personal data to train ChatGPT without establishing a proper legal basis, for breaching its transparency obligations to users, and for failing to implement necessary age-verification measures. The Garante also required OpenAI to conduct a six-month information campaign on the functioning of ChatGPT.
In the EU, we expect the publication of additional guidelines and codes of conduct to assist with the implementation of the EU AI Act in the coming year.
US
In California, two laws are scheduled to enter into effect in January 2026: the "Generative Artificial Intelligence: Training Data Transparency" act, which mandates transparency regarding the datasets used to develop generative AI systems, and the "California AI Transparency Act", which requires AI systems with significant user bases to provide AI detection tools and to clearly label AI-generated content. Meanwhile, New York enacted a law regulating state agencies' use of automated decision-making tools. The law prohibits the use of AI in providing state benefits or in decisions affecting individuals' rights unless the use is subject to human review. Agencies must disclose their AI usage, conduct biennial risk assessments, and cease use of a tool if biased outcomes are found.
The Commodity Futures Trading Commission issued an advisory on the use of AI in its regulated markets, outlining potential applications and reminding registrants of their compliance obligations under existing regulations.
In the US, given the country's granular, sector- and state-specific approach to AI, we anticipate the continued emergence of AI-related laws and guidelines at the state level.
___________________
This publication is provided as a service to our clients and colleagues, with explicit clarification that each specific case requires individual examination and discussion in writing.
The information presented here is of a general nature and is not intended to address the specific circumstances of any individual or entity. Although we strive to provide accurate and timely information, we cannot guarantee that the information is accurate as of the day it is received, or that it will remain accurate in the future. Do not act on the information presented here without appropriate professional advice following a comprehensive and thorough examination of the specific situation.