US
On January 23, 2025, President Trump issued an executive order establishing U.S. policy to maintain global AI leadership and promote national security and economic competitiveness. The order signals a shift towards innovation and the development of “AI systems that are free from ideological bias or engineered social agendas”. Since then, many major tech companies have responded with proposals: OpenAI focused on a regulatory strategy emphasizing innovation freedom, democratic AI export, and infrastructure development, while Google emphasized AI investment, government modernization, and international pro-innovation approaches. An AI action plan must be presented within 180 days of the executive order.
- California Attorney General Issues 2 AI Advisories
On January 13, 2025, California’s Attorney General issued two advisories concerning AI technologies. The first outlines how California’s existing consumer protection, civil rights, competition, and data privacy laws apply to artificial intelligence systems, emphasizing that AI developers and users must ensure their systems comply with these laws to prevent bias, discrimination, deception, and other potential harms to California’s people, institutions, economy, and infrastructure. It also highlights new AI-specific legislation taking effect in 2025.
The second advisory focuses on AI used in healthcare. Acknowledging the technology’s rapid adoption in the field, it clarifies that both general state laws and sector-specific laws concerning patient privacy and autonomy apply to entities developing, distributing, and deploying AI in the health sector.
- FDA Draft Guidance on AI-Enabled Medical Devices
On January 7, 2025, the Food and Drug Administration (FDA) published a draft guidance outlining how it plans to regulate AI-enabled medical devices, i.e. devices implementing one or more AI models to achieve their purpose. The document has been issued for public comment, and the FDA is currently accepting feedback from stakeholders before finalizing the guidance. The guidance advises sponsors to include specific information in their marketing submissions for FDA approval of AI-enabled medical devices. Key recommendations include (i) disclosing the use of AI models and how they support the device’s intended use; (ii) documenting data collection, handling, and use practices; (iii) describing the AI training process; and (iv) detailing cybersecurity measures addressing AI risks such as overfitting, data leaks, bias, and data poisoning.
- Delaware Court Rules Against Ross Intelligence in Thomson Reuters Copyright Dispute
On February 11, 2025, a Delaware federal court ruled against Ross Intelligence in its copyright dispute with Thomson Reuters. Judge Bibas found that Ross had infringed the copyright in Westlaw’s headnotes when training its legal AI tool, rejecting Ross’s fair use defense. The court reasoned that Ross’s commercial use directly competed with Thomson Reuters, making the use non-transformative, and that it harmed Thomson Reuters’s market opportunities. Unlike prior “intermediate copying” cases involving necessary code reproduction, Ross copied editorial content to build a competing product. The ruling suggests fair use may not protect AI training on copyrighted materials when the resulting product directly competes with the rights holder, even if protected content is not visible in the final output.
EU
The European Union’s Artificial Intelligence Act began its phased rollout on February 2, 2025, with the first set of rules now in force. These rules cover the definition of AI systems, AI literacy requirements (obliging AI system providers and deployers to ensure their teams are knowledgeable about AI technologies), and a first set of banned AI practices considered to pose unacceptable risks, such as social scoring, crime risk prediction, unauthorized facial recognition, emotion inference, biometric categorization, and specific uses of ‘real-time’ biometric identification.
On February 26, 2025, the EPRC published a report on algorithmic discrimination and the interactions between the AI Act and the GDPR (the “Report”). The Report notes that, despite the GDPR’s limitations on processing special categories of personal data, processing such data in the context of high-risk AI systems, to the extent necessary for bias monitoring, detection, and correction, may be compliant with the GDPR, as it serves a “substantial public interest” in line with the provisions of both the GDPR and the AI Act. The Report concludes with several conditions for compliance with the GDPR and the AI Act: implementing cybersecurity measures to prevent leaks; complying with GDPR principles such as data minimization and privacy by design; processing special categories of data only when strictly necessary to protect fundamental rights; and adhering to the lawful grounds for processing special categories of data set out in the GDPR, such as processing necessary for reasons of substantial public interest.
Global and News
- Investigations and Actions Against DeepSeek
Data protection authorities across multiple countries have launched investigations against DeepSeek, a Chinese AI chatbot, over concerns about user data storage in China and GDPR non-compliance. Italy took the most decisive action, blocking the app entirely after DeepSeek claimed it falls outside EU jurisdiction. Additionally, formal investigations or official warnings were issued in Belgium, the Netherlands, Germany, Poland, Luxembourg, France, and Ireland, with each country expressing specific concerns about data security risks. Outside Europe, Australia and Taiwan banned DeepSeek on government devices citing national security concerns, while South Korea suspended new downloads until data protection violations are addressed. In the United States, Texas prohibited the app on government devices, with Attorney General Ken Paxton initiating an investigation into potential violations of privacy and data security laws. The primary concern among EU authorities is that user information stored on Chinese servers could be accessed by the Chinese government under national intelligence cooperation laws, a risk compounded by the fact that China has not been recognized by the EU as providing adequate personal data protection standards.
In the EU, we expect the publication of additional guidelines and codes of conduct to assist with the implementation of the EU AI Act in the coming year.
This publication is provided as a service to our clients and colleagues; each specific case requires individual examination and discussion in writing.
The information presented here is of a general nature and is not intended to address the unique circumstances of any individual or entity. Although we strive to provide accurate and current information, we cannot guarantee that the information is accurate as of the day it is received, nor that it will remain accurate in the future. Do not act on the information presented here without appropriate professional advice following a comprehensive and thorough examination of the specific situation.