USA
Court Rules in Favor of Anthropic in AI Training Copyright Case
On June 23, 2025, the U.S. District Court for the Northern District of California ruled in favor of Anthropic in a copyright-infringement lawsuit brought by three authors alleging that the company used their books without permission to train its Claude AI models. The court distinguished between Anthropic's use of pirated books and of books it legitimately purchased and scanned: while the former would constitute copyright infringement, the latter may qualify as fair use under the U.S. Copyright Act. With respect to AI training, the judge found the use highly transformative, since it aimed to create something entirely different rather than replicate the originals. The court also noted that Claude's output does not infringe the original copyrights and caused no market harm.
Federal Judge Rules in Favor of Meta in AI Training Copyright Case
On June 25, 2025, a federal judge ruled in favor of Meta in a lawsuit brought by thirteen authors claiming that Meta trained its AI models on their copyrighted books, obtained without permission from online “shadow libraries.” The court found that such use constituted fair use under copyright law.
In its reasoning, the court found that the authors failed to prove that Meta's use caused, or would likely cause, significant market harm, such as enabling AI reproduction of substantial portions of their works or the creation of competing content. Instead, Meta's use of the books was deemed reasonable and necessary for AI training purposes.
The court also rejected claims that lost licensing fees constituted harm under fair use law.
The court emphasized, however, that the ruling does not establish that all AI training on copyrighted materials is lawful, and that future cases may reach different outcomes on stronger evidence of market harm.
National Security Agencies Publish Guidance on Securing Data in AI Systems
On May 22, 2025, the NSA and other leading national security and cybersecurity agencies published joint guidance outlining essential best practices for protecting data used in AI and machine learning systems. The guidance emphasizes securing data throughout the AI lifecycle, from development to deployment and ongoing operation, recommending measures such as data provenance tracking, encryption, digital signatures, secure storage, and robust access controls. It also highlights key risks, including compromised data supply chains, malicious data manipulation, and data drift, and provides practical mitigation strategies to help organizations safeguard sensitive and mission-critical information while maintaining the reliability and integrity of AI-driven outcomes.
House Passes One Big Beautiful Bill Act with AI Provisions
On May 22, 2025, the House of Representatives approved the One Big Beautiful Bill Act, legislation that, among other things, provides $500 million through 2034 for the Department of Commerce to modernize IT systems with AI technology. The funding will replace legacy systems with commercial AI solutions, improve operational efficiency, and strengthen cybersecurity through automated threat detection. The Bill also establishes a 10-year moratorium preventing state and local governments from regulating AI systems in interstate commerce, creating uniform federal oversight. Key exceptions allow states to facilitate AI deployment, maintain criminal law enforcement, and impose reasonable fees that treat AI systems equally with comparable technologies. The legislation includes comprehensive definitions of AI systems and automated decision-making to ensure consistent application of federal oversight.
New York Passes RAISE Act Requiring AI Safety Plans from Major Developers
On June 12, 2025, the New York State Senate passed the Responsible AI Safety and Education (RAISE) Act, which mandates that AI developers investing over $100 million in advanced AI training establish and publish safety plans addressing severe risks such as automated crime, bioweapons, and other large-scale harms. The law requires these companies to disclose serious incidents, including dangerous model behavior and security breaches, and empowers the Attorney General to enforce compliance through civil penalties.
EU
EDPB Publishes Training Resources on Privacy and AI
The European Data Protection Board (EDPB) has published two new training resources — Law & Compliance in AI Security & Data Protection and Fundamentals of Secure AI Systems with Personal Data — to address skill gaps in AI and data protection. Developed under the Support Pool of Experts programme at the request of the Hellenic Data Protection Authority, the first document targets legal professionals such as Data Protection Officers, focusing on compliance with the GDPR, the AI Act, and related regulations, while the second is designed for technical professionals, including cybersecurity experts and AI developers, covering secure AI system design and privacy integration. The EDPB plans to launch a one-year pilot for community-driven updates, allowing external contributors to propose changes and comments to keep the materials current with evolving AI practices.
Israel
Privacy Protection Authority Releases Draft Guidelines on Privacy and AI
On April 28, 2025, the Privacy Protection Authority (PPA) released draft guidelines clarifying the intersection between privacy laws and artificial intelligence systems. In the guidelines, the PPA clarifies that the Privacy Protection Law applies to AI systems and underscores the necessity of a legal basis for processing personal data through such systems, including information that AI systems derive from personal data. The PPA further elaborates on the standards and requirements for obtaining informed consent and asserts that data scraping also requires the data subject's consent. Additionally, the draft guidelines emphasize the need for robust corporate governance led by senior management. The PPA also addresses the right to amend personal data in the context of AI, expressing its intention to prioritize enforcement of the rights to amend and access personal data. With respect to data security, the PPA notes that most AI-based databases will be classified as requiring medium or high statutory security levels, and stresses the significance of adhering to the principle of data minimization.
This publication is provided as a service to our clients and colleagues, with explicit clarification that each specific case requires individual examination and discussion in writing.
The information presented here is of a general nature and is not intended to address the unique circumstances of any individual or entity. Although we strive to provide accurate and up-to-date information, we cannot guarantee its accuracy on the day it is received, nor that it will remain accurate in the future. Do not act on the information presented without appropriate professional advice following a comprehensive and thorough examination of the specific situation.