Quarterly AI Update | Q3 2025

17 September, 2025


Global:

Claude Consumer Terms: User Data Now Used for AI Training Unless Opted Out

On 28 August 2025, Anthropic revised its Consumer Terms and Privacy Policy for its Claude Free, Pro, and Max plans. Under the updated terms, unless users opt out, all future chats and coding sessions may be stored for up to five years and used to train Claude’s models. This reverses Anthropic’s original stance, under which data was not used for AI training unless users opted in, and data not used for training was deleted within 30 days. New customers must select their preference during sign-up, while existing users must do so by 28 September 2025. Commercial offerings (Claude for Work, Government, Education, and the third-party API) are unaffected by this change. Because an inadvertent opt-in could expose confidential or sensitive information, organizations should review the update, verify employees’ selections, and issue internal guidance as needed.

USA:

Georgia Court Dismisses Defamation Lawsuit Against OpenAI

A Georgia court dismissed a defamation lawsuit brought by radio host Mark Walters against OpenAI after ChatGPT, responding to a journalist’s prompts, generated false claims that Walters had embezzled funds. The journalist recognized the claims were false and never published or shared them with anyone except Walters himself.

The court provided three main reasons for dismissing the case. First, it found that ChatGPT’s output, which included clear disclaimers that it may provide inaccurate information, could not reasonably be interpreted as stating actual facts. Second, the court determined that Walters failed to prove OpenAI was negligent, particularly since it had implemented strong safeguards and user warnings to prevent mistaken output.

Finally, the court held that Walters could not recover damages because he admitted to suffering no actual harm and had not requested OpenAI to correct the information. This decision sets a precedent for AI companies, highlighting that robust safeguards and clear user warnings can offer significant legal protection against defamation claims.

Anthropic Reaches $1.5 Billion Settlement Over Use of Pirated Content for AI Training

In June 2025, an order issued by a California court clarified fair use boundaries for AI training. The order held that Anthropic PBC’s use of lawfully obtained books to train its language models constitutes fair use, as the models do not reproduce the original works. However, it denied similar protection to Anthropic’s use of pirated works, finding that acquiring them inherently infringed the rights holders’ intellectual property even if the works were eventually used for AI training. On September 5, 2025, Anthropic agreed to a $1.5 billion class settlement over the pirated works, which also requires it to destroy any infringing copies.

The decision establishes that AI developers can use lawfully acquired text for training under the fair use doctrine, but similar protection would not extend to use of pirated materials.

White House AI Action Plan: Key Legal and Compliance Implications

The Trump administration’s AI Action Plan, released July 23, rescinds former President Biden’s Executive Order 14110, which pushed for AI regulation, arguing that such regulation stifles innovation and benefits large tech companies.

The plan’s three pillars are: accelerating innovation, building infrastructure, and leading international diplomacy. It removes regulatory barriers, encourages use of open-source AI to promote commercial and governmental experimentation, and streamlines permits to establish domestic data centers, semiconductor manufacturing, and energy infrastructure to support AI developments’ needs.

Internationally, the plan seeks to expand American AI export and tighten export controls on critical hardware. It aims to protect U.S. innovations, assess national security risks from advanced AI, and recommends measures to combat risks such as AI-powered deepfakes.

Trump Administration Issues Sweeping Executive Orders to Reshape Federal AI Policy, Infrastructure, and Global Competitiveness

In line with its AI Action Plan, on July 23, 2025, President Donald Trump signed three executive orders significantly reshaping U.S. federal policy on AI. The first order, “Preventing Woke AI in the Federal Government”, requires all AI systems used by federal agencies to be ideologically neutral and truth-seeking, specifically prohibiting the promotion of diversity, equity, and inclusion concepts. Agencies must include contract terms allowing termination at the vendor’s cost if these principles are violated.

The second order, “Accelerating Federal Permitting of Data Center Infrastructure”, streamlines federal approvals for large AI data centers and related infrastructure. It offers financial incentives, reduces regulatory barriers, and directs agencies to make federal lands available for data center development. The third order, “Promoting the Export of the American AI Technology Stack”, establishes a program to support U.S. AI exports through federal financial tools and coordinated diplomatic efforts aimed at maintaining American dominance in global markets.

Senate Blocks One Big Beautiful Bill Moratorium, Preserving States’ Authority Over AI Laws

In a decisive 99-1 vote, the U.S. Senate removed a proposed 10-year ban on state-level AI regulation from the One Big Beautiful Bill. This outcome preserves the ability of individual states to enact their own AI regulations, rather than deferring all authority to federal action. The move follows bipartisan criticism that a moratorium would have left a regulatory vacuum, potentially allowing unchecked AI development and limiting consumer protections.

With the moratorium provision now stripped, businesses should anticipate a continued patchwork of state-level AI rules and monitor both state and federal AI legislation initiatives as they continue to evolve.

Illinois Imposes Strict Regulation and Penalties on AI Use in Therapy Services

Illinois has enacted the Wellness and Oversight for Psychological Resources Act, which strictly limits the use of artificial intelligence in therapy and psychotherapy services to licensed professionals. Under the new law, which took effect in August 2025, AI may only be used by a licensed professional to assist with administrative or supplementary support tasks, and the licensed professional must maintain full responsibility for all AI-related interactions, outputs, and data use. Informed consent is required when sessions are recorded or transcribed, and violations may result in civil penalties of up to $10,000 per violation. The Act also upholds confidentiality requirements and exempts religious counseling, peer support, and general self-help resources from its scope.

EU:

No Grace Period for EU AI Act: Legal Deadlines Remain Firm

The European Commission has confirmed that there will be no grace period or pause in the implementation of the EU AI Act, despite calls from industry and some member states to delay enforcement. The legal deadlines set out in the Act remain unchanged: prohibitions on certain AI practices are already in effect, obligations for general-purpose AI models will apply from August 2025, and requirements related to high-risk AI systems will take effect in August 2026. While the Commission acknowledged industry concerns and is offering support measures such as an AI service desk and a voluntary code of practice, it emphasized that the legal text and its deadlines are binding and cannot be postponed. Companies should prepare for compliance according to the established timeline, as no formal extension or grace period will be granted.

CNIL Issues Practical GDPR Guidance for AI System Development

The French data protection authority (CNIL) has released its first comprehensive recommendations, designed to complement the EU AI Act and ensure organizations comply with the GDPR while developing AI systems, particularly where personal data is used for their training. The guidance addresses key stages in AI development, including defining a clear and legitimate purpose for data use, identifying the correct legal basis, establishing roles and responsibilities, and applying data minimization principles. CNIL also provides practical advice on lawful data reuse, transparency with individuals, respecting data subject rights, and implementing robust security measures. Organizations are encouraged to conduct Data Protection Impact Assessments, document their processes, and adopt safeguards such as anonymization, pseudonymization, and regular risk assessments to mitigate privacy risks throughout the AI lifecycle.

EU AI Act copyright template published

In July 2025, the European Commission introduced a standardized template to help developers of General-Purpose AI (GPAI) models provide clear, public summaries of their training data sources, catalog primary data collections, and document additional materials used in GPAI model training. The template offers AI providers a consistent method for maintaining trustworthy and transparent AI systems, as required by the AI Act.

The template will also support legitimate stakeholders, including copyright owners, in protecting their legal rights under EU legislation.

Want to know more?
Contact us

Shiri Menache

Head of Marketing and Business Development

Matan Bar-Nir

Press Officer, OH! PR