In 2025, the AI field continued its rapid evolution, building on the momentum of the previous year. Following the enactment of the EU AI Act in 2024, this year saw significant progress in its implementation, as well as the development and adoption of regulatory initiatives across key jurisdictions seeking to promote responsible innovation. This quarterly update highlights key developments, including the EU Commission’s proposed delay of high-risk AI system obligations, the approval of a comprehensive AI legislation package in California, and landmark court rulings on copyright liability in the UK and Germany.
US:
Wave of Wrongful Death Lawsuits Challenges Tech Industry Accountability
The AI industry is facing a wave of lawsuits alleging that chatbots are directly responsible for user deaths and severe psychological harm. These cases raise critical questions about the legal liability of tech companies and the need for greater regulatory oversight. On August 7, 2025, a lawsuit was filed against OpenAI after the suicide of a 16-year-old, alleging that ChatGPT contributed to the death. The suit claims the AI’s design prioritized user engagement over safety, fostered psychological dependence, provided detailed suicide instructions, and lacked adequate safety protocols. That same day, OpenAI issued a statement committing to add parental controls and improve tools for detecting and responding to mental health crises. Just two months later, seven more lawsuits were filed against OpenAI and its CEO on behalf of four individuals who died by suicide and three survivors of suicide attempts. These suits allege that the company prematurely released its GPT-4o model despite internal warnings, and that it designed the model with emotionally immersive features that fostered psychological dependence and addiction.
California Advances AI Regulation: Governor Signs Comprehensive Legislative Package
The Governor of California recently signed 18 new laws addressing artificial intelligence. These laws cover a wide range of matters, from transparency requirements to consumer privacy protections, and will significantly shape the state’s approach to AI.
Uniform Definition of “Artificial Intelligence”
A notable component of the legislative package is AB 2885, which establishes a uniform definition of “artificial intelligence” that closely aligns with that of the EU AI Act. It defines AI as a machine-based system that can infer from input how to generate outputs affecting physical or virtual environments.
Transparency Requirements for AI Developers
Another key law is AB 2013, which requires AI developers to disclose detailed information about the training data they use. By January 1, 2026, developers must publish a comprehensive summary of the datasets used to create their AI services or systems.
In addition, SB 942 imposes further obligations on large AI developers, requiring them to provide public tools to identify AI-generated content and to watermark such content with information such as the provider’s name, the name of the system, and the content’s date of creation. The law also mandates that developers who license their systems require licensees to preserve the watermarks, and to terminate the license within 96 hours of learning that the watermark function has been disabled.
SB 53 adds further oversight for developers of large-scale “frontier” AI models trained with massive computing power. It requires transparency reports for new or substantially modified models, prompt reporting of critical safety incidents to the state, and the implementation of whistleblower protections. Large developers must also publish annual AI risk frameworks, submit quarterly risk summaries, and maintain anonymous reporting channels. The law, effective January 1, 2026, authorizes civil penalties of up to $1 million per violation and private enforcement of whistleblower protections.
Expanded Privacy Protections
On privacy matters, AB 1008 expands the scope of the California Consumer Privacy Act (CCPA). The amendment clarifies that the law also applies to the use of personal information in the context of AI systems capable of producing outputs that include personal information, thereby enhancing consumer protection when using AI.
Regulation of AI Use
AB 2905 focuses on entities using AI technologies and requires disclosure when AI-generated synthetic voices are used in telemarketing calls. This measure is intended to protect consumers and ensure transparency in commercial communications.
The U.S. Patent and Trademark Office Issues Revised Guidance on AI Inventorship
In November 2025, the U.S. Patent and Trademark Office (USPTO) released its updated guidance on the standard for patent inventorship for inventions made using AI. The USPTO clarified that only natural persons can be recognized as inventors. It emphasized that AI systems, no matter how advanced, are merely tools meant to support human creativity rather than independent contributors. The guidance reaffirmed that the traditional legal test of “conception,” requiring a human inventor to possess a specific idea of an operative and complete invention, remains the cornerstone of inventorship. Importantly, applications naming AI as a sole inventor will be rejected.
This update signals the USPTO’s ongoing effort to align AI innovation with established patent law.
UK:
Getty Images v Stability AI: High Court Limits Copyright Liability
In a November 2025 ruling, the UK High Court largely rejected Getty Images’ lawsuit against Stability AI over its Stable Diffusion model, which is hosted and operated outside the UK. The case centered on two main arguments: (1) the model’s weights contained copyright-infringing images, making the model an “infringing copy”; and (2) Stability is liable for secondary infringement because the model is an “article” imported into the UK that contained Getty’s copyrighted works.
The court rejected the first claim, finding that the model’s weights did not store or reproduce copyright-infringing images but contained only statistical patterns learned during the model’s training.
The court also dismissed the second claim, since Stability only provided remote access to the model and did not make it available for download in the UK.
The ruling suggests that hosting AI models remotely carries lower infringement risk than domestically distributing them for download.
EU:
EU Commission Considering One-year Delay for High-risk AI Rules
On November 19, 2025, the European Commission published its digital omnibus proposal for the EU AI Act, introducing several legislative adjustments. Notably, the proposal extends the compliance deadline for AI systems deemed “high-risk,” which are subject to strict obligations under the Act, depending on the classification track under which they qualify as high-risk. This delay, expected to postpone enforcement for some high-risk AI systems by up to 16 months (from August 2026 to December 2027), reflects growing concerns among member states, industry groups, and international stakeholders that the necessary technical standards are not yet ready.
The proposal is subject to European Parliament and EU member states’ approval. If adopted, it would reflect a more innovation-friendly approach while preserving individual rights and maintaining firm compliance deadlines for businesses.
Munich Court: AI Model Memorization of Lyrics Infringes Copyright
In November 2025, the Munich I Regional Court ruled in GEMA v. OpenAI that OpenAI’s memorization and output of song lyrics violated German and EU copyright law. The court rejected OpenAI’s defense that models only “learn” data rather than storing it, finding that lyrics reproducibly embedded in model parameters constitute unauthorized reproduction under copyright law.
The court held that text and data mining exceptions don’t apply because they only cover preparatory analysis that doesn’t harm exploitation rights. When models retain and reproduce protected works, this exceeds mere analysis and infringes rights holders’ interests. Other defenses, such as incidental use and implied consent, were also rejected.
The court additionally held that the model operator, not end users, is liable for infringing outputs, since operators control training data, architecture, and memorization risks. The court granted GEMA injunctive relief, information rights, and damages.
The judgment establishes that neither internal storage nor outputs of memorized lyrics are protected by data mining exceptions, and operators face liability for unauthorized reproduction.
The above content is a summary provided for informational purposes only and does not constitute legal advice. It should not be relied upon without obtaining further professional legal counsel.
