Quarterly AI Update | Q1 2026

March 31, 2026


US

Federal Court Declines to Apply Attorney-Client Privilege to AI Communications

In February 2026, a federal court in New York ruled in a case where a defendant had used Claude AI to prepare his criminal defense. The communications were later seized by federal authorities, which sought to introduce them as evidence. The defendant claimed that these communications were protected by attorney-client privilege.

The court rejected the privilege claim, finding that the protection applies only to communications with licensed attorneys, which Claude is not and cannot be. The court also noted that the communications could not be deemed “confidential,” since Claude’s legal terms provide that they may be used for AI training and disclosed to third parties, including governmental authorities (as occurred in this case).

While this decision is tailored to the specific facts of the case, it underscores the need for caution when sharing sensitive information with AI tools.

White House Publishes National AI Policy Framework

On March 20, 2026, the White House issued a National Policy Framework outlining the administration’s priorities for federal AI governance. The Framework highlights several key policy areas and offers legislative recommendations, notably:

  • The administration takes the position that using copyrighted materials to train AI models does not violate copyright laws. However, since this matter is currently being litigated in high-profile lawsuits, it recommends leaving the final determination to the courts.
  • The Framework rejects the establishment of a dedicated federal AI rulemaking body, instead expressing support for sector-specific AI standards set by existing regulators (such as the SEC).
  • Following the rejection of the proposed moratorium on state-level AI regulation (discussed in a previous update), the Framework proposes a more balanced approach: states will retain authority over certain matters (such as fraud and consumer protection), while other areas (such as AI development) will remain exclusively within federal jurisdiction.

As the Framework reflects the administration’s views on key issues, it is likely to impact future AI legislative initiatives at both the federal and state levels.  

GSA Publishes Draft Proposal for Governmental Procurement of AI

On March 6, 2026, the U.S. General Services Administration (GSA) issued a draft proposal laying out terms and conditions governing the purchase of AI systems by federal agencies. The proposal requires AI systems to be developed and produced in the U.S., prohibits using government data to train or fine-tune AI systems, and grants agencies full ownership of any improvements made under the contract. It also imposes strict oversight and reporting obligations and grants agencies broad rights to use acquired AI systems for “any lawful government purpose.”

If adopted, these rules would impose significant obligations on AI vendors and could influence public-sector procurement rules in other jurisdictions.

New York Advances Amendment Requiring Disclosure of Potential Output Inaccuracy

On March 9, 2026, New York’s legislature passed an amendment to the state’s business law requiring owners, licensees, or operators of generative AI platforms to display a notice on the platform’s interface disclosing that the output may be inaccurate. The amendment, effective once signed by the Governor, authorizes civil penalties of up to $100,000 per violation.

Utah Enacts Comprehensive AI Bill Addressing Synthetic Intimate Imagery

On March 24, 2026, Utah enacted a bill, effective June 2026, introducing a comprehensive set of AI-focused amendments designed to protect consumers from AI-related harms. A primary focus of the legislation is the Digital Voyeurism Prevention Act, which establishes legal safeguards against the non-consensual creation and distribution of “counterfeit intimate images.” The Act requires AI services and platforms to verify an individual’s consent before generating such images and to comply with takedown requests. Non-compliance with these provisions may give rise to civil actions, including punitive damages.

EU

EDPB and EDPS Publish Joint Opinion on AI Act Omnibus

In January 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued their response to the EU Commission’s proposed “Digital Omnibus on AI,” a package of amendments designed to ease compliance burdens under the EU AI Act. Key proposals, together with the authorities’ concerns about them, include:

  • Delaying the compliance deadline for high-risk AI systems by up to fifteen months (discussed in a previous update). The authorities warn that the delay could harm individual rights and that the lack of firm compliance deadlines would undermine legal certainty for businesses.

  • Extending the AI Act’s permission to process special categories of personal data (e.g., health data, ethnic origin) for bias detection and correction to all AI systems, rather than only high-risk AI systems. The authorities caution that the broadened permission could be abused and should be limited to cases where the risk of bias is sufficiently high.

  • Exempting providers who self-assess their systems as non-high-risk (for instance, where the AI system only performs procedural tasks) from registering in the EU database for high-risk AI systems. The authorities note that the change could reduce provider accountability and incentivize providers to exploit self-assessment to avoid regulatory oversight.

The response reflects the regulators’ continued effort to find a middle ground between innovation and regulation. It comes against the backdrop of growing criticism that the EU AI Act over-burdens EU businesses relative to their non-EU competitors.

EU Commission Releases Revised Code of Practice for AI Content Transparency

In March 2026, the European Commission published the second draft of its voluntary Code of Practice on marking and labeling AI-generated content. The code aligns with Article 50 of the EU AI Act, which imposes transparency obligations on providers effective August 2026, and provides streamlined guidance for AI system providers and deployers on the disclosure and marking of certain content.

For deployers, the code addresses labeling and disclosure requirements for deepfakes and other AI-generated content, while proposing a tailored exemption from these rules for AI-generated text that has undergone human review.  

UK

CMA Issues Consumer Law Guidance for AI Agents

In March 2026, the UK’s Competition and Markets Authority (CMA) released guidance on the applicability of consumer law to AI agents. While the guidance recognizes the benefits of deploying agentic AI in consumer-facing applications (such as chatbots), it emphasizes that businesses remain fully liable for breaches of consumer law caused by their AI agents.

To that end, the guidance sets out key compliance principles for businesses, including (1) training their AI agents to comply with consumer law (for instance, by obtaining customer consent and avoiding misleading practices), (2) maintaining ongoing human oversight (including through human-in-the-loop mechanisms), and (3) clearly and accurately disclosing to consumers when they are interacting with an AI agent.

The guidance signals a growing regulatory focus on AI agents and may inspire similar frameworks in other jurisdictions.

___________________

The above content is a summary provided for informational purposes only and does not constitute legal advice. It should not be relied upon without obtaining further professional legal counsel.

Want to know more?
Contact us

Shiri Menache

Head of Marketing and Business Development

Matan Bar-Nir

Press Officer, OH! PR