California's groundbreaking AI transparency law is now in effect. Assembly Bill 2013, signed by Governor Gavin Newsom in September 2024, took effect on January 1, 2026, making California the first state to require comprehensive disclosure of training data used in generative AI systems.
If you're deploying AI in your customer experience operations, this law is already reshaping your technology landscape. This post breaks down what the act is and what it means for CX leaders and operators—especially teams running complex CX stacks across CCaaS, CRM, data, and AI vendors.
AB 2013, formally known as the Generative Artificial Intelligence Training Data Transparency Act, mandates that developers of generative AI systems publish detailed documentation about their training data before making those systems available to Californians.
The scope is broad. According to the legislation, any entity that "designs, codes, produces or substantially modifies" a generative AI system for public use must comply.
Generative AI is defined as any system that can "generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence's training data."
Notably, the law applies retroactively to any system released or substantially modified on or after January 1, 2022. That four-year lookback period means most AI systems currently in production fall under its requirements.
If you're using AI-powered chatbots, voice assistants, sentiment analysis tools, or automated customer service platforms, AB 2013 is already affecting your technology stack. As of January 1, 2026, AI developers must publicly disclose how they trained their models, and that transparency exposes everything from data sources to intellectual property practices.
The market response has been telling. OpenAI and Anthropic published their AB 2013 disclosures within days of the law taking effect, opting for high-level summaries that acknowledge using "publicly available information that may be protected by copyright" and data "in the public domain" without revealing specific datasets. Their approach appears designed to satisfy the statute's requirements while protecting competitive information.
For CX leaders, the implications are immediate. The disclosure requirements reveal how your AI vendors trained their models: your customer service chatbot's training data provenance is now public record. If it was trained on datasets containing copyrighted material without proper licensing, on biased data sources, or on datasets that include personal information in ways that conflict with privacy commitments, that information is (or should be) documented on your vendor's website.
Organizations that partnered with AI vendors using questionable training practices now face potential trust issues with customers who can review these disclosures. Conversely, transparency enables more informed vendor selection and the ability to differentiate based on responsible AI deployment.
AB 2013 doesn't specify monetary penalties, but enforcement is already active. The California Attorney General has announced plans to hire AI experts and investigative technologists specifically to support enforcement of California's AI legislation, including AB 2013.
The law allows enforcement through California's Unfair Competition Law, enabling both public enforcement by the Attorney General and private rights of action when violations cause injury or financial loss. Given California's active plaintiffs' bar and intense public scrutiny of AI ethics, expect enforcement actions to ramp up throughout 2026.
xAI filed a federal lawsuit on December 29, 2025 (just days before the law took effect) challenging AB 2013 as unconstitutional. The company argues the law forces disclosure of valuable trade secrets, violates the First Amendment by compelling speech, and is unconstitutionally vague. Whether this legal challenge succeeds remains to be seen, but it signals how contentious training data transparency has become.
Legal experts note that xAI's constitutional arguments face significant hurdles, particularly since OpenAI and Anthropic have already published compliant disclosures without apparent difficulty. If those high-level summaries satisfy the statute, it weakens xAI's claim that compliance necessarily requires revealing valuable trade secrets. Nevertheless, the lawsuit creates regulatory uncertainty that CX leaders must navigate.
With the law now in effect, CX leaders face immediate action items:
Check whether your AI providers have published AB 2013 documentation on their websites. OpenAI's disclosure is titled "Training Data Summary Pursuant to California Civil Code Section 3111," while Anthropic published "Training Data Documentation Pursuant to California Civil Code Section 3111 (AB 2013)." If your vendors haven't disclosed anything, that's a red flag about their compliance posture. A minimal sketch of how to track this across a larger vendor list appears after these action items.
The early disclosures from major providers remain at a high level, disclosing generalized categories like "publicly available information" rather than specific datasets. Consider whether these summaries provide enough transparency to assess whether the AI systems align with your organization's ethical commitments and risk tolerance.
Review your vendor agreements for provisions addressing regulatory compliance, indemnification for legal violations, and your rights to switch providers if vendors fail to meet legal requirements. With AB 2013 now enforceable, non-compliant vendors expose you to reputational and operational risk.
The xAI lawsuit could reshape how AB 2013 is interpreted and enforced. Stay informed about court proceedings and any guidance from the California Attorney General about what constitutes acceptable disclosure. Many smaller AI developers are taking a "wait and see" approach, watching how the litigation unfolds before publishing their own disclosures.
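As a starting point for that first action item, here is a minimal sketch of a recurring disclosure check, written in Python against the standard library. The vendor names, URLs, and keyword list are assumptions for illustration, not references to any real vendor page, and a keyword match is only a prompt for human review, not a compliance determination.

```python
# Minimal sketch of an AB 2013 disclosure check across an AI vendor list.
# Assumption: the vendor names and URLs below are placeholders; substitute
# the disclosure pages your own vendors publish.
import urllib.request

# Hypothetical vendor -> disclosure-page mapping; replace with your own stack.
VENDOR_DISCLOSURE_PAGES = {
    "example-ccaas-vendor": "https://example.com/ab-2013-training-data",
    "example-chatbot-vendor": "https://example.org/legal/training-data-summary",
}

# Phrases that commonly appear in AB 2013 / Section 3111 disclosures.
KEYWORDS = ("ab 2013", "civil code section 3111", "training data")


def check_disclosure(url: str) -> str:
    """Fetch a vendor page and report whether it uses AB 2013 disclosure language."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="ignore").lower()
    except OSError as exc:
        return f"unreachable ({exc})"
    hits = [kw for kw in KEYWORDS if kw in page]
    return "mentions: " + ", ".join(hits) if hits else "no disclosure language found"


if __name__ == "__main__":
    for vendor, url in VENDOR_DISCLOSURE_PAGES.items():
        print(f"{vendor}: {check_disclosure(url)}")
```

Run on a schedule, a check like this gives procurement and legal a simple signal of which vendors have published anything at all and which still need a direct inquiry.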
AB 2013 doesn't exist in isolation. California enacted 18 AI-related bills in 2024 and has kept building a comprehensive regulatory framework that now includes SB 942 (requiring AI content detection and watermarking for large providers) and 2025's SB 53 (the Transparency in Frontier Artificial Intelligence Act, requiring large frontier model developers to publish risk frameworks and report critical incidents).
However, a significant challenge emerged in December 2025, when President Trump signed an executive order, "Ensuring a National Policy Framework for Artificial Intelligence," aimed at establishing a uniform federal policy framework that would preempt state AI laws deemed inconsistent with federal policy. The executive order directs the Attorney General to establish an AI litigation task force to challenge state laws on grounds including unconstitutional regulation of interstate commerce and federal preemption.
The timing suggests California's AI laws, including AB 2013, are specific targets. The executive order creates uncertainty about how long state-level regulations will remain enforceable, though as of February 2026, California's laws remain in effect and actively enforced.
Despite this federal challenge, the regulatory trend toward AI transparency and accountability is clear across multiple jurisdictions. California's framework sits alongside Colorado's AI Act and will likely inform other states and potentially federal legislation. CX organizations should prepare for ongoing compliance while monitoring federal developments that could reshape the regulatory landscape.
While AB 2013 creates compliance burdens, it also creates competitive differentiation opportunities. Organizations that proactively ensure their AI deployments use ethically sourced, properly licensed training data can market that transparency as a trust signal.
In an environment where consumers are increasingly skeptical of AI, being able to demonstrate that your customer service systems were built on transparent, ethical foundations becomes a competitive advantage. The brands that will win in AI-powered CX aren't just those with the most sophisticated technology—they're the ones customers trust.
California's training data transparency law represents a fundamental shift in AI governance. For CX organizations, compliance isn't just about avoiding penalties—it's about building customer trust in an environment where opacity is becoming a competitive liability.
The question isn't whether to address AB 2013. The law is in effect, major vendors have published disclosures, and enforcement mechanisms are active. The question is whether you'll use this regulatory reality (despite its uncertainty) to build a more transparent, trustworthy customer experience that differentiates your organization in a market increasingly skeptical of AI.