From Lawsuits to Licensing: How AI Is Forcing a Reckoning Over Content Ownership
The rapid leap in artificial intelligence capabilities in 2025 has sent a shockwave through global media industries. As generative models began producing text, images, music, and video at unprecedented scale and fidelity, a long-simmering tension finally erupted into open legal warfare.
For publishers, studios, and creators, copyright remains the last defensible moat. And increasingly, they are testing that moat in court.
On December 22, Pulitzer Prize–winning journalist John Carreyrou, best known for Bad Blood, joined a group of authors and reporters in filing a sweeping lawsuit against six leading AI firms—OpenAI, Google, Meta, xAI, Anthropic, and Perplexity—accusing them of illegally using copyrighted works to train their models. The case followed earlier legal actions from U.S. record labels and Hollywood studios, signaling that the creative industries had moved from protest to prosecution.
Yet even as lawsuits multiply, a parallel—and seemingly contradictory—trend has taken hold: détente.
When Litigation Fails, Licensing Begins
The logic is pragmatic, if uneasy. If AI companies cannot be stopped, they must be monetized.
On December 11, Disney—the largest copyright holder in U.S. entertainment—reached a landmark agreement with OpenAI. The deal granted OpenAI access to Disney’s vast IP library in exchange for equity, anchored by a reported $1 billion investment. In effect, Disney chose ownership over opposition.
“No generation has ever stopped technological progress, and we don’t intend to,” Disney CEO Bob Iger said following the announcement. Even disruption, he argued, must be met with adaptation.
The irony was hard to miss. On the same day, Disney sent a legal notice to Google, accusing its AI products of “massive” infringement. Lawsuits by Disney, Universal Pictures, and Warner Bros. Discovery against Midjourney and Chinese AI firm MiniMax remain unresolved.
This dual-track strategy—partnering with some AI firms while suing others—underscores a deeper reality: copyright law, built over more than 300 years to protect human intellectual labor, is colliding with a technology that does not respect medium, format, or provenance.
Copyright’s Old Rules Meet a New Machine
From printing presses to peer-to-peer file sharing, every major media transition has weakened enforcement before new norms emerged. AI, however, goes further. It not only distributes content—it learns from it.
Both the training of models and the outputs they generate raise unresolved questions. Who collects royalties for an AI-generated song? Does a product designed from an AI-generated sketch infringe the original artist’s rights? And when imitation becomes statistically derived rather than manually copied, where does authorship begin or end?
In the United States, content companies have largely doubled down on traditional copyright doctrine. In China, by contrast, many upstream content producers—film, music, and gaming studios—were absorbed into platform ecosystems during the internet era, trading rights for distribution and scale. As AI platforms now generate and circulate content themselves, copyright has become secondary to data accumulation.
For Chinese tech companies, particularly content platforms, the more immediate problem is not ownership but volume: how to manage an ocean of AI-generated material without collapsing trust in the ecosystem.
The Flood of AI Content
The scale is staggering. According to Sensor Tower, AI-integrated apps were downloaded 7.5 billion times globally in the first half of 2025, up 52% year over year. ChatGPT’s image tool alone enabled 130 million users to generate 700 million images in four days. Kuaishou’s Kling has produced hundreds of millions of videos and images. Google’s Nano Banana image editor has crossed five billion generated images.
Predictably, quality has suffered. On December 14, Merriam-Webster named “slop” its 2025 Word of the Year—a term now widely used to describe low-quality AI-generated content flooding the internet. As the editors dryly observed: people claim to despise it, yet cannot stop consuming it.
More troubling is the erosion of authenticity. AI-generated footage is increasingly used in television documentaries and short-video platforms, often indistinguishable from reality. As multimodal models advance, platforms are locked in an escalating “AI versus AI” arms race, with no clear victor.
From Music Wars to Studio Deals
Nowhere has the shift from confrontation to collaboration been faster than in music.
After suing AI music platforms Udio and Suno in mid-2024, the major record labels began to reverse course. In October 2025, Universal Music Group reached a strategic partnership with Udio, settling its lawsuit. Warner Music Group followed with Suno weeks later.
UMG executives framed the pivot as pragmatic evolution. Market research shows fans may be curious about AI music, but they remain attached to real artists, not cloned voices. AI's value, in this view, lies in discovery, recommendation, and monetization, not replacement.
The terms are clear: licensed training data, revenue sharing, and “responsibly developed” models. A new UMG–Udio platform is slated for 2026.
Hollywood followed a similar arc. OpenAI's Sora 2 triggered outrage when it generated videos resembling famous actors and copyrighted characters. Unions called for boycotts. Then, just weeks later, OpenAI introduced an opt-in system for likeness rights. By December 11, Disney had granted Sora a three-year license, opening more than 200 characters for AI-generated short videos.
Access to Disney IP does not come cheap. Industry sources estimate minimum annual guarantees of roughly RMB 3 million, plus a 6% royalty on retail sales. In exchange, OpenAI accepted Disney’s capital—and its constraints.
For Sora, the stakes are existential. Sensor Tower data cited by a16z partner Olivia Moore shows Sora’s 30-day retention rate at just 1%, compared with TikTok’s 32%. Without premium IP, interactivity—and user retention—collapses.
The Harder Question: Training Data
While output licensing is being normalized, the legality of training data remains unresolved.
Studios accuse AI firms of scraping copyrighted material at scale. MiniMax, now preparing for a Hong Kong IPO, denies the claims, arguing that character images are not standalone protected works. If courts disagree, damages could exceed $75 million—more than double its 2024 revenue.
The deeper issue is structural. Modern LLMs depend on vast, low-cost datasets assembled from the open internet. Filtering copyrighted material at scale is often impractical. Yet as models mature, companies are now chasing higher-quality, proprietary data—raising the value of licensed content and the leverage of rights holders.
This tension has drawn regulators in. In December, the EU opened an antitrust investigation into Google, examining whether its AI search summaries exploit publisher content without fair compensation, and whether YouTube’s data policies disadvantage competitors.
Fighting AI With AI
As AI-generated content becomes harder to detect, governance itself must become automated.
China’s web-novel platforms, long dependent on copyright enforcement, now face AI-assisted plagiarism that easily evades detection—especially when text is transformed across formats. Experts note that while images and audio leave forensic traces, text does not.
New regulations, including China’s Measures for Identifying AI-Generated Synthetic Content, require labeling of AI outputs. But implicit metadata for text remains technically difficult and costly, leaving enforcement uneven.
Platforms are responding with behavioral analysis, machine detection, and content libraries. ByteDance, for example, uses scene-level AI detection and celebrity protection databases to block unauthorized usage. Yet malicious actors continue to adapt, fueled by cheap face-swapping tools and generative software.
Commentary | The Inevitable Bargain Between Creators and Machines
From where I sit, the trajectory is clear—and uncomfortable. Copyright will not defeat AI. Nor will AI eliminate copyright. What is emerging instead is a negotiated equilibrium, shaped less by legal doctrine than by leverage.
The most powerful IP holders—Disney, major record labels, elite creators—will secure licenses, equity, and control. Everyone else will be absorbed into the data exhaust. Courts may clarify boundaries, but markets will set the terms.
The real risk is not infringement; it is dilution. When content becomes infinitely reproducible and cheaply generated, attention—not authorship—becomes the scarce resource. In that world, the value of originality depends not on whether it can be copied, but on whether it can still command belief.
AI does not just challenge copyright. It challenges the economic meaning of creativity itself.