Weeks 5 and 6: Digital Ghosts in the Machine: An Analysis of the 2023 SAG-AFTRA Strike and the Battle for Digital Rights in the Age of AI
The Precipice – Hollywood's 2023 Labor Standoff
The year 2023 will be remembered in Hollywood history as a period of profound industrial unrest when the engines of production ground to a halt under the weight of dual labor strikes. For the first time since 1960, the Writers Guild of America (WGA) and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) stood together on picket lines, creating a near-total shutdown of the American film and television industry.1 This historic standoff was not merely a dispute over wages and working conditions; it was a battle for the future of creative professions in an era of disruptive technology, pitting human artistry against the specter of artificial intelligence.
The Dual Strikes: A Timeline of Unrest
The conflict began with the writers. The WGA commenced negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) on March 20, 2023, presenting demands to address compensation erosion in the streaming era.2 After members voted to authorize a strike with an unprecedented 97.9% approval, and with talks failing to produce a new contract, the WGA officially called a strike beginning at 12:01 a.m. on May 2, 2023.2 The effects were immediate, shuttering late-night talk shows and productions like Saturday Night Live.2
As the writers' strike wore on, SAG-AFTRA's own contract deadline loomed. The actors' union began its negotiations with the AMPTP on June 7, 2023.4 Mirroring the writers' resolve, nearly 98% of SAG-AFTRA members voted to authorize a strike ahead of the talks, providing their negotiators with significant leverage.2 Despite a last-minute extension and the introduction of a federal mediator, negotiations collapsed.3 On July 14, 2023, SAG-AFTRA initiated its strike, joining the WGA and amplifying the pressure on the studios.3 The dual work stoppage lasted for months, with the WGA reaching a tentative deal on September 24 and officially ending its strike on September 27.2 SAG-AFTRA's fight continued until a tentative agreement was reached on November 8, officially ending their 118-day strike on November 9, 2023.6
Core Grievances: From Residuals to Robots
The unions' grievances were rooted in a decade of industry transformation. The meteoric rise of streaming services had fundamentally altered the business model, leading to a sharp decline in residuals—the long-term payments actors and writers receive for the reuse of their work.6 This issue, a major factor in the 2007-08 WGA strike, had created a state of economic precarity for a majority of working performers and writers, who felt their compensation was no longer tied to the success of their projects.6 SAG-AFTRA explicitly entered negotiations seeking a contract "relevant to the new business model" to protect members from "erosion of income due to inflation and reduced residuals".5
This pre-existing economic vulnerability made the workforce acutely sensitive to the new, existential threat posed by generative artificial intelligence. AI became the "catalyst" for the strikes, transforming a contract dispute into a fight for "the survival of our profession".8 The unions' goal was not to prohibit AI entirely but to establish human-centric guardrails and regulations.10 As SAG-AFTRA leadership stated, the aim was to ensure technology remains a "tool, not the star".12 The fear was that without such protections, the studios' focus would shift entirely from art to finance, devaluing human creativity in favor of automated, cost-cutting facsimiles.10 The fight over AI was thus inseparable from the fight over streaming residuals; it was a battle to prevent a second wave of technological disruption from finishing the economic damage started by the first.
The AI Impasse – Defining the Digital Battleground
At the heart of the 2023 strikes lay a profound philosophical and economic chasm over the role of artificial intelligence. The negotiations revealed two starkly different visions for the future: one where AI serves as a tool under human control, and another where human identity itself becomes a perpetually exploitable digital asset. The specific proposals exchanged between the AMPTP and the unions defined this digital battleground with alarming clarity.
The AMPTP's "Groundbreaking" Proposal: A Right to Digital Eternity
The most galvanizing moment of the SAG-AFTRA negotiation came with the revelation of what the union's chief negotiator, Duncan Crabtree-Ireland, described as the AMPTP's "groundbreaking" AI proposal.13 According to Crabtree-Ireland, the studios proposed that background performers could be scanned on set, receive a single day's pay, and the company would then own that scan—their image and likeness—in perpetuity. This digital replica could then be used on any future project "for the rest of eternity" without any further consent or compensation.6
This proposal caused immediate and widespread outrage, solidifying union resolve and shaping the public narrative of the strike.2 It framed the studios not as innovators but as entities seeking to convert human beings into royalty-free digital puppets. The AMPTP later disputed this characterization, claiming their proposal would only permit the use of a scan within the specific film for which the background actor was employed and that any subsequent uses would require separate consent and payment.6 However, the damage was done. The initial proposal, as presented by the union, was perceived as such a profound overreach that it transformed the negotiation into a moral crusade. It made SAG-AFTRA's counter-demands for consent and compensation appear eminently reasonable, strengthening their position both at the bargaining table and in the court of public opinion.
SAG-AFTRA's Counter-Stance: The Pillars of Consent, Compensation, and Control
In stark contrast to the AMPTP's vision, SAG-AFTRA's position was built on three non-negotiable pillars: informed consent, fair compensation, and transparency regarding any use of a performer's digital likeness.11 The union's stance was consistently framed not as anti-AI but as anti-exploitation.14 As leadership clarified, they were willing to partner with the industry on the integration of AI, but only if robust, enforceable guardrails were built into the contract to protect their members.11 This extended to the deeply controversial issue of posthumous use of digital replicas. Reports emerged that the AMPTP had proposed using AI-crafted scans of deceased performers, a move that SAG-AFTRA fiercely resisted, demanding that any such use require consent from the performer's estate or the union.10
The Video Game Front: A Preview of Battles to Come
The concurrent, and ongoing, negotiations for the Interactive Media Agreement (covering video games) offer a glimpse into the long-term ambitions of producers regarding AI. The proposals in this arena have been even more aggressive, revealing the logical endpoint of the concepts introduced in the TV/Theatrical talks. Video game companies have reportedly sought terms that include buyouts for the unlimited use of a performer's digital replica in perpetuity and, most alarmingly, language that would allow them to use a performer's digital replica during a strike, effectively enabling digital scabbing.17
Furthermore, a key sticking point in the video game talks has been the companies' refusal to classify all motion-capture and stunt artists as "performers" covered by the agreement's AI protections.19 This attempt to narrow the scope of who is protected demonstrates a clear strategy to create loopholes for the unconstrained use of AI. The video game negotiations are not a separate conflict but a more advanced stage of the same battle. They provide a clear roadmap of where studios across all media will likely attempt to push the boundaries in future contract cycles: the TV/Theatrical strike was a fight over the first step, while the video game negotiations are a fight over the entire staircase.
The New Covenant – Deconstructing the 2023 SAG-AFTRA AI Agreement
After 118 days of striking, SAG-AFTRA secured a new TV/Theatrical contract valued at over one billion dollars, which included what the union termed "unprecedented provisions for consent and compensation that will protect members from the threat of AI".7 Ratified on December 5, 2023, the agreement established a complex, tiered system for governing the use of AI, creating new definitions and rules for digital replicas and synthetic performers.8 These terms, when compared to the WGA's AI agreement, reveal fundamentally different strategies for protecting creative work in the digital age.
The Three Tiers of Digital Existence: Replicas and Synthetics
The final SAG-AFTRA contract meticulously defines three categories of AI-generated content, each with different levels of protection.8
Employment-Based Digital Replica (EBDR): This is a digital replica of a performer's voice or likeness created with their physical participation during their employment on a specific project. The contract requires "clear and conspicuous" consent, which must include a "reasonably specific description" of the intended use. Critically, compensation is tied to the labor it replaces; if an EBDR is used to perform work that would have taken an actor two additional days, the actor must generally be paid for those two days.22 Most importantly, consent is project-specific. To reuse an EBDR in a different film or show, the studio must obtain separate consent and negotiate new compensation, thereby preventing perpetual ownership.8
Examples in Practice:
Background Crowd Population: A background actor is hired for one day on the set of a historical epic and undergoes a 3D scan. The contract explicitly states this scan will be used to create an EBDR to populate crowd scenes in the film Gladiator II only. The actor is paid for the day of the scan. Later, the VFX team uses the EBDR to fill out a wide shot of the Colosseum that would have required 50 additional background actors for three days. Under the new agreement, the studio cannot use this scan in perpetuity for other projects and must compensate the actor for the work their replica performed, preventing the exact scenario the union fought against.8
Digital Stunt Double: An A-list actor in a superhero film performs a complex fight sequence on a motion-capture stage. The data is used to create an EBDR that performs a climactic, physically impossible stunt (e.g., jumping between two skyscrapers). The actor's contract contains a rider, provided 48 hours in advance, with a "reasonably specific description" of this use.59 The actor is compensated for the days they would have worked if the stunt were performed practically, ensuring they are paid for the performance their digital likeness provides.22
Playing a Twin: In a film like An American Pickle, where one actor plays two identical characters, an EBDR can be created.61 The lead actor performs both roles, and an EBDR created from a scan of the actor is used in post-production to composite both performances into the same shot, allowing for seamless interaction. The actor's consent is for this specific use within this single film, and their compensation reflects the work of creating two distinct performances.61
Independently Created Digital Replica (ICDR): This is a replica created from a performer's pre-existing work (e.g., archival film footage) without their direct involvement in the creation process. While studios may own the copyright to the source material, the contract still requires them to provide notice and bargain for consent and compensation before using the ICDR in a new project.8 A significant carve-out is the "First Amendment Exception," which may permit use without consent for purposes such as parody, criticism, or in a biographical work, to the extent such use is legally protected.8
Examples in Practice:
Posthumous Performance Completion: Following Carrie Fisher's death, Lucasfilm used an ICDR to complete her role as Leia Organa in Star Wars: The Rise of Skywalker.62 This was created by combining existing footage from previous films with new digital creations. Under the new contract, the studio would be required to obtain clear consent from Fisher's estate and negotiate fair compensation for this new use of her likeness.8
Digital Resurrection: To include the character Grand Moff Tarkin in Rogue One: A Star Wars Story, filmmakers created an ICDR of actor Peter Cushing, who had died in 1994.63 They used CGI to superimpose his likeness, derived from the 1977 film Star Wars, onto a body double. This is a clear case of an ICDR, and the new rules would mandate that the studio bargain with Cushing's estate for permission and payment before using the replica.63
Biographical Film (First Amendment Exception): A director making a biopic about James Dean wants to use an ICDR of the actor, created from his old films, to narrate key moments.66 The actor's estate denies permission. The director could argue that this use is protected under the First Amendment as a historical or biographical work, potentially allowing them to proceed without consent.8 This exception, however, is likely to be a future legal battleground.
Synthetic Performers: This category encompasses entirely new, digitally created characters that are not intended to be recognizable as a specific actor. However, they may be trained on data from multiple human performers. The protections here are considerably weaker. Instead of requiring consent, the contract only mandates that producers provide notice to the union and "bargain in good faith" over appropriate compensation if the synthetic performer is used in a role that would have otherwise been filled by a human.21
Examples in Practice:
AI-Generated Creatures: A fantasy film requires a scene with a diverse army of orcs. Instead of hiring hundreds of background actors for extensive makeup and prosthetics, the studio uses a generative AI tool to create 500 unique, non-human "synthetic" orcs.65 Because these performers are not recognizable as any specific human actor, the studio's only obligation is to notify SAG-AFTRA and bargain over whether this use displaced work that would have gone to union members.20
Authentic Dialogue Adjustment: In the film The Brutalist, AI voice technology was used to adjust the Hungarian dialogue spoken by stars Adrien Brody and Felicity Jones to sound more authentic.67 While this involves real actors, the modification creates a new, synthetic vocal performance. If this technology were used to create a new character's voice from scratch, not based on a single identifiable actor, it would be a synthetic performer, requiring only notice and bargaining.68
Virtual Influencers: A game studio creates "Aura," a completely synthetic virtual character, to be the public face of their new game, Cyber-Odyssey. Aura has a unique AI-generated face, voice, and personality and appears in promotional materials, social media, and as a non-player character (NPC) in the game. Since Aura is not recognizable as any real person, she is a synthetic performer. The studio must inform SAG-AFTRA of her use and be prepared to bargain if the union contends that Aura is filling roles (e.g., brand spokesperson, voice actor) that would otherwise have been filled by hired performers.8
This tiered structure creates a clear economic incentive for studios. The path of least resistance, both contractually and financially, is to move away from replicating specific, identifiable union actors and toward developing generic, non-union "synthetic" assets. While the agreement is a landmark victory in establishing consent for replicas, it may have inadvertently charted a course for its own obsolescence by making the creation of wholly synthetic performers a more attractive option for producers.
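The tiered structure described above can be summarized schematically. The sketch below encodes the three categories and their obligations as a simple lookup table; the phrasing of each rule is my paraphrase of this section's description of the agreement, not contract language.

```python
# Schematic summary of the 2023 TV/Theatrical agreement's three AI tiers,
# as described in this analysis. All rule text is a paraphrase, not the
# actual contract wording.
RULES = {
    "Employment-Based Digital Replica (EBDR)": {
        "consent": "clear and conspicuous, with a reasonably specific description of use",
        "scope": "project-specific; reuse on another project needs new consent and pay",
        "compensation": "tied to the days of work the replica displaces",
    },
    "Independently Created Digital Replica (ICDR)": {
        "consent": "notice plus bargained consent (First Amendment exception may apply)",
        "scope": "per new project; estate consents for deceased performers",
        "compensation": "bargained per use",
    },
    "Synthetic Performer": {
        "consent": "none required",
        "scope": "not tied to an identifiable performer",
        "compensation": "notice to the union and good-faith bargaining if human work is displaced",
    },
}

def obligations(tier: str) -> str:
    """Render a one-line summary of a tier's consent and pay obligations."""
    rule = RULES[tier]
    return f"{tier}: consent = {rule['consent']}; compensation = {rule['compensation']}"

for tier in RULES:
    print(obligations(tier))
```

Laying the tiers side by side makes the economic asymmetry discussed above concrete: only the "Synthetic Performer" row carries no consent requirement at all.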
A Tale of Two Guilds: Comparing SAG-AFTRA and WGA AI Protections
The WGA, which secured its agreement weeks before the actors, took a fundamentally different approach to regulating AI. This divergence stems from the different nature of their members' creative contributions: the WGA protects the written word, while SAG-AFTRA protects personal identity and likeness.
The WGA's contract focuses on preserving the definition and value of authorship. Its core provision establishes that AI cannot be considered a "writer," and any material generated by AI cannot be classified as "literary material" or "source material" under their agreement.20 This is a crucial protection for compensation; it prevents a studio from handing a writer an AI-generated script and paying them a lower "rewrite" or "polish" fee. The writer must be credited and compensated as the first writer.24 The agreement also mandates that companies disclose to a writer if they are being given AI-generated material to work with, and a writer cannot be required to use AI as part of their process.23
This comparison reveals a critical strategic split. The WGA was able to establish a bright-line rule regarding the nature of the work product—AI-generated text is not considered "literary material." SAG-AFTRA could not make a similar claim; a digital replica is the likeness of a person and cannot be defined as something else. This forced the actors' union into a more complex, consent-based regime that regulates the use of a performer's identity. While a major achievement, this individual consent model is inherently more vulnerable to the economic pressures a working actor faces than the WGA's collective, material-based protection.
The Copyright Crucible – Training Data and the Doctrine of Fair Use
While the union agreements set the rules of engagement within Hollywood, the fundamental legality of the technologies themselves is being decided in a separate arena: federal court. A wave of high-profile lawsuits filed by creators and publishers against AI developers will ultimately define the legal bedrock of copyright in the digital age. These cases hinge on the crucial doctrine of "fair use" and will have profound implications for the entire AI industry.
The New York Times v. OpenAI & Microsoft: The Regurgitation Test
In December 2023, The New York Times filed a landmark lawsuit against OpenAI and Microsoft, alleging massive copyright infringement.27 The suit makes two primary claims. First, it alleges direct infringement occurred when the companies copied millions of Times articles without permission to create the datasets used to train their large language models (LLMs) like ChatGPT.27 Second, it alleges contributory infringement based on the models' outputs, providing extensive evidence that ChatGPT can "regurgitate" near-verbatim excerpts of paywalled articles when prompted, thus serving as a direct market substitute.28
OpenAI's defense strategy has been multifaceted. It argues that some claims are barred by the three-year statute of limitations and that it cannot be held liable for users who intentionally manipulate the model to reproduce content in violation of its terms of service.27 Most notably, OpenAI has characterized the verbatim regurgitation as a "bug" or an unintended consequence, not the model's core function.28 Early court rulings have allowed the core copyright claims to proceed, signaling that the case will be a major test for the fair use defense.27 The Times's evidence of memorization and regurgitation directly challenges the notion that the AI's use of its articles is purely "transformative."
Author Lawsuits: Piracy, Transformation, and a Split Decision
Parallel to the Times case, groups of authors, including prominent figures such as Sarah Silverman and Ta-Nehisi Coates, have filed class-action lawsuits against AI companies like Meta and Anthropic.33 A pivotal pre-trial ruling in one of these cases, Bartz v. Anthropic, has already reshaped the legal landscape. In his decision, U.S. District Judge William Alsup made a critical distinction that separated the AI development process into two distinct acts.35
First, Judge Alsup ruled that the act of training an LLM on copyrighted books, when legally acquired, constitutes fair use. He described the process as "quintessentially transformative," comparing it to a human reader learning from various sources to develop their own style and knowledge.35 Second, and crucially, he ruled that Anthropic's method of acquiring its training data—by downloading millions of books from "shadow libraries" of pirated works—was not protected by fair use and constituted theft.35 Anthropic was ordered to stand trial for the damages related to this initial act of mass infringement, even if the subsequent training was deemed lawful.37
This ruling creates a pivotal distinction between the process of training and the provenance of data. It suggests that while AI developers may win the broad argument that training is a transformative fair use, their entire business model could be built on a foundation of illegally acquired data. The legal risk for AI companies now shifts from the output of their models to the input, placing immense pressure on them to use licensed, "clean" data.
The Fair Use Doctrine Under AI-Induced Stress
These cases are putting unprecedented stress on the four-factor test used by U.S. courts to determine fair use.31
Purpose and Character of the Use: This is the central battleground. Is training an LLM a "transformative" use that creates something new, or is it a commercial exploitation that merely repackages the original? The Anthropic ruling supports the former, but the U.S. Copyright Office has expressed skepticism, rejecting the "human learning" analogy by noting that AI makes perfect, scalable copies, unlike the imperfect impressions of a human mind.40
Nature of the Copyrighted Work: The use of factual works (like news reports) is more likely to be fair use than the use of highly creative works (like novels or screenplays).39
Amount and Substantiality of Use: AI models are trained on entire works, which weighs against fair use. However, courts may consider whether technical guardrails prevent the model from reproducing substantial portions of the original in its output.40
Effect on the Potential Market: This factor is key. If the AI's output can substitute for the original work, it harms the market and weighs heavily against fair use. The NYT's evidence of regurgitation is a direct assault on this front.28 The case of Thomson Reuters v. Ross, where an AI legal research tool was found to be a direct market competitor to the database on which it was trained, demonstrates that courts are unlikely to find fair use when an AI tool directly supplants the market for the original.41
The ultimate legal standard will likely not be a simple "yes" or "no" on fair use. It will become a technical, evidence-based fight over the degree of memorization and regurgitation a model exhibits. The less an AI reproduces its training data, the more transformative—and legally defensible—it will appear.
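An evidence-based fight over memorization implies quantifying verbatim overlap between model output and training text. As a toy illustration only (not any court's or plaintiff's actual methodology), the fraction of an output's n-grams that appear verbatim in a source text gives a crude regurgitation score; the function name and texts below are invented for the example.

```python
# Toy regurgitation metric: what fraction of an output's word n-grams
# appear verbatim in a given source text? A paraphrase scores near 0;
# a verbatim copy scores 1.0. Illustrative only.
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All length-n word sequences in the token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(source: str, output: str, n: int = 5) -> float:
    """Fraction of the output's n-grams found verbatim in the source."""
    src = ngrams(source.lower().split(), n)
    out = ngrams(output.lower().split(), n)
    if not out:
        return 0.0
    return len(out & src) / len(out)

source = ("the quick brown fox jumps over the lazy dog "
          "while the cat watches from the warm windowsill")
paraphrase = "a fast brown fox leaps over a sleepy dog as a cat looks on"
copy = "the quick brown fox jumps over the lazy dog"

print(verbatim_overlap(source, paraphrase))  # 0.0: no shared 5-grams
print(verbatim_overlap(source, copy))        # 1.0: pure regurgitation
```

Real-world analyses are far more sophisticated, but the underlying intuition is the same: the lower a model's measurable overlap with its training data, the stronger its "transformative" posture.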
The Right of Identity – Publicity, Legacy, and Posthumous Performance
Running parallel to the complex copyright battles is a more personal and, for performers, perhaps more potent legal front: the right of publicity. This area of law protects an individual's name, image, and likeness from unauthorized commercial use. While copyright protects a specific creative work, the right of publicity protects the very essence of a person's identity—the persona, style, and voice that generative AI seeks to replicate.
The Estate of George Carlin v. Dudesy: A Test Case for Digital Necromancy
A crucial test case for this right emerged in early 2024 when the estate of the late comedian George Carlin sued the creators of the podcast "Dudesy".42 The podcast had released an hour-long special titled "George Carlin: I'm Glad I'm Dead," which it claimed was entirely generated by an AI that had been trained on Carlin's life's work. The lawsuit presented two distinct legal claims. The first was copyright infringement, alleging that the AI was illegally trained on five decades of Carlin's copyrighted comedy routines.42 The second and more direct claim was a violation of Carlin's right of publicity. The suit argued that the defendants used Carlin's name, voice, cadence, and unique comedic style to create and promote the special for their own commercial advantage, all without permission. This claim was powerful due to California's robust posthumous right of publicity law, which protects the rights of a deceased personality for 70 years after their death.44
The case was resolved with a settlement before a court could rule on the merits. The defendants agreed to permanently remove the special from all platforms and refrain from using Carlin's image or likeness in the future.45 While not a legal precedent, the outcome served as a powerful deterrent, demonstrating that estates and individuals have strong grounds to fight back against this form of "digital necromancy." For performers, this case highlights that asserting their right of publicity may be a more direct and successful legal strategy than navigating the convoluted fair use arguments that dominate the copyright landscape.
The "Fictional AI" Defense and Its Legal Irrelevance
A fascinating wrinkle in the Carlin case was the defendants' subsequent claim that the "Dudesy AI" was merely a fictional character and that the special was, in fact, written entirely by a human host.45 This "fictional AI" defense, however, is largely irrelevant to the core legal injury. Whether the special was generated by a human or a machine, it was explicitly marketed and presented to the public as an AI-generated performance by George Carlin. The harm to Carlin's reputation and the unauthorized commercial exploitation of his identity occurred regardless of the actual method of creation. This reveals a new legal vector for future disputes: the harm is not just in the creation of a replica but in the deceptive marketing of AI-related content, which could bring claims of false advertising and unfair competition into play.
Legislative Responses: The NO FAKES Act and State-Level Protections
The threat of unauthorized digital replicas has spurred legislative action. SAG-AFTRA has been a key collaborator in drafting the federal Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This proposed legislation aims to establish a federal right protecting individuals from the creation and use of unauthorized digital replicas of their voice and likeness.44 This effort runs alongside state-level initiatives, such as Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act, which updates existing laws to explicitly include protections against AI misuse.44 Together, these legislative efforts aim to create a legal backstop to the contractual protections won in the strike, providing a baseline of rights for all individuals, not just union members.
A Different Path – 'Now and Then' as a Paradigm for Restorative AI
Amid the dystopian fears of replacement and replication that fueled the Hollywood strikes, a remarkable project emerged that offered a powerful counter-narrative: The Beatles' 2023 song, "Now and Then." This release demonstrated a different, more hopeful path for the use of AI in the creative arts—one focused on restoration and enhancement rather than generation and replacement.
The Technology: Machine-Assisted Learning for Audio Separation
The technology behind "Now and Then" was not the generative AI at the center of the labor disputes. Instead, it was a sophisticated audio restoration technique powered by machine learning, often referred to as "stem separation".49 The system, developed by director Peter Jackson's WingNut Films for the 2021 documentary The Beatles: Get Back, was nicknamed MAL (for Machine-Assisted Learning).51 Its purpose was to de-mix old, mono recordings. For "Now and Then," MAL was applied to a low-fidelity demo tape recorded by John Lennon in the late 1970s. On the original tape, Lennon's voice was inextricably blended with the sound of his piano, making it impossible for the surviving Beatles to work with using the technology available in the 1990s.49 The MAL software was able to precisely isolate Lennon's vocal track, cleaning it up and preserving its clarity, thus making the impossible possible.51
The Process: Augmenting, Not Replacing, Human Creativity
Crucially, the AI did not create or synthesize any part of John Lennon's performance. It meticulously restored the original, authentic human performance that was trapped on the demo tape.50 This restored vocal track then became the foundation upon which the surviving members, Paul McCartney and Ringo Starr, built the final song, adding new bass, drums, and guitar parts, including archival recordings of George Harrison's guitar from the abandoned 1995 sessions.51 The entire project was undertaken with the full support and participation of Lennon's estate and his former bandmates, ensuring it was a tribute that respected his artistic intent and legacy.49
The Distinction: A Clear Line Between Restoration and Generation
The "Now and Then" project provides a clear and essential distinction in the debate over AI. It separates the use of AI as a restorative tool from its use as a generative engine. One application enhances and preserves human artistry, enabling lost or unfinished works to be brought to life. The other threatens to replicate and devalue that same artistry by creating synthetic substitutes.50 This case study serves as a vital benchmark. It sets a tangible, positive precedent for the ethical use of AI that unions, creators, and legislators can reference. In future negotiations, it enables a more nuanced conversation, one that can distinguish between beneficial and exploitative applications. The goal can be articulated as, "We want AI to be used like it was for The Beatles, not like the AMPTP proposed for background actors."
The Unfinished Script – Future Implications and Strategic Recommendations
The 2023 strikes concluded with historic agreements that erected the first major guardrails around the use of generative AI in the creative industry. However, the contracts are not a final resolution but rather the opening chapter in a long and evolving struggle. The interplay between labor agreements, landmark court rulings, and new legislation will define the future of digital rights, with significant battles still looming on the horizon.
Loopholes and Future Battlegrounds: The 2026 Negotiations
While the 2023 SAG-AFTRA agreement was a significant victory, its long-term viability is challenged by several potential loopholes and unresolved issues that are almost certain to be central to the next round of negotiations in 2026.
The "Synthetic Performer" Pivot: The contract's protections are strongest for recognizable, individual actors whose likenesses are replicated. The provisions for wholly "Synthetic Performers" are much weaker, requiring only notice and bargaining rather than consent.21 This creates a clear incentive for studios to invest in technology that generates new, non-union digital actors, thereby circumventing the contract's most robust protections and potentially creating a new class of digital performers outside the union's purview.
The Unresolved Training Data Issue: The agreement effectively punts on the critical issue of compensation for using union members' vast library of past performances as training data for AI models.21 The contract merely stipulates that the parties will meet to discuss the matter. This will undoubtedly become an explosive issue in 2026, as the outcomes of the major copyright lawsuits clarify the legal standing of such use. Performers will likely demand a new form of "training data residuals" for their contribution to the development of AI systems.
The Pressure of "Consent": A significant concern among working actors is that the hard-won "consent" provisions will be rendered meaningless by market realities. Critics argue that studios will simply favor performers who agree to have their digital replicas made, making consent a de facto condition of employment for all but the most powerful A-list stars.55
The Shifting Landscape: Co-evolution of Law, Labor, and Technology
The future of digital rights in entertainment will be shaped by the dynamic and continuous interplay of three powerful forces:
Labor Agreements: These contracts set the immediate, practical rules of engagement and compensation on the ground in Hollywood.
Court Rulings: Landmark cases like NYT v. OpenAI and Bartz v. Anthropic will define the underlying legal framework of what is permissible under copyright and right of publicity law, setting the stage for all other discussions.
Legislation: New federal and state laws, such as the NO FAKES Act, will create new rights and safe harbors to address the novel challenges posed by AI that existing statutes did not anticipate.55
These three tracks will continuously influence one another. A definitive court ruling that AI training is not fair use would dramatically increase the unions' leverage in the next negotiation. Conversely, the detailed consent and compensation structures in the union agreements provide a valuable, industry-tested blueprint for what future legislation could look like.57
Strategic Recommendations for Stakeholders
For Creators and Unions: The immediate priority should be to address the "Synthetic Performer" loophole and the unresolved training data issue. Unions must begin building the legal and economic case now for why training AI constitutes a licensable use of a performance deserving of a new residual stream. Continued advocacy for strong federal legislation is essential to create a baseline of protection that exists independent of any single contract.
For Studios and Producers: The primary risk is no longer just public backlash but massive legal liability. Investing in transparently and ethically sourced training data is now a critical risk-mitigation strategy to avoid the fate of Anthropic, which faces a trial over its use of pirated books.36 Studios should proactively partner with unions to develop "restorative" and "assistive" AI tools, building trust and demonstrating a commitment to ethical innovation. The short-term savings from replacing human creativity may be dwarfed by the long-term costs of litigation, regulation, and further labor unrest.
For Policymakers: It is clear that existing legal frameworks are ill-equipped to handle the speed and scale of generative AI. Lawmakers should prioritize the creation of a clear federal right of publicity to protect against unauthorized digital replicas. Furthermore, a legal framework for AI training data is needed that balances the drive for innovation with the fundamental rights of creators. This could involve exploring compulsory licensing schemes or other models that ensure content owners are compensated when their work is used to build these powerful new technologies. The 2023 labor agreements, forged in the crucible of a historic strike, offer an invaluable roadmap for what that future could, and should, look like.
Works Cited
AI and Consent: What the SAG-AFTRA and WGA Agreements Tell Us About the Future of Generative AI - eRepository @ Seton Hall, accessed June 26, 2025, https://scholarship.shu.edu/cgi/viewcontent.cgi?article=2478&context=student_scholarship
A timeline of events related to the Hollywood strikes - The Medium, accessed June 26, 2025, https://themedium.ca/a-timeline-of-events-related-to-the-hollywood-strikes/
SAG-AFTRA and WGA Strikes: All the Major Dates to Know | Entertainment Tonight, accessed June 26, 2025, https://www.etonline.com/sag-and-wga-strikes-all-the-major-dates-to-know-207915
Solidarity With WGA - SAG-AFTRA, accessed June 26, 2025, https://www.sagaftra.org/solidarity-wga
Member Message: Strike Authorization Vote - SAG-AFTRA, accessed June 26, 2025, https://www.sagaftra.org/member-message-strike-authorization-vote
2023 SAG-AFTRA strike - Wikipedia, accessed June 26, 2025, https://en.wikipedia.org/wiki/2023_SAG-AFTRA_strike
SAG-AFTRA Strike: Tentative Agreement, accessed June 26, 2025, https://www.sagaftrastrike.org/
SAG-AFTRA Agreement Establishes Important Safeguards for Actors Around AI Use, accessed June 30, 2025, https://authorsguild.org/news/sag-aftra-agreement-establishes-important-ai-safeguards/
Silenced Voices and Empty Stages: The Impact of the SAG-AFTRA Strike, accessed June 26, 2025, https://bpr.studentorg.berkeley.edu/2023/10/25/silenced-voices-and-empty-stages-the-impact-of-the-sag-aftra-strike/
Is Artificial Intelligence Taking over the Entertainment Industry? - Queen Mary University of London, accessed June 26, 2025, https://www.qmul.ac.uk/lac/our-legal-blog/blogs/is-artificial-intelligence-taking-over-the-entertainment-industry.html
AI and Hollywood: 5 questions for SAG-AFTRA's chief negotiator | World Economic Forum, accessed June 26, 2025, https://www.weforum.org/stories/2024/03/ai-hollywood-strike-sag-aftra-technology/
AI Goes To Hollywood 2024 | SAG-AFTRA, accessed June 26, 2025, https://www.sagaftra.org/videos/ai-goes-hollywood-2024
Everything You Need To Know: AI, Streaming and SAG-AFTRA, accessed June 26, 2025, https://www.thelawyerportal.com/blog/everything-you-need-to-know-ai-streaming-and-sag-aftra/
SAG-AFTRA Strikes Video Games Over A.I., accessed June 26, 2025, https://www.sagaftra.org/sag-aftra-strikes-video-games-over-ai
AI and Ethics: the SAG-AFTRA Strike | Video Game Law - The University of British Columbia, accessed June 26, 2025, https://videogamelaw.allard.ubc.ca/2025/03/25/ai-and-ethics-the-sag-aftra-strike/
SAG-AFTRA Rejected AMPTP's Recent Offer for Insufficient AI Protections - AI In Hollywood, accessed June 26, 2025, https://www.aiinhollywood.com/home/sag-rejected-amptps-recent-offer-for-insufficient-ai-protections
SAG-AFTRA Outlines Remaining Sticking Points Over AI in Video Game Contract - TheWrap, accessed June 26, 2025, https://www.thewrap.com/sag-aftra-video-game-strike-counterproposal/
SAG-AFTRA publishes counterproposals in ongoing negotiations around performer rights and AI | GamesIndustry.biz, accessed June 26, 2025, https://www.gamesindustry.biz/sag-aftra-publishes-counterproposals-in-ongoing-negotiations-around-performer-rights-and-ai
Game On: SAG-AFTRA's Video Game Performer Members Strike Over AI Concerns, accessed June 26, 2025, https://www.entertainmentlawinsights.com/2024/08/game-on-sag-aftras-video-game-performer-members-strike-over-ai-concerns/
How the 2023 SAG-AFTRA and WGA Contracts Address Generative AI | Perkins Coie, accessed June 30, 2025, https://perkinscoie.com/insights/blog/generative-ai-movies-and-tv-how-2023-sag-aftra-and-wga-contracts-address-generative
The SAG-AFTRA Strike is Over, But the AI Fight in Hollywood is Just Beginning, accessed June 26, 2025, https://cdt.org/insights/the-sag-aftra-strike-is-over-but-the-ai-fight-in-hollywood-is-just-beginning/
What The Actors Really Won On A.I. - A detailed look at the tentative SAG-AFTRA deal, from streaming residuals to health benefits to the extraordinary restrictions on the use of A.I. - Reddit, accessed June 26, 2025, https://www.reddit.com/r/boxoffice/comments/17u4kim/what_the_actors_really_won_on_ai_a_detailed_look/
Summary of the 2023 WGA MBA, accessed June 26, 2025, https://www.wgacontract2023.org/the-campaign/summary-of-the-2023-wga-mba
Artificial Intelligence - WGA, accessed June 26, 2025, https://www.wga.org/contracts/know-your-rights/artificial-intelligence
Artificial Intelligence - Writers Guild of America East, accessed June 26, 2025, https://www.wgaeast.org/know-your-rights/artificial-intelligence/
New WGA Labor Agreement Gives Hollywood Writers Important Protections in the Era of AI, accessed June 26, 2025, https://cdt.org/insights/new-wga-labor-agreement-gives-hollywood-writers-important-protections-in-the-era-of-ai/
New York Times Clears First Legal Hurdle in Lawsuit Against OpenAI - The Fashion Law, accessed June 26, 2025, https://www.thefashionlaw.com/new-york-times-clears-first-legal-hurdle-in-lawsuit-against-openai/
The New York Times Case against OpenAI is Different. Here's Why. - Patent Docs, accessed June 26, 2025, https://www.patentdocs.org/2024/02/the-new-york-times-case-against-openai-is-different-heres-why.html
Does ChatGPT violate New York Times' copyrights? - Harvard Law School, accessed June 26, 2025, https://hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/
New York Times Lawsuit Against OpenAI and Microsoft Could Redefine AI's Use of Copyrighted Content - Columbia Undergraduate Law Review, accessed June 26, 2025, https://www.culawreview.org/ddc-x-culr-1/new-york-times-lawsuit-against-openai-and-microsoft-could-redefine-ais-use-of-copyrighted-content
Is Generative AI Fair Use of Copyright Works? NYT v. OpenAI, accessed June 26, 2025, https://copyrightblog.kluweriplaw.com/2024/02/29/is-generative-ai-fair-use-of-copyright-works-nyt-v-openai/
The New York Times Company v. Microsoft Corporation et al - Law360, accessed June 26, 2025, https://www.law360.com/cases/658c2668ce10b801d176291a/articles
Judge dismisses authors' copyright lawsuit against Meta over AI training, accessed June 26, 2025, https://apnews.com/article/meta-ai-copyright-lawsuit-sarah-silverman-e77968015b94fbbf38234e3178ede578
Meta Wins AI Copyright Lawsuit Against Authors | Silicon UK, accessed June 26, 2025, https://www.silicon.co.uk/e-regulation/legal/meta-wins-ai-copyright-lawsuit-against-authors-619885
Anthropic wins key AI copyright case, but remains on the hook for using pirated books, accessed June 26, 2025, https://www.cbsnews.com/news/anthropic-ai-copyright-case-claude/
Federal Judge Rules AI Training Is Fair Use in Anthropic Copyright Case, accessed June 26, 2025, https://www.publishersweekly.com/pw/by-topic/digital/copyright/article/98089-federal-judge-rules-ai-training-is-fair-use-in-anthropic-copyright-case.html
Anthropic wins ruling on AI training in copyright lawsuit but must face trial on pirated books, accessed June 26, 2025, https://apnews.com/article/anthropic-ai-fair-use-copyright-pirated-libraries-1e5cece51c2e4bd0bb21d94de2abb035
Authors Lose Key Battle in AI Copyright Case, But Piracy Fight Continues - eWEEK, accessed June 26, 2025, https://www.eweek.com/news/anthropic-ai-training-copyright-ruling-fair-use-pirated-content/
Fair Use and AI Training Data: Practical Tips for Avoiding Infringement Claims – A Blog Post by Michael Whitener - VLP Law Group LLP, accessed June 26, 2025, https://www.vlplawgroup.com/blog/2025/02/04/fair-use-and-ai-training-data-practical-tips-for-avoiding-infringement-claims-a-blog-post-by-michael-whitener/
Copyright Office Weighs In on AI Training and Fair Use | Skadden, Arps, Slate, Meagher & Flom LLP, accessed June 26, 2025, https://www.skadden.com/insights/publications/2025/05/copyright-office-report
Court Rules AI Training on Copyrighted Works Is Not Fair Use — What It Means for Generative AI - Davis+Gilbert LLP, accessed June 26, 2025, https://www.dglaw.com/court-rules-ai-training-on-copyrighted-works-is-not-fair-use-what-it-means-for-generative-ai/
George Carlin Estate Sues Creators of AI-Generated Comedy Special in Key Lawsuit Over Stars' Likenesses - Yahoo, accessed June 26, 2025, https://www.yahoo.com/entertainment/george-carlin-estate-sues-creators-014359208.html
George Carlin estate sues over fake comedy special purportedly generated by AI | AP News, accessed June 26, 2025, https://apnews.com/article/george-carlin-artificial-intelligence-special-lawsuit-39d64f728f7a6a621f25d3f4789acadd
A Case Study: George Carlin, Artificial Intelligence and the Right of Publicity, accessed June 26, 2025, https://www.lutzker.com/a-case-study-george-carlin-artificial-intelligence-and-the-right-of-publicity/
George Carlin A.I. Imitation Case Reaches Settlement - Smithsonian Magazine, accessed June 26, 2025, https://www.smithsonianmag.com/smart-news/george-carlin-ai-imitation-case-reaches-settlement-180984087/
George Carlin Estate Settles AI-Made Comedy Special Lawsuit (1) - Bloomberg Law News, accessed June 26, 2025, https://news.bloomberglaw.com/ip-law/george-carlin-estate-settles-ai-deepfake-suit-against-podcasters
AI Copyright Issues Take Centre Stage in George Carlin Podcast Dispute, accessed June 26, 2025, https://www.fr.com/insights/thought-leadership/articles/ai-copyright-issues-take-centre-stage-in-george-carlin-podcast-dispute/
SAG-AFTRA A.I. Bargaining And Policy Work Timeline, accessed June 26, 2025, https://www.sagaftra.org/contracts-industry-resources/member-resources/artificial-intelligence/sag-aftra-ai-bargaining-and
Second Take: 'Now And Then' by The Beatles raises ethical questions of AI in music, accessed June 26, 2025, https://dailybruin.com/2024/02/25/second-take-now-and-then-by-the-beatles-raises-ethical-questions-of-ai-in-music
AI-Restored Beatles Song Now and Then Wins Historic Grammy - SentiSight.ai, accessed June 26, 2025, https://www.sentisight.ai/the-beatles-ai-assisted-now-and-then-grammy-win/
Now and Then (Beatles song) - Wikipedia, accessed June 26, 2025, https://en.wikipedia.org/wiki/Now_and_Then_(Beatles_song)
The Beatles new song "Now and Then" used AI to lift out John Lennon's voice - Quartz, accessed June 26, 2025, https://qz.com/new-beatles-song-now-and-then-ai-john-lennon-1850981701
The Beatles' 'Now and Then' Was Made With AI (and That's Okay) | Lifehacker, accessed June 26, 2025, https://lifehacker.com/tech/the-beatles-now-and-then-was-made-with-ai-but-thats-okay
Do You Think The Beatles' Now and Then Paved the Way for AI's Growing Role in Music? And Do You Care? - Reddit, accessed June 26, 2025, https://www.reddit.com/r/beatles/comments/1hsg72y/do_you_think_the_beatles_now_and_then_paved_the/
SAG-AFTRA Contract is a Landmark For AI and IP Interplay - QBE Insurance, accessed June 26, 2025, https://www.qbe.com/media/qbe/north-america/usa/pdf-files/research-and-insights/657830-evynne-grover-article-final.pdf
The AI protections for SAG-AFTRA are lackluster, and it's not just people trying to sow division : r/FilmIndustryLA - Reddit, accessed June 26, 2025, https://www.reddit.com/r/FilmIndustryLA/comments/17ur3pv/the_ai_protections_for_sagaftra_are_lackluster/
Communication expert said AI poses risks and opportunities in entertainment industry, accessed June 26, 2025, https://news.csu.edu.au/latest-news/communication-expert-said-ai-poses-risks-and-opportunities-in-entertainment-industry
Striking actors and studios fight over control of performers' digital replicas - CBS News, accessed June 30, 2025, https://www.cbsnews.com/news/actors-strike-digital-replicas-artificial-intelligence/
What is an Employment Based Digital Replica? - YouTube, accessed June 30, 2025, https://www.youtube.com/watch?v=VdS7GU6Drm0
Digital Replicas 101 What You Need to Know About the 2023 TV/Theatrical Contracts General Information - sag-aftra, accessed June 30, 2025, https://www.sagaftra.org/sites/default/files/sa_documents/DigitalReplicas.pdf
SAG-AFTRA's AI Deal, Explained: Is It Enough to Protect Actors? - Backstage, accessed June 30, 2025, https://www.backstage.com/magazine/article/sag-aftra-ai-deal-explained-76821/
Considerations for the Entertainment Industry after the SAG-AFTRA Deal - WILLIAM FRY, accessed June 30, 2025, https://www.williamfry.com/knowledge/considerations-for-the-entertainment-industry-after-the-sag-aftra-deal/
AI Virtual Actors: Revolutionizing Hollywood and Resurrecting Legends - CHESA, accessed June 30, 2025, https://chesa.com/ai-virtual-actors-revolutionizing-hollywood-and-resurrecting-legends/
De-aging in film and television - Wikipedia, accessed June 30, 2025, https://en.wikipedia.org/wiki/De-aging_in_film_and_television
Synthetic Actors and Digital Avatars - PANOPTICON, accessed June 30, 2025, https://panopticon.am/synthetic-actors-and-digital-avatars/
Digital Replicas: Harm Caused by Actors' Digital Twins and Hope Provided by the Right of Publicity | Texas Law Review, accessed June 30, 2025, https://texaslawreview.org/digital-replicas-harm-caused-by-actors-digital-twins-and-hope-provided-by-the-right-of-publicity/
The Brutalist used AI for Adrien Brody's Oscar-nominated performance. Is that cheating?, accessed June 30, 2025, https://www.cbc.ca/news/entertainment/adrien-brody-ai-brutalist-oscars-1.7465870
Respeecher in Film & TV | How AI is Changing Voice Acting - Celtx Blog, accessed June 30, 2025, https://blog.celtx.com/ai-in-film-respeecher-sonantic/