Wednesday, March 11, 2026

The next legal frontier is your face and artificial intelligence


This is A Step Back, a weekly newsletter that highlights one big story from the world of technology. To learn more about the AI legal quagmire, follow Adi Robertson. A Step Back arrives in subscribers’ inboxes at 8:00AM ET. Sign up for A Step Back here.

The song was called “Heart on My Sleeve,” and if you didn’t know any better, you might guess you were hearing Drake. If you did know better, you heard the opening bell for a new legal and cultural battle: a fight over how AI services should be able to use people’s faces and voices, and how platforms should respond.

The AI-generated fake-Drake song “Heart on My Sleeve” was a novelty in 2023, but the problems it posed were clear. The song, which faithfully imitated the star artist, shocked musicians. Streaming services removed it on copyright grounds. But its creator didn’t directly copy anything; he just made a very close imitation. Attention therefore quickly turned to a separate area of law: likeness rights. It’s a field that was once synonymous with celebrities fighting unauthorized endorsements and parodies, and as audio and video deepfakes spread, it seemed like one of the few tools available to regulate them.

Unlike copyright law, which is governed by the Digital Millennium Copyright Act and numerous international treaties, there is no federal likeness law. Instead, there’s a patchwork of different state laws, none of which were originally designed with AI in mind. Over the last few years, though, efforts have been made to change that. In 2024, Tennessee Gov. Bill Lee and California Gov. Gavin Newsom, whose states both rely heavily on their media industries, signed bills expanding protections against unauthorized replicas of artists.

Predictably, though, the law has developed slower than the technology. Last month, OpenAI launched Sora, an AI video generation platform designed to capture and remix the likenesses of real people. It opened the floodgates to a torrent of often surprisingly realistic deepfakes, including of people who did not consent to their creation. OpenAI and other companies are responding by implementing their own likeness rules, which, in the absence of anything else, could harden into de facto rules for the internet.

OpenAI denies that Sora’s launch was reckless; CEO Sam Altman has said its guardrails were, if anything, “too restrictive.” Even so, the service has drawn many complaints. It launched with minimal restrictions on the likenesses of historical figures, only to reverse course after Martin Luther King Jr.’s estate complained about “disrespectful” portrayals of the slain civil rights leader spouting racism or committing crimes. It advertised careful restrictions on the unauthorized use of living people’s likenesses, but users found ways around them, putting celebrities such as Bryan Cranston into Sora videos (in one, taking a selfie with Michael Jackson), which led to complaints from SAG-AFTRA and pushed OpenAI to strengthen its defenses there, too, in unspecified ways.

Even some people who did authorize Sora “cameos” (the service’s term for videos that use a person’s likeness) were unhappy with the results, which, in the case of women, included all kinds of fetish content. Altman said he hadn’t realized people might have “in-between” feelings about authorized likenesses, such as not wanting them to “say offensive things or things that they find deeply problematic.”

Sora is making changes, like adjusting its policies around historical figures, but it’s not the only AI video service, and overall, things are getting very weird. AI-generated clips have become de rigueur for President Donald Trump’s administration and some other politicians, including vulgar or outright racist depictions of specific political enemies: Trump responded to last week’s No Kings protests with a video of himself dumping feces on someone resembling liberal influencer Harry Sisson, while New York mayoral candidate Andrew Cuomo posted (and quickly deleted) a “Criminals for Zohran Mamdani” video showing his Democratic opponent devouring handfuls of rice. As Kat Tenbarge described in Spitfire News earlier this month, AI-generated videos are also becoming ammunition in influencer dramas.

There is a near-constant risk of legal action over unauthorized videos, with stars such as Scarlett Johansson threatening suits over the use of their likenesses. But unlike AI copyright infringement allegations, which have sparked numerous high-profile lawsuits and near-constant regulatory debates, few likeness incidents have reached that level — perhaps partly because the legal landscape keeps shifting.

When SAG-AFTRA thanked OpenAI for changing Sora’s defenses, it used the opportunity to promote the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a multi-year effort to codify protections against “unauthorized digital replicas.” The NO FAKES Act, which has also received support from YouTube, would introduce nationwide rules controlling the use of a “computer-generated, highly realistic electronic representation” of the voice or visual likeness of a living or deceased person. That includes liability for online services that knowingly host unauthorized digital replicas.

The NO FAKES Act has drawn heavy criticism from online free speech groups. The EFF called it a mandate for a “new censorship infrastructure” that would force platforms to filter content so broadly that it will almost inevitably lead to unintended takedowns and “heckler’s vetoes” across the internet. The bill includes carve-outs for parody, satire, and commentary, which should be allowed even without permission, but those will be “cold comfort to those who cannot afford to have this issue resolved,” the organization warned.

Opponents of the NO FAKES Act can take solace in how little legislation Congress is currently able to pass — we’re living through the second-longest federal government shutdown in history, and there’s even a separate push to block state AI regulations that could invalidate new likeness laws. But from a pragmatic point of view, likeness rules are still coming. Earlier this week, YouTube announced that it would let creators in its Partner Program search for unauthorized uploads of their likeness and request their removal. The move expands existing policies that, among other things, let music industry partners remove content that “mimics the unique voice of a singing or rapping artist.”

And through all of this, social norms continue to evolve. We’re entering a world where you can easily generate a video of almost anyone doing almost anything — but when should you? In many cases, those expectations are still unsettled.

  • Most of this recent discussion revolves around AI videos of people doing merely weird or stupid things, but research has historically shown that the vast majority of deepfakes are pornographic images of women, often made without their consent. Outside of Sora, there’s a whole other conversation going on about things like “nudify” AI services, where the legal issues resemble those around other nonconsensual sexual imagery.
  • Beyond the basic legal question of when a likeness is unauthorized, there are also questions about when a video may be defamatory (if it’s sufficiently realistic) or harassing (if it’s part of a broader pattern of harassment and threats), which can further complicate particular situations.
  • Social media platforms are accustomed to near-blanket protection from liability under Section 230, which says they cannot be treated as the publisher or speaker of third-party content. As more services actively help users generate content, a fascinating question is how far Section 230 will protect the resulting images and videos.
  • Despite long-standing fears that artificial intelligence will make it truly impossible to distinguish fantasy from reality, it’s still often easy to use context and “tells” (from specific editing tics to obvious watermarks) to check whether a video was AI-generated. The problem is that many people don’t look closely enough — or simply don’t care whether it’s fake.
  • Sarah Jeong’s warning about easily manipulated photos is even more relevant now than when it was published in 2024.
  • The New York Times has a comprehensive look at Trump’s particular fondness for AI-generated content.
  • Max Read analyzes Sora as a social media platform and whether it will “work.”
