Beyond Copyright: What Taylor Swift's Trademark Filing Tells Enterprise Leaders About AI Identity Risk
Taylor Swift's trademark filings for her voice and likeness expose a critical IP governance gap that AI has created. Here's what enterprise leaders need to know about deepfake risk, sound marks, and building legal defences before an incident occurs.
28 Apr 2026 • 7 min read

The Problem Is Not Just About Pop Stars
When Taylor Swift filed three trademark applications with the US Patent and Trademark Office (USPTO) in April 2026 — two audio clips of her voice and one stage photograph — the entertainment press covered it as a celebrity story. It is not. It is a governance story, and every enterprise with a public-facing brand, a spokesperson, or a voice-enabled product should pay attention.
The core issue is a structural gap in intellectual property law that AI has torn wide open. As trademark attorney Josh Gerben, who first reported Swift's filings, put it directly:
"Historically, singers relied on copyright law to protect their recorded music. But AI technologies now allow users to generate entirely new content that mimics an artist's voice without copying an existing recording, creating a gap that trademarks may help fill." — Josh Gerben, Trademark Attorney
Replace "singers" with "brand spokespersons, CEOs, or voice assistant products" and the sentence applies perfectly to the enterprise context.
What the Law Currently Protects — and What It Does Not
Copyright law protects original recordings and creative works. It does not protect the underlying characteristics of a voice, a speaking style, or a visual likeness. This was a manageable limitation in a pre-AI world where replicating a voice required significant effort and skill. Today, a generative AI model needs only a short reference audio clip to produce a convincing imitation — in seconds, without ever touching the original recording.
This creates a genuine legal blind spot. An AI-generated audio clip that sounds like your CFO making a fraudulent payment instruction, or a deepfake video of your CEO announcing a policy that never happened, may not technically infringe any existing copyright. The original recording was never copied. The law's current perimeter does not cover it.
Swift's filing attempts to address this through trademark law's "confusingly similar" standard. As Gerben explains: "By registering specific phrases tied to her voice, Swift could potentially challenge not only identical reproductions, but also imitations that are 'confusingly similar,' a key standard in trademark law."
For enterprises, this framework has direct implications. Trademark law offers a federal-level claim that can supplement — not replace — existing right of publicity statutes, which vary significantly by jurisdiction and mostly target malicious or commercial exploitation.
The Incident Record Is Already Long
This is not a theoretical risk. The documented incidents involving public figures alone should trigger concern in any enterprise risk function:
- Scarlett Johansson (2023): Filed legal action against AI app Lisa AI for creating an unauthorised AI avatar using her likeness for advertising.
- Tom Hanks (2024): Publicly warned about multiple online advertisements falsely using his name, likeness, and voice to promote products he never endorsed.
- Bryan Cranston (2025): Raised formal concerns about OpenAI's Sora 2 and its ability to replicate celebrity likenesses without permission.
- Taylor Swift (2023–2026): Targeted by AI-generated deepfakes including fake cookware advertisements, sexually explicit images, and a manipulated political endorsement shared by then-President Donald Trump.
In each case, the harm was reputational, financial, or both. The mechanisms used — AI voice cloning, image synthesis, video manipulation — are the same tools available to anyone targeting an enterprise's executives or brand assets.
Three Enterprise Risk Areas That Require Immediate Review
1. Executive Voice and Likeness as an Unprotected Asset
Most enterprises have registered trademarks covering their logos, product names, and taglines. Very few have considered whether the voice of their CEO in an earnings call, the face of their CMO in a product video, or the synthetic voice of their customer service AI constitutes a protectable brand asset.
It does — and it is currently exposed.
Sound marks are not new. MGM's lion roar, NBC's chimes, and the Pillsbury Doughboy's giggle are all registered sound trademarks. What is new, as Gerben notes, is "attempting to register a celebrity's spoken voice," which "is a new use of trademark registration that has not been tested in court before."
First movers in this space — whether celebrities or enterprises — will have the advantage of establishing legal precedent on their terms rather than defending against it.
2. AI-Generated Content Compliance Is Not Optional
Enterprises using generative AI tools to create marketing materials, customer communications, or internal training content face a mirror version of this risk: they may be creating content that infringes on someone else's voice trademark or likeness rights without any intent to do so.
As AI content generation becomes standard practice inside organisations, legal and compliance teams need a clear policy framework covering:
- What voice or likeness data can be used as AI training input
- How AI-generated audio and video content is reviewed before publication
- What consent and licensing documentation is required
- Which jurisdictions' right of publicity laws apply to your operations
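To make the review step concrete, the checklist above can be expressed as an automated publication gate. The sketch below is purely illustrative: the field names, rules, and thresholds are assumptions for this example, not requirements drawn from any statute or vendor tool.

```python
from dataclasses import dataclass

# Hypothetical pre-publication gate for AI-generated media.
# All field names and rules are illustrative assumptions.

@dataclass
class AIMediaAsset:
    asset_id: str
    uses_cloned_voice: bool       # synthetic voice modelled on a real person
    uses_real_likeness: bool      # real face or image used as generation input
    consent_on_file: bool = False
    licence_reference: str = ""   # pointer to the signed licence document
    reviewed_by_legal: bool = False

def publication_blockers(asset: AIMediaAsset) -> list[str]:
    """Return unmet conditions; an empty list means cleared to publish."""
    blockers = []
    if (asset.uses_cloned_voice or asset.uses_real_likeness) and not asset.consent_on_file:
        blockers.append("missing consent documentation")
    if asset.uses_cloned_voice and not asset.licence_reference:
        blockers.append("missing voice licence reference")
    if not asset.reviewed_by_legal:
        blockers.append("awaiting legal review")
    return blockers
```

A marketing pipeline could call `publication_blockers` before any AI-generated asset is released, and refuse to publish while the list is non-empty. The value is less in the code than in forcing the consent and licensing questions to be answered in a recorded, auditable form.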
The legislative environment is tightening. Multiple US states have passed or are considering restrictions on AI misuse of identities. Tennessee's 2024 ELVIS Act (Ensuring Likeness, Voice, and Image Security Act) is the most prominent, offering broader protection than most comparable legislation. More will follow.
3. Deepfake Fraud Is a Financial Control Issue, Not Just a PR Issue
For sectors including financial services, legal, and healthcare — where decisions are made based on verbal or visual authority — the deepfake risk moves from reputational to transactional. An AI-cloned voice of a CFO authorising a wire transfer. A synthetic video of a board member approving a contract. These are not edge cases; they are documented attack vectors.
YouTube's recent move — partnering with talent agencies to open its proprietary deepfake detection tool to celebrities and entertainers — signals that platform-level controls are being built. Enterprises should not wait for platform solutions. Internal deepfake detection capability, particularly for executive communications, is a legitimate technology investment.
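One practical control follows directly from the threat model: treat voice and video as untrusted channels, and require an out-of-band cryptographic check before any high-value instruction is actioned. The sketch below is a minimal illustration using Python's standard library; the shared key, message format, and workflow are assumptions for the example, and a production system would use asymmetric signatures and a proper key-management service.

```python
import hashlib
import hmac

# Illustrative control: a payment instruction is only actioned if it
# carries a valid HMAC tag produced by the authoriser's signing system.
# A cloned voice cannot produce this tag. Key handling here is a
# placeholder assumption, not a recommendation.
SHARED_KEY = b"rotate-me-via-your-secrets-manager"

def sign_instruction(message: str, key: bytes = SHARED_KEY) -> str:
    """Authoriser's system attaches an HMAC-SHA256 tag to the instruction."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_instruction(message: str, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Treasury verifies the tag before any transfer is executed."""
    expected = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The point of the sketch is the control principle, not the mechanism: a convincing deepfake of an executive's voice defeats human recognition, but it cannot forge a cryptographic tag, so the authorisation path no longer depends on what the instruction sounds or looks like.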
What Matthew McConaughey Got Right
In January 2026, McConaughey became the first A-list celebrity to file trademarks covering images, video, and audio of himself. When asked about the rationale, he was direct:
"We want to clearly define ownership in the AI world, where licensing and attribution become the industry norm." — Matthew McConaughey
That sentence describes a governance philosophy, not just a legal tactic. It reflects the same principle that underlies enterprise data governance, software licensing, and vendor contract management: define ownership boundaries before disputes arise, not during them.
Practical Steps for Enterprise Leaders
Short-term (0–3 months):
- Conduct an IP audit that explicitly includes voice assets, executive likenesses, and synthetic AI voices used in customer-facing products.
- Brief legal and compliance teams on the trademark vs. copyright distinction in the context of AI-generated content.
- Establish a documented internal review policy that all AI-generated audio and video content must clear before publication.
Medium-term (3–12 months):
- Evaluate whether brand voice assets — including AI voice assistants or spokesperson recordings — meet the threshold for sound trademark registration.
- Invest in or procure deepfake detection capability for executive communications channels.
- Monitor legislative developments in key operating jurisdictions. The ELVIS Act model is spreading.
Long-term:
- Build AI identity governance into your broader enterprise AI policy framework, not as a separate initiative.
- Engage legal counsel with specific experience in sound marks and right of publicity law, particularly if your operations span multiple jurisdictions.
Conclusion
Taylor Swift's trademark filings are a signal, not a solution. They highlight that existing legal frameworks were not built for a world where any voice can be replicated from a short audio sample, and any face can be placed in any video at scale. The legal system will adapt, but that process is slow and the outcome is uncertain.
Enterprises cannot afford to wait for case law to settle. The tools to exploit voice and likeness exist now. The governance frameworks to manage that risk — IP strategy, compliance policy, detection technology — need to be built now.
The question is not whether your organisation's assets are at risk. They are. The question is whether you have drawn the ownership boundaries clearly enough to defend them.