It’s certainly an interesting time to be a copyright lawyer!

The past two decades have brought us Napster, YouTube, social media, mobile app development, cryptocurrency, and powerful blockchain technology. But nothing – like, really, nothing – has the capacity to transform our lives quite like artificial intelligence.

And in 2023, perhaps no AI issue received more attention, elicited more fear in the market, or was more hotly debated than the intellectual property questions raised by the early adoption of generative AI in commerce.

Our readers know us to be nimble with new technology, early adopters and techno-optimists. We enjoy diving into new technology, discovering who is using it, and why. We like to evangelize, debate, and discuss. And that’s exactly what we did in 2023.

As we close out the year, this piece aims to share some early-inning insights and experiences from our year with AI.


Intellectual Property + Name, Image, and Likeness


Two of the biggest areas of concern relate to intellectual property and publicity rights.

Intellectual Property (IP) refers to copyright, trademarks, and patents (sometimes, trade secrets are included). Copyrightable works are things like songs, movies, videos, books, artworks – anything original that’s fixed in a tangible medium and authored by a human (more on this last part later). A trademark is a word, phrase, symbol, design, smell, or any combination of these things that serves to identify the source of certain goods or services. The word NIKE, the Swoosh, the word APPLE for computers, you get the idea.

Publicity rights in the United States generally refer to NIL rights, or name, image, and likeness rights. Generally speaking (a dangerous thing for me to do since NIL rights vary from state to state, unlike our IP laws, which are federal and uniform), human beings have the right to control the commercial use of their name, image, and likeness (identity).

Let’s briefly discuss each in turn as they relate to issues created by generative AI.



ChatGPT + Midjourney


The 2022 holiday season included a market invasion of new technologies using generative AI. No product gained more attention than ChatGPT. ChatGPT is a form of generative AI — a tool that lets users enter prompts to receive humanlike images, text, videos, or a combination thereof that are created by AI.

Midjourney is a generative artificial intelligence program and service that generates images from natural language descriptions, called “prompts”, and is similar to OpenAI’s DALL-E and Stability AI’s Stable Diffusion. Humans enter text or image prompts, and the machines generate new images and artworks.

Together, this suite of generative AI tools – ChatGPT, Midjourney, DALL-E, and Stable Diffusion (and dozens of competitors) – found itself in an avalanche of copyright litigation that helped inspire a Hollywood strike, created a new way for artists and content makers to create work, and kept everyone at Jayaram extraordinarily busy during the back half of 2023!


Are Works Created by AI Entitled to IP Protection?


Kris Kashtanova is a computer scientist and generative AI artist. In 2022, they created a graphic novel called Zarya of the Dawn, illustrating it with generative AI tools like Midjourney. In September of 2022, the United States Copyright Office (USCO) granted Kris a copyright registration for Zarya of the Dawn (just as it grants thousands of book copyrights a year). But after learning that the work was created using generative AI tools, the USCO reversed course and narrowed the scope of the registration to just the text and the selection and arrangement of the visual elements. The new registration excluded the images themselves, since those were created using Midjourney. The USCO said that while the images in Kashtanova’s book are original and fixed in a tangible medium, they lacked “human authorship,” the third requirement to obtain copyright registration in the United States.

Around the same time, scientist and inventor Stephen Thaler tried to get a copyright registration for an artwork called “A Recent Entrance to Paradise,” which was created by the Creativity Machine, an AI system created by Thaler. The USCO rejected Thaler’s application, too, concluding that it lacked “human authorship.” Thaler took his grievances to federal court, where a federal judge issued the first-ever court decision addressing the copyrightability of works created by a generative AI tool. The Court agreed with the USCO and found that because “A Recent Entrance to Paradise” was not created by a human being (but rather by artificial intelligence), it failed to meet the “human authorship” requirement of the Copyright Act.

Kashtanova and Thaler make one thing crystal clear: content created by generative AI tools currently does not receive copyright protection in the United States.

Why is this so important? Because without copyright protection, Kashtanova and Thaler cannot stop anyone from making the same or similar artwork. A Supreme Court case from a few years ago, Fourth Estate v. Wall-Street.com, requires plaintiffs in copyright litigation to have a registration in order to file a case against an infringer in federal court. And since this is the current state of affairs, brands will not be interested in licensing content from licensors and creators who use generative AI to create it. These are works without any meaningful legal protection.


Can Models Be Trained on Copyrighted Material?


When a human enters a prompt on Midjourney and an image is created, how does that happen? Midjourney (and many others) “train” models on millions of images and other data. This data is called training data, and anything created on Midjourney essentially pulls from this data. So, when you ask Midjourney to make something, it follows your instructions and creates an image based on the hundreds of millions of images on which it has been trained. Midjourney says it uses “publicly accessible” data to train its models (images and data available on the internet to anyone).

On January 13, 2023, a group of artists filed a class action lawsuit against Midjourney and others, claiming that the act of training a model with plaintiffs’ copyrighted material is copyright infringement.

A few months later, Sarah Silverman (the comic and author) and other authors sued OpenAI, claiming that it trained ChatGPT’s underlying model on Silverman’s copyrighted books. Michael Chabon followed suit, as did the Authors Guild, and many others.

As of publication, at least a dozen suits have been filed that say basically the same thing: using copyrighted material to train a model is infringement.

Predictions are always dangerous, but we’re all friends, so here’s one: as a result of the Google v. Oracle case (discussed at length in The Innovator 001) and the hiQ v. LinkedIn case, I think that the mere act of training a model on truly “publicly accessible” data will probably be okay, whether on fair use or other grounds. But Silverman (and quite possibly others) should have some success in arguing that their works are not “publicly accessible.” Scraping private or proprietary information has long been illegal under the Computer Fraud and Abuse Act, as well as a litany of state hacking laws.

How do we know if something is public or private? Publicly accessible information is anything you can view on the internet without additional credentials. So, if Midjourney is scraping images from an art gallery’s website that shows hundreds of its artists’ works on a publicly accessible page, that’s publicly accessible. If, on the other hand, the model is built by scraping copyrighted books (as in Sarah Silverman’s case) that are not available unless they are purchased, that’s something “private” (you can’t go read the Silverman book online anywhere unless you buy it!).

The purpose of the Copyright Act is to inspire innovation, not hinder it. So, we can imagine a world in which Congress or the courts decide that training is an imperative for innovation. But we can also expect Congress or the courts to put clear guardrails around training on material that’s not publicly available.


Drake, The Weeknd, and NIL


This spring, a producer named Ghostwriter dropped “Heart On My Sleeve,” a collaborative track featuring AI-generated facsimiles of Drake and The Weeknd’s voices. Translation: Ghostwriter wrote a song and used AI to have Drake and The Weeknd “sing” on his track without their consent.

The internet went berserk, and the lawyers weighed in.

The long and short of it is that Drake and The Weeknd don’t have any copyright claims against Ghostwriter: the producer wrote the track himself.

But Drake and The Weeknd probably had a claim based on their right to control their name, image, and likeness. This goes back to a 1988 case involving Bette Midler, who sued Ford for using a backup singer to mimic her voice in a Mercury Sable ad featuring a cover of “Do You Want to Dance,” her 1970s hit. Midler won big, with the Court saying that a voice – like a person’s identity – cannot be pirated.

As we enter 2024, a year when music and AI will certainly be working more together, artists can take some solace in knowing that the current NIL landscape should be strong enough to protect their voices from piracy.


What Are Companies Hiring Us For?


Three groups of clients are coming to us for guidance on these kinds of issues:

1  Artists

2  Public-facing brands

3  Fashion brands

The artists are hiring us for guidance on how to protect artworks created by generative AI tools. Since they currently don’t get copyright protection (see Kashtanova and Thaler, above), they need creative strategies. We’ve had a good deal of success over the last few months securing registrations for works that have partially been created using AI.

The brands want our help guiding their creative and marketing teams on how to responsibly use these tools without getting sued. They also want to know how they can use these tools to create advertising and marketing content, as well as product descriptions and other marketing copy. Like the artists, these brands want to know what protection they can obtain for these kinds of works.

And in fashion, the brands want to use these tools to help create products, which presents issues related to trade dress and trademark law.

This year alone, we provided CLE for in-house legal departments at some of the largest companies in the US. We also provided more tailored seminars and guidance for in-house content creation teams. We work closely with creative teams and with leadership to help them gain an understanding of the tools in the market, the risks involved, and the best paths to meaningful protection that can help during a liquidity event or other transaction down the road.


Some Takeaways


1  Works and content created using generative AI tools are not currently eligible for copyright protection.

2  The mere use of Midjourney or a similar tool is not infringement. The focus remains on the output: if the output is substantially similar to a copyrighted work, it is infringement. If not, it’s not, no matter what the prompt was in the first place.

3  Training a model on non-publicly available data is problematic.

4  Voice is a part of likeness.

5  Using ChatGPT is not illegal. But if it provides you with infringing content that you use, that’s infringement. Like Midjourney, the focus remains on the output.

The post Chat 2023: The Year AI Broke by Vivek Jayaram first appeared on Jayaram Law.