Unfair Use: The Struggle for Ownership in the Age of AI

If not for a string of interview requests from news outlets, artist Erin Hanson might never have known that her life’s work had been used to train the AI platform Stable Diffusion, which could now recreate it. Hanson, a prominent modern artist known for inventing the “open impressionism” painting style, has produced thousands of oil paintings over her nearly two-decade career. Her distinctive style, marked by vibrant and expressive landscapes, was replicated so closely by Stable Diffusion that she couldn’t help but feel impressed. “It was pretty amazing,” Hanson said. “Obviously, I’m an artist. I know what an original oil painting looks like, and I can tell when something’s been computer-generated. But someone not in the know might go, ‘Oh yeah, that looks like a painting by Erin Hanson.’”
But Hanson’s initial admiration couldn’t overshadow her concern about the liberal use of her intellectual property. “The issue I have with [generative AI] is when people then sell the image. That’s where copyright violation comes into play,” said Hanson. “It’s one thing to use. You know, anyone can copy art for their own personal use. That’s not illegal. It’s only a problem when you then try to sell it.”

Hanson’s story is just one of many in the ongoing battle between creators and AI companies that depend on vast, often copyrighted, data to train their machines. Copyright law in the United States grants creators exclusive rights to the reproduction, distribution and derivatives of their original work. If a company wants to use an artist’s work for commercial purposes, it is legally obligated to obtain the creator’s permission and, in most cases, provide compensation. Creators’ rights to the reproduction and distribution of their work are at the center of the suits leveled against AI companies.

One particularly notable case is the ongoing New York Times v. OpenAI. The New York Times alleges that in response to certain prompts, the popular chatbot ChatGPT outputs its articles verbatim. The Times’ ability to cite specific examples of ChatGPT parroting its articles makes its case a particularly strong one in the battle between writers and AI. Yet there remain nuances to the lawsuit. Jane Friedman, co-founder of publishing-industry newsletter The Hot Sheet, said, “The New York Times has been accused of manipulating the prompts to such an extent that they are trying to get an infringing result. It’s not like you can easily get these models to regurgitate.” But Friedman acknowledged that to many, this doesn’t change the fundamental violation by the AI companies. “If the models can regurgitate, regardless, then…they’re infringement machines, and you can’t get past that.”

New York Times v. OpenAI is not the only copyright lawsuit pending against AI companies today. The explosion in popularity of these AI models three years ago sparked a veritable litigative reckoning against every aspect of their use. Some suits are concerned with the training process itself. Umair Kazi, Director of Policy and Advocacy at the Authors Guild, said that when an AI company “downloads illegal copies or unauthorized copies and makes further copies of those books in the course of training, that is copyright infringement.” In other words, both the initial pirating of content and the additional copies made during training violate creators’ exclusive rights to their works’ reproduction.

AI companies are fighting back against these claims, though, on the grounds that their work constitutes fair use, a doctrine that allows limited use of copyrighted material without permission. Uses protected under fair use include commentary, criticism, education and transformative work. A landmark fair use case is Authors Guild, Inc. v. Google, Inc., which concerned Google’s Google Books project. To create a searchable online database, Google scanned millions of copyrighted books. While the Authors Guild held that this project violated copyright law by reproducing and distributing authors’ work without permission, the U.S. Court of Appeals for the Second Circuit ruled in favor of Google in 2015 under the fair use doctrine. The court held that the project was sufficiently transformative, as its purpose was to create a searchable database separate from what the original works offered.

But Kazi explained that new cases leveled against AI companies bear significant differences from the Google Books case. “There’s just copying happening…and the copying facilitates the creation of new works that then compete in the market for words. So, it’s hard to see what is transformative or what new medium is being added.”

Matthew Butterick is a lawyer working with the Joseph Saveri Law Firm, which has brought a wave of intellectual property lawsuits against AI companies. Butterick is similarly suspicious of AI companies’ fair use arguments. “If they’re right, then that means that they get to use the data for free, forever, without asking anybody.” Butterick continued, “These CEOs are standing up and saying, we’re about to build the most valuable thing that capitalism has ever seen. Oh, but we can’t pay for the copyrighted works. I think there’s a huge amount of eye rolling.”

This issue is especially personal for Butterick, who, in addition to being a lawyer, is also a designer, author and programmer. Specializing in typography, Butterick has released several typefaces and also penned a 2010 book titled Typography for Lawyers. He explained, “I noticed that my own work was in the data sets of all these models. So it was especially interesting to me, right? I mean, I could tell that my words were being quoted back to me by ChatGPT. It was a little alarming.” Butterick added, “It was apparent to me that it was going to cause a very serious impact on creators of every kind, especially people of my stripe—independently self-employed creators. So, I reactivated my law license to get involved.”  

Butterick sees an exploitation of authentic creation at the heart of these AI technologies. “The term artificial intelligence itself is something of a misnomer,” he said. “It’s really human intelligence that’s being moved from one location, passed through this software object.” Butterick explained that despite the companies’ promises of revolutionary societal change, they’re still heavily dependent on human ideas. “These companies have no way right now to scale their business without this voracious appetite for human-created work.”

W. Ralph Eubanks, a professor at the University of Mississippi and a member of the Authors Guild, also remains unconvinced by AI companies’ claims about their software as a tool for creativity. Eubanks is currently writing a book about the Mississippi Delta, one chapter of which examines the town of Mound Bayou. He uses his own research as an example to underscore a problem with generative AI tools. “There [are] several books that I had to read on [Mound Bayou], just to get an idea of what was there… but by going to generative AI and getting something about Mound Bayou from it, I would have missed a lot of perspectives.” When it comes to the creative process, Eubanks ultimately sees AI as ineffective and overcomplicated. His advice is pretty simple: “Read a book, that’s my immediate reaction. Don’t use [AI] to brainstorm.”

Eubanks offers a unique perspective on the issue, as he grapples with AI both as a writer and as a professor. Fair use includes a major educational component; in other words, the use of copyrighted works is often permitted for educational purposes. But far from an aid, Eubanks has found AI to be an impediment to learning and development in the classroom. “I’m concerned about what this says to students about the craft of writing, that you can really begin a project without your own blood, sweat, toil, tears. That you can generate something and then maybe take it from there and go ahead. That’s not how the best writing is produced.” Ultimately, Eubanks sees AI companies’ unauthorized use of writers’ work as an erasure of a writer’s legacy. If AI is permitted to freely use his and other authors’ works, “my intellectual labor has not been compensated,” Eubanks said. “When I’m not here, what my children inherit, it’s not valuable. My life’s work has no value.”

Eubanks’ concern underscores the deeply personal stakes in the fight over intellectual property rights; for many artists and writers, their work represents a lifetime of labor meant to be passed down. Yet from the perspective of AI models, even the most extensive collections are just a small fraction of the required data. 

Jane Friedman explained, “The metaphor that I think has been most useful in understanding this is grains of sand on a beach. The model works when you give it the entire beach and it can extrapolate from there how to build lots of different things or output lots of different things. But no single grain by itself is particularly valuable.” This means the bargaining power of one individual is relatively small. This complicates issues for creatives, many of whom are freelance or independent, who want credit for their works’ contributions to the models. 

As AI companies face mounting pressure over their data-training practices, some legal experts suggest collective action as a potential solution. James Grimmelmann, a professor at Cornell Law School, said, “It’s one possibility to put some pressure on artists to come together into collective unions that can do AI licenses for all of those members. [It’ll] capture some number of professional artists.” But though this united front could offer some protection, the vast majority of online content creators would remain unprotected. “It’s not going to capture you and me writing blog posts and social media posts that are public,” said Grimmelmann, highlighting a key challenge with no clear solution.

Even as the dispute between creatives and AI companies grows increasingly fraught, artists are not letting it slow their craft. Hanson, who owns an art studio in Oregon, remains optimistic about the future of art. “[AI] certainly hasn’t decreased my creativity in any way. I get my inspiration by going outside and looking at beautiful landscapes. I don’t go online to get my inspiration,” Hanson said. “I hope that our government will step up and do something to protect artists, but it’s certainly not going to stop creativity… artists are artists. They’re going to create no matter what.”