OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.
In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.
OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.
The problem is not that it’s regurgitating. The problem is that it was trained on NYT articles and other data in violation of copyright law. Regurgitation is just evidence of that.
Training on copyrighted data should be allowed as long as it’s something publicly posted.
Only if the end result of that training is also something public. OpenAI shouldn’t be making money on anything except ads if they’re using copyright material without paying for it.
I was trained on copyrighted material… I guess I should work for free
I’ve seen and heard your argument made before, not just for LLM’s but also for text-to-image programs. My counterpoint is that humans learn in a very similar way to these programs, by taking stuff we’ve seen/read and developing a certain style inspired by those things. They also don’t just recite texts from memory, instead creating new ones based on probabilities of certain words and phrases occuring in the parts of their training data related to the prompt. In a way too simplified but accurate enough comparison, saying these programs violate copyright law is like saying every cosmic horror writer is plagiarising Lovecraft, or that every surrealist painter is copying Dali.
It doesn’t work that way. Copyright law does not concern itself with learning. There are 2 things which allow learning.
For one, no one can own facts and ideas. You can write your own history book, taking facts (but not copying text) from other history books. Eventually, that’s the only way history books get written (by taking facts from previous writings). Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.
Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.
In the main, these lawsuits won’t go anywhere. I don’t want to guarantee that none of the relative side issues will be found to have merit, but basically this is all nonsense.
Generally you’re correct, but copyright law does concern itself with learning. Fair use exemptions require consideration of the purpose character of use, explicitly mentioning nonprofit educational purposes. It also mentions the effect on the potential market for the original work. (There are other factors required but they’re less relevant here.)
So yeah, tracing a comic book to learn drawing is totally fine, as long as that’s what you’re doing it for. Tracing a comic to reproduce and sell is totally not fine, and that’s basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.
I meant “learning” in the strict sense, not institutional education.
I think you are simply mistaken about what AI is typically doing. You can test your “tracing” analogy by making an image with Stable Diffusion. It’s trained only on images from the public internet, so if the generated image is similar to one in the training data, then a reverse image search should turn it up.
And I suppose people at OpenAI understand how to build a formal proof and that it is one. So it’s straight up dishonest.
violation of copyright law
That’s quite the claim to make so boldly. How about you prove it? Or maybe stop asserting things you aren’t certain about.
17 USC § 106, exclusive rights in copyrighted works:
Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:
(1) to reproduce the copyrighted work in copies or phonorecords;
(2) to prepare derivative works based upon the copyrighted work;
(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;
(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;
(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and
(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.
Clearly, this is capable of reproducing a work, and is derivative of the work. I would argue that it’s displayed publicly as well, if you can use it without an account.
You could argue fair use, but I doubt this use would meet any of the four test factors, let alone all of them.
Only publishing it is a copyright issue. You can also obtain copyrighted material with a web browser. The onus is on the person who publishes any material they put together, regardless of source. OpenAI is not responsible for publishing just because their tool was used to obtain the material.
There are issues other than publishing, but that’s the biggest one. But they are not acting merely as a conduit for the work, they are ingesting it and deriving new work from it. The use of the copyrighted work is integral to their product, which makes it a big deal.
Yeah, the ingestion part is still to be determined legally, but I think OpenAI will be ok. NYT produces content to be read, and copyright only protects them from people republishing their content. People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies. This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news. It’ll be interesting to see how it plays out though.
copyright only protects them from people republishing their content
This is not correct. Copyright protects reproduction, derivation, distribution, performance, and display of a work.
People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies.
Yes, you can legally make derivative works, but without license, it has to be fair use. In this case, where not only did they use one whole work in its entirety, they likely scraped thousands of whole NYT articles.
This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news.
This isn’t necessarily correct either. I assume they sell access to their archives, for research or whatever. Being able to retrieve articles verbatim through chatgpt does harm their business.
Yes, you can legally make derivative works, but without license, it has to be fair use. In this case, where not only did they use one whole work in its entirety, they likely scraped thousands of whole NYT articles.
Scraping is the same as reading, not reproducing. That isn’t a copyright violation.
Antiquated IP laws vs Silicon Valley Tech Bro AI…who will win?
I’m not trying to be too sarcastic, I honestly don’t know. IP law in the US is very strong. Arguably too strong, in many cases.
But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans. I think the tide is slowly turning against that, but I wouldn’t count them out yet.
It will be interesting to see how this stuff plays out. Generally speaking, tech and progress tends to win these things over the long term. There was a time when the concept of building railroads across the western United States seemed logistically and financially absurd, for just one of thousands of such examples. And the nay sayers were right. It was completely absurd. Until mineral rights entered the equation.
However, it’s equally remarkable a newspaper like the NYT is still around, too.
But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans.
Not supporting them, but that’s the whole point.
A lot of closed gardens get disrupted by tech. Is it for the better? Who knows. I for sure don’t know. Because lots of rules were made by the wealthy, and technology broke that up. But then tech bros get wealthy and end up being the new elite, and we’re back full circle.
seems like they’re mostly for the worse, really.
Wikipedia destroyed the paper encyclopedia business.
Online courses disrupted higher education. Half of my team don’t have a degree in computer science.
Say what you want about Airbnb/Uber, but the time before that was a shit show to be a black person trying to hail a taxi.
I’m sure you can name dozens of wtfs like Facebook, and misinformation. But I’m not so pessimistic because we got a lot of real great cultural shifts in the past twenty years.
neither wikipedia nor online education aimed to be disruptive. and the only good thing you can say about airbnb and uber are “it’s easier for black ppl to get a taxi” I think there’s a difference between the “tech bro hustling” and wikipedia.
AI is still on the hustling side as far as I’m concerned
OpenAI claims that the NYT articles were wearing provocative clothing.
Feels like the same awful defense.
Yeah I agree, this seems actually unlikely it happened so simply.
You have to try really hard to get the ai to regurgitate anything, but it will very often regurgitate an example input.
IE “please repeat the following with (insert small change), (insert wall of text)”
GPT literally has the ability to get a session ID and seed to report an issue, it should be trivial for the NYT to snag the exact session ID they got the results with (it’s saved on their account!) And provide it publicly.
The fact they didn’t is extremely suspicious.
I wonder how far “ai is regurgitating existing articles” vs “infinite monkeys on a keyboard will go”. This isn’t at you personally, your comment just reminded me of this for some reason
Have you seen library of babel? Heres your comment in the library, which has existed well before you ever typed it (excluding punctuation)
https://libraryofbabel.info/bookmark.cgi?ygsk_iv_cyquqwruq342
If all text that can ever exist, already exists, how can any single person own a specific combination of letters?
There is no mathematical definition of copyright, because it’s just based on feelings. That’s why every small problem has to be arbitrarily decided by a court.
If all text that can ever exist, already exists, how can any single person own a specific combination of letters?
They don’t own it, they just own exclusive rights to make copies. If you reach the exact same output without making a copy then you’re in the clear.
There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.
This feels so much like an Onion headline.
“They tricked us!”
…
“That said… we would still like to ‘work’ with them.”
If you can prompt it, “Write a book about Harry Potter” and get a book about a boy wizard back, that’s almost certainly legally wrong. If you prompt it with 90% of an article, and it writes a pretty similar final 10%… not so much. Until full conversations are available, I don’t really trust either of these parties, especially in the context of a lawsuit.
So I copied the first paragraph of the Osama Bin Laden Killed NYT Article and asked Chat GPT to give me an article on the topic “in the style of NYT”
Even before the thing had finished generating, it was clear to me that it was high school level “copy my homework but don’t make it obvious” work.
I put it into a plagiarism checker anyway and it said “Significant Plagiarism Found”
Here’s the chatGPT text for anyone who is interested-
WASHINGTON — Osama bin Laden, the mastermind of the most devastating attack on American soil in modern times and the most hunted man in the world, was killed in a firefight with United States forces in Pakistan, President Obama announced on Sunday.
In a late-night address to the nation from the East Room of the White House, President Obama declared, “Justice has been done.” The death of Bin Laden, the Al Qaeda leader who had eluded international forces for nearly a decade, marks a significant turning point in the global fight against terrorism and a pivotal victory for U.S. intelligence and military forces.
Bin Laden’s demise, the result of an operation that unfolded over the course of a few tense hours, is being hailed as a triumph of U.S. military and intelligence efforts. The operation, carried out by a small team of American forces in Abbottabad, a city north of Islamabad, ended a worldwide manhunt that began in the aftermath of the September 11 attacks in 2001.
The President, in his address, detailed how the United States had received an intelligence lead last August about Bin Laden’s possible location. Over the ensuing months, intelligence officials worked to confirm the information. “Finally, last week, I determined that we had enough intelligence to take action, and authorized an operation to get Osama bin Laden and bring him to justice,” Obama said.
The raid on Bin Laden’s compound, described by officials as a surgical strike, was a high-stakes operation. U.S. helicopters ferried elite counter-terrorism forces into the compound, where they engaged in a firefight, killing Bin Laden and several of his associates. There were no American casualties.
The news of Bin Laden’s death immediately sent waves of emotion across the United States and around the world. In Washington, large crowds gathered outside the White House, chanting “USA! USA!” as they celebrated the news. Similar scenes unfolded in New York City, particularly at Ground Zero, where the Twin Towers once stood.
The killing of Bin Laden, however, does not signify the end of Al Qaeda or the threat it poses. U.S. officials have cautioned that the organization, though weakened, still has the capability to carry out attacks. The Department of Homeland Security has issued alerts, warning of the potential for retaliatory strikes by terrorists.
In his address, President Obama acknowledged the continuing threat but emphasized that Bin Laden’s death was a message to the world. “The United States has sent an unmistakable message: No matter how long it takes, justice will be done,” he said.
As the world reacts to the news of Bin Laden’s death, questions are emerging about Pakistan’s role and what it knew about the terrorist leader’s presence in its territory. The operation’s success also underscores the capabilities and resilience of the U.S. military and intelligence community after years of relentless pursuit.
Osama bin Laden’s death marks the end of a chapter in the global war on terror, but the story is far from over. As the United States and its allies continue to confront the evolving threat of terrorism, the world watches and waits to see what unfolds in this ongoing narrative.
Ok but you didn’t put this up with the original article text or compare it in any way. Just ran it through a ‘plagiarism detector’ and dumped the text you made. If you’re going to make this argument, don’t rely on a single website to check your text, and at least compare it to the original article you’re using to make your point. It looks like you’re dumping it here and expecting we all are going to go Scooby-Doo detectives or something. Mate, this is your own argument. Do the work yourself if you want to make a point.
Hey, I get what you are trying to say, but I suggest you try reading the original article. Here it is for reference.
https://www.nytimes.com/2011/05/02/world/asia/osama-bin-laden-is-killed.html
The second para starts in the original article by saying - In a late-night appearance in the East Room of the White House, Mr. Obama declared that “justice has been done”
In the ChatGPT version it says - In a late-night address to the nation from the East Room, President Obama declared “Justice has been done”.
I’ll let you draw your own conclusions
One thing that seems dumb about the NYT case that I haven’t seen much talk about is that they argue that ChatGPT is a competitor and it’s use of copyrighted work will take away NYTs business. This is one of the elements they need on their side to counter OpenAIs fiar use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT to find general information about the past or fixed concepts. You use ChatGPT the opposite way, it can tell you about the past (accuracy aside) and it can tell you about general concepts, but it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit). I feel pretty confident in saying that there’s not one human on earth that was a regular new York times reader who said “well i don’t need this anymore since now I have ChatGPT”. The use cases just do not overlap at all.
it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit)
It’s absolutely part of the lawsuit. NYT just isn’t emphasising it because they know OpenAI is perfectly within their rights to do web searches and bringing it up would weaken NYT’s case.
ChatGPT with web search is really good at telling you what’s on right now. It won’t summarise NYT articles, because NYT has blocked it with robots.txt, but it will summarise other news organisations that cover the same facts.
The fundamental issue is news and facts are not protected by copyright… and organisations like the NYT take advantage of that all the time by immediately plagiarising and re-writing/publishing stories broken by thousands of other news organisations. This really is the pot calling the kettle black.
When NYT loses this case, and I think they probably will, there’s a good chance OpenAI will stop checking robots.txt files.
This feels a lot like Elons’s “but, but, they tricked our algos to have them suggest those hateful tweets!”
Tricked. Lol. The NYT tricked a private company into stealing it’s content. True distopia.
The advances in LLMs and Diffusion models over the past couple of years are remarkable technological achievements that should be celebrated. We shouldn’t be stifling scientific progress in the name of protecting intellectual property, we should be keen to develop the next generation of systems that mitigate hallucination and achieve new capabilities, such as is proposed in Yann Lecun’s Autonomous Machine Intelligence concept.
I can sorta sympathise with those whose work is “stolen” for use as training data, but really whatever you put online in any form is fair game to be consumed by any kind of crawler or surveillance system, so if you don’t want that then don’t put your shit in the street. This “right” to be omitted from training datasets directly conflicts with our ability to progress a new frontier of science.
The actual problem is that all this work is undertaken by a cartel of companies with a stranglehold on compute power and resources to crawl and clean all that data. As with all natural monopolies (transportation, utilities, etc.) it should be undertaken for the public good, in such as way that we can all benefit from the profits.
And the millionth argument quibbling about whether LLMs are “truly intelligent” is a totally orthogonal philosophical tangent.
I understand your point, but disagree.
We tend to think of these models as agents or persons with a right to information. They “learn like we do” after all. I think the right way to see them is emulating machines.
A company buys an empty emulating machine and then puts in the type of information is would like to emulate or copy. Copyright prevents companies from doing this in the classic sense of direct emulation already.
LLM companies are trying to push the view that their emulating machines are different enough from previous methods of copying that they should be immune to copyright. They tend to also claim that their emulating machines are in some way learning rather than emulating, but this is tenuous at best and has not yet been proven in a meaningful sense.
I think you’ll see that if you only feed an LLM art or text from only one artist you will find that most of the output of the LLM is clearly copyright infringement if you tried to use it commercially. I personally don’t buy the argument that just because you’re mixing several artists or writers that it’s suddenly not infringement.
As far as science and progress, I don’t think that’s hampered by the view that these companies are clearly infringing on copyright. Copyright already has several relevant exemptions for educational and private use.
As far as “it’s on the internet, it’s fair game”. I don’t agree. In Western countries your works are still protected by copyright. Most of us do give away those rights when we post on most platforms, but only to one entity, not anyone/ any company who can read or has internet access.
I personally think IP laws as they are hold us back significantly. Using copyright against LLMs is one of the first modern cases where I think it will protect society rather than hold us back. We can’t just give up all our works and all our ideas to a handful of companies to copy for profit just because they can read and view them and feed them en masse into their expensive emulating machines.
We need to keep the right to profit from our personal expression. LLMs and other AI as they currently exist are a direct threat to our right to benefit from our personal expression.
We tend to think of these models as agents or persons with a right to information. They “learn like we do” after all.
This is again a similar philosophical tangent that’s not germane to the issue at hand (albeit an interesting one).
I think you’ll see that if you only feed an LLM art or text from only one artist you will find that most of the output of the LLM is clearly copyright infringement if you tried to use it commercially.
This is not a feasible proposition in any practical sense. LLMs are necessarily trained on VAST datasets that comprise all kinds of text. The only type of network that could be trained on only one artist’s corpus is a tiny pedagogical tool like Karpathy’s minGPT https://github.com/karpathy/minGPT, trained solely on the works of Shakespeare. But this is not a “Large” language model, it’s a teaching exercise for ML students. One artist’s work could never practically train a network that could be considered “Large” in the sense of LLMs. So it’s pointless to prevaricate on a contrived scenario like that.
In more practical terms, it’s not controversial to state that deep networks with lots of degrees of freedom are capable of overfitting and memorizing training data. However, if they have other additional capabilities besides memorization then this may be considered an acceptable price to pay for those additional capabilities. It’s trivial to demonstrate that chatbots can perform novel tasks, like writing a rap song about Spongebob going to the moon on a rocket powered by ice cream - which is surely not existent in any training data, yet any contemporary chatbot is able to produce.
As far as science and progress, I don’t think that’s hampered by the view that these companies are clearly infringing on copyright.
As an example, one open research question concerns the scaling relationships of network performance as dataset size increases. In this sense, any attempt to restrict the pool of available training data hampers our ability to probe this question. You may decide that this is worth it to prioritize the sanctity of copyright law, but you can’t pretend that it’s not impeding that particular research question.
As far as “it’s on the internet, it’s fair game”. I don’t agree. In Western countries your works are still protected by copyright. Most of us do give away those rights when we post on most platforms, but only to one entity, not anyone/ any company who can read or has internet access.
I wasn’t making a claim about law, but about ethics. I believe it should be fair game, perhaps not for private profiteering, but for research. Also this says nothing of adversary nations that don’t respect our copyright principles, but that’s a whole can of worms.
We can’t just give up all our works and all our ideas to a handful of companies to copy for profit just because they can read and view them and feed them en masse into their expensive emulating machines.
As already stated, that’s where I was in agreement with you - It SHOULDN’T be given up to a handful of companies. But instead it SHOULD be given up to public research institutes for the furtherance of science. And whatever you don’t want to be included you should refrain from posting. (Or perhaps, if this research were undertaken according to transparent FOSS principles, the curated datasets would be public and open, and you could submit the relevant GDPR requests to get your personal information expunged if you wanted.)
Your whole response is framed in terms of LLMs being purely a product for commercial entities, who shadily exaggerate the learning capabilities of their systems, and couches the topic as a “people vs. corpos” battle. But web-scraped datasets (such as Imagenet) have been powering deep learning research for over a decade, long before AI captured the public imagination the way it has currently, and long before it became a big money spinner. This view neglects that language modelling, image recognition, speech transcription, etc. are also ongoing fields of academic research. Instead of vainly trying to cram the cat back into the bag, and throttling research, we should be embracing the use of publicly available data, with legislation that ensures it’s used for public benefit.
NYT are such lawsuit trolls I could imagine this is credible.
So NYT tried to brake check OpenAi, after a road rage incident but OpenAi has a dash-cam?