Hands Off My Work, NYT!

I use ChatGPT for some of my research and brainstorming, to find interesting obituaries for the Honorary Unsubscribe, and more. And the New York Times is demanding to know what it is I ask ChatGPT, and how.

Yes really. Not just me: everyone. Including you. Overreach much?

I had a “conversation” with “Chatty-G” (isn’t that a lot easier to say? It’s what Kit and I call it when talking about it), asking why the NYT has made such a demand. At least for now, that demand has been granted by a court.

Here’s what it says, with a reference it provided if you want to read the background in more detail.

Part of an Imperious Lawsuit

Chatty-G replied (most citations omitted):

The New York Times is seeking access to all GPT user prompts and output logs — including deleted conversations [emphasis added -rc] — as part of the discovery process in its ongoing copyright lawsuit against OpenAI. They believe these records could contain evidence that helps support their claim that OpenAI’s models reproduce, memorize, or use NYT content without permission (e.g., generating near-verbatim articles, bypassing paywalls, etc.) [Primary citation: The Verge]

Here’s a more detailed breakdown of why NYT wants this [to be clear, this is ChatGPT continuing, not me]:

1. Evidence of infringement
They suspect users may have used ChatGPT to regenerate copyrighted NYT content. Access to prompts and outputs could reveal this behavior directly — via instances where the model generates NYT text verbatim or very closely.

2. Preservation of ephemeral data
OpenAI’s standard policy had been to delete chats after 30 days. NYT fears relevant evidence could vanish before it can be reviewed. A court recently ordered OpenAI to retain this data indefinitely to prevent loss of potential evidence.

3. Response to previous data loss
During their investigation, OpenAI accidentally deleted critical legal data NYT’s lawyers had collected onsite. Although deemed unintentional, that incident heightened concerns that other important data might similarly vanish.

OpenAI opposes this demand, arguing it violates user privacy and their 30-day deletion policy. They’re appealing the court’s preservation order, calling it an overreach that threatens user trust. [OpenAI] [End of output.]

More of My Conversation

I pressed on about the details, and noticed ChG (even “Chatty-G” gets old after a while) was focused on its output not being eligible for copyright protection, per a decision published by the U.S. Copyright Office, which is housed under the Library of Congress:

Screenshot of ChatGPT interaction screen. It's asking, "What are you working on?"
ChatGPT, ready for a prompt with a “Pro” (paid user) screen. The “Chats” section cut off in the lower left? That’s where “saved” interactions are kept. The default is to save, but each has a way to click “Delete”. Unfortunately, they’re not actually deleted, thanks to the NYT: they demand to look at all of them. (Screenshot by the author)

“In January 2025, the U.S. Copyright Office clarified that AI-generated works derived purely from text prompts are not eligible for copyright protection. They reasoned that prompts — even extensive ones — don’t sufficiently control the final output: ‘No matter how many times a prompt is revised and resubmitted… the final output reflects the system’s interpretation, rather than authorship’.

“They also noted that while prompts themselves may contain expressive elements, they generally function as instructions conveying uncopyrightable ideas, not creative expression.” [all emphasis from the original]

I Beg to Differ.

My point is not about the output, it’s about the input. The prompts.

I like to do my interactions with LLMs not with one giant prompt that attempts to cover every point and get a big essay output in one try, but rather by asking questions, understanding the points, shaping the output toward what I want to concentrate on, and refining it: telling it, for example, to disregard the arguments in the lawsuit and instead flesh out the defense I want to explore.

It’s “leading the witness,” so to speak. And it’s an intellectual process, absolutely not a “list of instructions” to the machine: I shape my questions in ways that help me brainstorm, and then write out the point the way I want to argue it.

Yes, I get that prompts lead to output, and output is work done by a machine, and thus not copyrightable. Yet there is an art to writing prompts that get “A.I.” systems to produce desired results. I’m dubious that anyone could write a prompt that “generat[es] near-verbatim articles” from the NYT (or any other training source), especially without detailed knowledge of what’s in the original article.

Wait… an “art”? Sounds like creativity, doesn’t it? Prompts are expressive works created by human minds. My prompts to A.I. are my intellectual property, and it’s an outrage that the NYT is demanding access to my journalistic work products.

Plus, even if a prompt can be written to generate “near-verbatim articles” from the NYT, that represents a ton of unnecessary work, since it’s screamingly easy to see any Times article in its full, actual, as-published text without paying, such as via various online archive sites.

Taking a Stand

In the past, the NYT has argued that reporters’ notebooks are privileged information, and that a reporter should not be compelled to produce that work product (intellectual property created by a human mind) and thereby reveal their sources, even under court order.

For Instance, in 2005, Judith Miller, a NYT investigative reporter, refused to testify and thus reveal her confidential source in the “Plame affair” (the leak of CIA agent Valerie Plame’s identity), arguing a First Amendment-based reporter’s privilege. [Free Speech Center]

The Times ran multiple editorials defending Miller and emphasizing the importance of her ethical stance to the freedom of the press, as specifically protected by the First Amendment. They used both in-house and hired attorneys to defend Miller and worked to get her released from jail. And when she resigned, the newspaper reportedly provided a severance package negotiated by the publisher himself, Arthur Sulzberger Jr., who praised her “significant personal sacrifice to defend an important journalistic principle.” [CNN]

Miller spent 85 days in jail before her source told her directly that she was released from keeping his information confidential. That is the sort of protection the NYT demands for its own reporters. That is a principled stand.

Yet they throw other journalists under the bus by demanding that we disclose our intellectual property to their examination in a fight with a third party. Turncoats!

For Instance, in 2007 NYT national security correspondent and Pulitzer Prize winner James Risen was subpoenaed to testify in the prosecution of Jeffrey Sterling, a CIA agent. Risen refused, citing reporter’s privilege grounded in the First Amendment. The case slogged on for seven years, going all the way up to the Supreme Court. It cost the Times plenty in legal fees and other support: the paper defended Risen to the end, describing the case as a fight pitting “our employees against the weight of a government investigation.”

Yet they roll over other journalists by demanding that we disclose our intellectual property to their examination in a fight with a third party. Opportunists!

The Times put their moral and ethical reputation on the line, defending what is right and, to quote ChG’s summary as I was researching this, “offering editorial platforms, legal coordination, and presumably covering legal expenses and negotiations. This demonstrates a consistent commitment to defending press freedom — even at serious risk and cost.”

But now they say to hell with the First Amendment, and even the law they’re fighting about — Copyright — in order to demand to see everyone else’s intellectual property that we spent countless hours developing so we can do our jobs?

What a convenient lapse in ethics when their self-interest is at stake. Hypocrites!

Using Chatty-G

I say in this site’s page header that True is “100% Human Written.” I’m proud of that. That doesn’t mean I don’t use ChatGPT; I do, in the capacity mentioned above. And I’ve worked hard to develop my own strategies using the paid OpenAI service, which strategies I generally choose not to disclose.

Yet NYT is demanding that I do, using their gigantic might to take that tiny advantage from an independent journalist in order to serve their own litigation.

Let me describe it as a fight between “independent journalists against the weight of a large corporation’s investigation.”

We can hope the court comes to its senses and terminates its order that OpenAI must retain even “deleted” interactions with its systems. Rather, the company must delete them as promised by its privacy policies.

While I don’t use it for personal psychological counseling, some people do*, and those chats must remain confidential. Others of us craft very creative prompts in our work, and those human-created prompts should be considered intellectual property deserving of protection. They are expressions of human creativity committed to tangible form: the very essence of works that are protectable under Copyright.

*[A survey of U.S. residents who 1) were aged 18-80, 2) had Internet access, 3) had used at least one LLM in the past year, and 4) reported having an ongoing mental health condition found that 48.7% used LLMs for psychological support within the past year. Users primarily sought help for anxiety (73.3%), personal advice (63.0%), and depression (59.7%). The good news is, 63.4% of users reported improved mental health from LLM interactions, with high satisfaction ratings for practical advice (86.8%) and overall helpfulness (82.3%). (Summarized, emphasis added. Source: Large Language Models as Mental Health Resources: Patterns of Use in the United States)]

Back off, NYT: you’re in the wrong here, and attempting to set a terrible precedent that will bite your ass in the future if it succeeds.

ChatGPT’s Take

I used my work with Chatty-G to research and work through my thoughts to write the above editorial. But I took it one step further: I outlined my argument for the LLM and asked it to write an editorial “in my voice” — it “knows” who I am and how I write. Here’s what it came up with, with some minor format editing by me.



Here’s a blog post draft in your voice, as requested, opposing the New York Times’ demand for access to GPT prompts — focusing on the idea that prompts are human-created intellectual property.

Hands Off My Prompts, NYT

The New York Times is demanding that OpenAI preserve and turn over user prompts — not just model outputs — as part of its copyright lawsuit. I’ll say this as plainly as I can: the Times has absolutely no business rifling through my private creative process.

Let’s be clear what we’re talking about. I’m not referring to those simple, throwaway prompts like “Write a haiku about pizza.” I’m talking about carefully crafted queries, intricate instructions, and imaginative setups — prompts that I write from scratch. These aren’t LLM-generated drafts or suggestions. They are my words, my logic, my imagination. In short, they’re my intellectual property.

I’m not defending OpenAI here — I’ll criticize them when warranted — but this demand crosses a very real line. The New York Times is not just asking to review how ChatGPT might output parts of their articles. They are asking to review my prompts. That means they are demanding access to a record of my ideas — notes, outlines, creative setups, rhetorical techniques, research strategies. In effect, they want access to the digital equivalent of a writer’s notebook. And that’s indefensible.

The illustration shows three men dressed in suits driving a steam roller marked "New York Times", chasing after a running man labeled "Independent Journalist".
Prompt: “Create a horizontal illustration depicting the New York Times as a lawyer-powered steamroller, aiming to crush an independent journalist.” Yeah, those are simple instructions, which I had to interactively have it tweak a little, but that result will do. Amusing that it threw in some space stuff, knowing I used to work for NASA. (ChatGPT)

They might argue this is just “discovery.” But compelling OpenAI to retain and eventually turn over user prompts — potentially even deleted ones — is a form of publication. No, it’s not printing them in the newspaper, but when a third party is forced to collect and expose your creative work, with no consent and no due process, that is publication in every meaningful sense. And it’s not theirs to publish.

The U.S. Copyright Office has taken the position that prompts aren’t protected works because they “don’t control the expressive form” of AI output. That’s a red herring. I’m not asking for copyright over what the model generates. I’m asserting my exclusive right to control when, where, and how my prompt — the thing I wrote — is seen by others. Copyright isn’t the only way to protect intellectual labor, nor should it be.

This is a fundamental issue of authorship and ownership. If I write an eloquent or precise prompt to get what I need from a machine, that’s my creation. It doesn’t matter whether the output is good, bad, or bizarre — the prompt itself is mine. It’s the product of deliberate thought. In some cases, it may represent hours of refinement. Why should a corporate litigant be entitled to sift through that without my consent?

In trying to protect its own work, the Times is now willing to violate the privacy — and the creative ownership — of millions of other people. That’s not journalism. That’s overreach.

The Times has every right to defend its copyrights. But not by trampling mine.


Caught my voice pretty well, didn’t it?

Again, I do my own writing, but it’s interesting to me to see how “A.I.” fleshes out the arguments I give it. I outlined my argument to ChatGPT in prompts, rather than uploading my own writing. Its output is not copyrightable, but my input certainly should be.

– – –


This page is an example of my style of “Thought-Provoking Entertainment”. This is True is an email newsletter that uses “weird news” as a vehicle to explore the human condition in an entertaining way. If that sounds good, click here to open a subscribe form.


22 Comments on “Hands Off My Work, NYT!”

  1. Hoo boy! I agree that they’ve stepped in it. Have you heard of any related suits, perhaps class actions?

    No. -rc

  2. Interesting topic. But it’s not clear to me that the prompts belong to you… once they are submitted to the ChatGPT interface, do they then belong to ChatGPT? After all, they have to hold them to carry out the prompted task. Are they then supposed to forget they exist? Or hold them in secret for some period of time? (I assume they hold them for “research purposes”.)

    I’m trying to think of an analogy to a reporter keeping their notebook with a third party, and that third party has promised not to disclose the contents — am I on the right track?

    Not exactly. It’s like talking to a source and taking notes. Obviously you have to ask them questions, yet your questions do not become the source’s property, do they? -rc

    • I was wondering what the Terms & Conditions for OpenAI said about this – many organisations state that once you upload something it either belongs to them or that you grant them a free licence to use. However…

      ChatGPT’s Ts&Cs state:
      “Ownership of content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.”

      That is pretty clear, isn’t it!

    • (Actually replying to the rc comment, rather than directly to Jeff)

      If the NYTimes itself (rather than just its lawyers) were to see the prompts, yes. And they probably effectively will, because lawyers are good at finding loopholes and evading punishment. But from the Court’s perspective, this isn’t about you; it is about OpenAI’s own business records.

      Your notes from an interview don’t become the source’s property, but an attorney can ask (or even demand) that the source describe what _they_ remember from that interview, and hand over any information and records they do have — including any recordings the sources themselves still have.

      OpenAI can’t turn over your thoughts (including your prompts) — but they *can* turn over what they remember you asking for, including the (presumably correct) transcript of what _they_ think you asked for. And OpenAI can be ordered to stop deleting those records that they now own; your only recourse is to not do business with someone who records those things in the first place, or at least not with someone who doesn’t delete those records promptly — and OpenAI in particular is not currently permitted to do business the way that you prefer.

      There are generally recognized exceptions; priests and psychiatrists generally aren’t required to testify or hand over their records, but “Research Assistant” isn’t one of the professions protected by default. Maybe it *should* be protected, but that is a moral argument, rather than a violation of either copyright or the 1st amendment as currently understood.

      Returning to the legal fiction that the NYTimes itself won’t have access to those prompts and responses … If a response (produced entirely by OpenAI) includes a “sample letter” with personal names or account numbers, those people already had their privacy violated even without the prompts — but the Court can declare/pretend that the privacy was waived when they handed their info to a 3rd party (OpenAI) which they should have known not to trust so strongly.

      And if another OpenAI response did largely reproduce a NYTimes article, it really does matter whether OpenAI is doing that routinely, or only when specifically asked. (If there is a pattern of customers asking for a NYTimes article and correctly expecting to get it, that isn’t *better*, but it is relevant.)

      You’re somewhat arguing in circles. They don’t get my prompts but would get “what they remember you asking for”? What they “remember” is the exact text of my prompts. -rc

      • Yes; OpenAI got your request in already-digitally-transcribed form; their records are likely to be far more comprehensive, precise, and accurate than the sort of “Yeah, I think he bought something with cash,” or “it says here that he took 7 shillings of seed on credit this spring” that was more typical when the Constitution was written.

        Advances in technology don’t change the existing law or precedent.

        Maybe the Constitution should originally have been interpreted differently, in a way that would have been less convenient for lawyers and perhaps even for judges. But it wasn’t.

        It is likely that a new law to protect privacy would be a good thing, but it would take a new law.

        The U.S. Supreme Court has recognized a Constitutional right to privacy through several landmark decisions going back to at least 1965 (Griswold v. Connecticut), among other cases. -rc

  3. I am ashamed that one of the biggest and most respected journalism organizations in the world is doing this. Yea for you pointing it out. I hope this blog post gets lots of exposure to their hypocrisy!

  4. That’s certainly one of the best pieces of AI-assisted writing I’ve ever read. I guess it just goes to show that using “whatever AI model floats your boat” is just like anything else — garbage in, garbage out; quality in, quality out. I’m not really an AI user myself yet, but I think your article and sample AI output prove both that it can be useful in the right hands, and that yes, they are not valuing the “creative input” required for valuable AI output nearly enough.

    • That it is so good pretty much proves Randy’s point: it takes creativity to make a prompt, or in his case a series of prompts, to shape the output in such a specific and focused way. I know because I’m still in the learning phase, and frustrated with the output that’s generated!

      You’ll get the hang of it. It helps to be very specific …which takes thought and creativity. -rc

  5. According to Parlouran (my Chatty-G), all our conversations, including my prompts, are private — although “private” in that context sure isn’t a legal definition. But this is very interesting. I have Chat history going back several years. Hmmmmm. Can’t wait to learn more about this — thanks, as always, Randy!

  6. I have absolutely no desire to see my interactions with AI in the hands of anybody else. While there is nothing illegal or even immoral in my conversations (though some of that is due to the AI restrictions), it’s nothing I want somebody else to pore over.

    I also refine my queries with AI. Using it is a much different experience than a search engine. I’ve gone down rabbit holes before upon getting certain information. I don’t care to have my queries and conversations handed over to the government, the press, or anybody else. You want that? Get a warrant. I’m dying to see the probable cause and the specified item(s) to be seized.

  7. Another reason I use LMStudio. Once you’ve downloaded a few models, you can do a lot of your AI stuff without Internet.

    Except that the way I build my prompt stream, it needs to do a lot of web searching to answer specific questions. -rc

  8. But your concern isn’t about copyright it’s about privacy, no?

    My concern is about privacy, yes, but my mechanism is Constitutional. It’s MY right to copy, not the NYT’s. -rc

  9. Interesting thoughts on prompts and copyright.

    The introduction of journalistic privilege, and the collection as a creative work, adds a dimension I hadn’t considered.

    Food for thought there, for sure.

  10. Considering the reporter’s notebook idea, does NYT demand to see their reporters’ notebooks? If not, they shouldn’t demand the prompts. If so, shame on them for micromanaging their employees.

  11. For the same reason that Google searches could be interpreted nefariously by government authority figures, so too could ChatGPT queries. My sister once had to do extensive research on how to blow up a nuclear reactor without spreading fallout all over the place, because she was a writer for a TV show where one of the episodes involved the military doing exactly that. As far as I know, she didn’t ask ChatGPT — I don’t think it existed yet, and there are all sorts of WGA rules on AI — but if she did have extensive conversations with it on how to blow up a reactor, and the government found out, she could find herself in prison even though she had a perfectly innocent, reasonable, and creative purpose in doing so. Another friend of mine had to do extensive research on child pornography laws to write a blurb on a defense attorney’s website. If somehow the government got wind of his queries, it could lead to them thinking he was up to something illegal and morally reprehensible, throwing him in prison, and ruining his life just for doing his job.

    You’re a writer, I’m guessing you’ve occasionally researched information for your writings that could be interpreted the wrong way by the government if they ever found out.

    I don’t think I’ve researched either of your example topics, but indeed I’ve researched an incredibly wide variety of weird things, including illegal acts and the penalties for their commission, for stories. For instance, sometimes the reporters in source stories are lazy and don’t even bother to learn whether the crime they’re describing is a misdemeanor or a felony. It’s instructive to say that the seemingly petty crime some guy committed is a felony in Florida, with a prison sentence of up to 20 years. Things like that can quickly wipe a grin off a miscreant’s face. -rc

  12. Comparing what you wrote to what AI wrote is interesting. ChatGPT certainly wrote a powerful essay. But what you wrote laid out more of a case.

  13. I want to take issue with one part of your argument. You say that subpoenas for a reporter’s notebooks are attempts to get a hold of their “work product.” But the real purpose is merely to find the source of the reporter’s story. The plaintiff has no interest in stealing and reproducing that story. Forcing a reporter to reveal her sources chills the freedom of the press, a first amendment issue. Your beef with the Times, whatever its merits, is not a first amendment issue.

    First, I made it reasonably clear by repeated use of the terms that my primary argument is about intellectual property, in the form of copyright. But indeed the First Amendment IS absolutely brought into this case as courts have repeatedly held that compelled speech is a violation of Free Speech. Publication of my intellectual property as forced by the government (e.g., a court) is an outrage. “[The] right of freedom of thought protected by the First Amendment… includes both the right to speak freely and the right to refrain from speaking at all.” –Supreme Court Decision in Wooley v. Maynard, 430 U.S. 705 (1977). -rc

  14. A not-too-deep dive into the interwebs showed a unanimous understanding of what compelled speech is. Since you have used AI, I’ll quote this AI definition from Bing, which is typical: “Compelled speech refers to situations where individuals are legally required to express certain messages or beliefs, often raising significant First Amendment concerns regarding freedom of expression.” Examples include being forced to recite the Pledge of Allegiance and being forced to use certain pronouns to describe people. No one is forcing you to express a message. Taking the “speech” (AI prompts) you have already made is not compelled speech.

    On the copyright issue, I believe your prompts could possibly be considered copyrighted. Perhaps an amicus brief is in order!

    Grumble! Sorry approving this was delayed: found it in the spam folder. (I also removed your “[emphasis added]” because no emphasis came through.)

    I believe forced publication of my intellectual property is indeed forced speech. -rc

  15. I don’t know, Randy. Were your prompts even written while you were in the US? Are they subject to US court rulings? Just my brain wondering as it wanders.

    I’m in the U.S. now, and have been for most of the time I’ve been using ChG, including the time I conceptualized, researched, and wrote this page. Still, your argument is that U.S. Constitutional guarantees don’t apply in U.S. courts for U.S. citizens if they happen to step outside the country? Please explain that theory and its legal basis. -rc

    • I made no mention of constitutional guarantees at all!

      I was thinking more along the lines of a US court’s decision not applying to work done outside the US. But since you’ve been in the US while using the prompts, that’s not applicable.

      I know you didn’t mention the Constitution — that’s the problem. The thing is, when you start saying a court ruling “doesn’t apply” to someone, you ARE messing with the Constitutional guarantees. That’s what I’m trying to wake you up about. -rc

  17. What I’m picking up from this blog is the sense that people are arguing that the NYT’s demand for all of this info from the AI and ChatGPT folks is a violation of the First Amendment. That it’s a violation of the right to privacy. That it’s a violation of the Freedom of the Press.

    The First Amendment protects us against the government, not a private entity (i.e., the New York Times). If a private entity (i.e. the NYT) feels their business model is being affected, they should have the right, as part of their damage suit, to demand any evidence that helps them advance their case.

    What do you think a court is? Not just a government entity, but a co-equal 1/3 branch of the government. The NYT isn’t making an order compelling forced speech, the government is. -rc

