Regular ChatGPT users (among them the author of this article) may or may not have noticed that OpenAI's hit chatbot lets users enter a "temporary chat" that is designed to wipe all the information exchanged between the user and the underlying AI model as soon as the user closes the chat session.
In addition, OpenAI also lets users manually delete prior ChatGPT sessions from the left sidebar on the web and desktop/mobile apps by left-clicking or control-clicking them, or by holding down/long-pressing on them in the selector.
However, this week, OpenAI found itself facing criticism from some of said ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated.
Instead, OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until yesterday, June 5.

As AI influencer and software engineer Simon Willison wrote on his personal blog: "Paying customers of [OpenAI's] APIs may well make the decision to switch to other providers who can offer retention policies that aren't subverted by this court order!"
The order, embedded below and issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis," including chats deleted by user request or due to privacy obligations.
While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, until it issued a blog post and an FAQ describing the legal mandate and outlining who is impacted.
However, OpenAI is placing the blame squarely on the NYT and the judge's order, saying it believes the preservation demand to be "baseless."
OpenAI clarifies what's happening with the court order to preserve ChatGPT user logs — including which chats are impacted
In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company's position and stated that it was advocating for user privacy and security against an overbroad judicial order, writing:
“The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users.”
The post clarified that ChatGPT Free, Plus, Pro, and Team users, along with API customers without a Zero Data Retention (ZDR) agreement, are affected by the preservation order, meaning that even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu subscribers, as well as API clients using ZDR endpoints, are not impacted by the order, and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and accessible only to a small number of legal and security personnel.
"This data is not automatically shared with The New York Times or anyone else," Lightcap emphasized in OpenAI's blog post.
Sam Altman floats new concept of 'AI privilege' allowing for confidential conversations between models and users, similar to speaking with a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing:
“recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users’ privacy; this is a core principle.”
He also suggested a broader legal and ethical framework may be needed for AI privacy:
“we have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation.”
“imo talking to an AI should be like talking to a lawyer or a doctor.”
"i hope society will figure this out soon."
The notion of AI privilege, as a potential legal standard, echoes attorney-client and doctor-patient confidentiality.
Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman's remarks indicate OpenAI may increasingly advocate for such a shift.
What comes next for OpenAI and your temporary/deleted chats?
OpenAI has filed a formal objection to the court's order, requesting that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate.
Judge Wang, in a May 27 hearing, indicated the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs. OpenAI was ordered to submit that proposal by today, June 6, but I have yet to see the filing.
What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments
While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions within organizations.
Those who oversee the full lifecycle of large language models, from data ingestion to fine-tuning and integration, will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint and how to isolate, log, or anonymize high-risk interactions.
Any platform touching OpenAI APIs must validate which endpoints (e.g., ZDR vs. non-ZDR) are used and ensure data handling policies are reflected in user agreements, audit logs, and internal documentation.
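One way to start that validation is with an internal audit script. The sketch below is purely illustrative (the helper name, regex, and record fields are my own invention, not an OpenAI tool): it scans source or config text for OpenAI API hosts and emits a review record per call site. Note that ZDR coverage is a contractual status, not something detectable from code, so the script can only flag endpoints for manual confirmation.

```python
import re

# Hypothetical audit helper: find OpenAI API call sites in source/config
# text and flag each for manual ZDR review. Whether a project is covered
# by a Zero Data Retention agreement cannot be inferred from code alone.
OPENAI_HOST = re.compile(r"https://api\.openai\.com/v1/(\w+)")

def flag_openai_endpoints(source_text: str) -> list[dict]:
    """Return one review record per OpenAI API endpoint reference found."""
    findings = []
    for match in OPENAI_HOST.finditer(source_text):
        findings.append({
            "endpoint": match.group(0),   # full URL as it appears in the text
            "resource": match.group(1),   # e.g. "responses", "embeddings"
            "zdr_covered": None,          # unknown until checked against the org's contract
            "action": "confirm ZDR status and retention policy with legal/vendor team",
        })
    return findings

config = 'base_url = "https://api.openai.com/v1/responses"'
for finding in flag_openai_endpoints(config):
    print(finding["resource"], "->", finding["action"])
```

In practice, a check like this would run in CI over the whole repository, with the results fed into the audit log and internal documentation the paragraph above describes.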
Even when ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (e.g., analytics, logging, backup) don't inadvertently retain transient interactions that were presumed short-lived.
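One common mitigation on the application side is to scrub conversation content before it ever reaches analytics or backup pipelines. The sketch below is a minimal example of that idea, not an OpenAI feature; the salt value and field names are assumptions for illustration. It replaces raw prompt/response text with a salted hash, so downstream systems can still count and correlate interactions without retaining content users expected to be short-lived.

```python
import hashlib
import json

# Hypothetical deployment-specific salt; would be rotated and stored in a
# secrets manager in a real system, never hard-coded.
SALT = b"rotate-me-per-deployment"

def scrub_interaction(record: dict) -> dict:
    """Return a copy of an interaction event with content fields redacted.

    Only a short salted digest survives, enough to deduplicate or correlate
    events without keeping the underlying text.
    """
    scrubbed = dict(record)  # leave the caller's record untouched
    for field in ("prompt", "response"):
        if field in scrubbed:
            digest = hashlib.sha256(SALT + scrubbed[field].encode()).hexdigest()
            scrubbed[field] = f"redacted:{digest[:12]}"
    return scrubbed

event = {"user_id": "u-123", "prompt": "draft a layoff memo", "response": "..."}
print(json.dumps(scrub_interaction(event)))
```

A middleware like this would sit between the chat application and its logging/analytics sinks, so that "temporary" truly means temporary everywhere downstream, whatever the upstream provider retains.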
Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI's backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like "temporary chat" that no longer function as expected under legal preservation.
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it's a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of "AI privilege," OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act between legal compliance, enterprise assurances, and user trust, while facing louder questions about who controls your data when you talk to a machine.