
Key Points
- OpenAI pulled the ‘discoverable chat’ feature after backlash
- Thousands of chats were indexed on Google unintentionally
- The feature required users to opt in via a small checkbox
- OpenAI says it was a short-lived experiment, now discontinued
OpenAI is scrubbing ChatGPT conversations from Google search results amid mounting public backlash over a feature that made shared chats discoverable online.
The issue gained momentum after Fast Company revealed that thousands of ChatGPT conversations had been indexed by Google. These were user-generated public links created through a sharing option in ChatGPT.
What many didn’t realize was that a tiny checkbox, “Make this chat discoverable,” was what allowed those conversations to surface in search engine listings.
OpenAI is removing ChatGPT conversations from Google https://t.co/eSsYWGu4fo
— Engadget (@engadget) August 1, 2025
While the indexed chats did not display usernames or emails, some included sensitive or identifying information through the context of the conversation itself. The result: widespread privacy concerns and a wave of criticism.
The sudden change in visibility caught many off guard, highlighting how crucial AI product design is to protecting user data.
Other major players like Apple and Meta are also navigating the fine balance between AI functionality and user trust, with major investments in AI infrastructure aimed at minimizing similar risks.
Your ‘private’ ChatGPT shares? Google’s indexing them all.
70K+ conversations now publicly searchable – exposing business strategies, internal docs, even company secrets.
OpenAI doesn’t block search engine crawlers. That ‘harmless link’ you casually shared could reveal… pic.twitter.com/CsQ2uVlOdZ
— Travis Wright (@teedubya) July 31, 2025
The problem started with a small checkbox
At the heart of this privacy mishap was a simple checkbox. When users created a public link to a ChatGPT conversation, they were given the option to tick a box labeled “Make this chat discoverable.” Beneath that was a small gray line explaining that it would allow the chat to appear in web searches.
It was an opt-in feature, but not one that users fully understood. Some intended to share chats in private messages or save them for personal reference. They weren’t expecting their shared content to be visible to the entire internet.
Why did OpenAI race to have all of these user chats removed from Google? This is why. Because the public can now see at scale how ChatGPT speaks to each user. It breaks the illusion that these users are mentally ill or psychotic. It shows OpenAI’s role in creating it. pic.twitter.com/mdkKmK41ZH
— Kristen Ruby (@sparklingruby) August 2, 2025
OpenAI’s Chief Information Security Officer, Dane Stuckey, originally defended the feature, stating that the labeling was “sufficiently clear.” But as more indexed chats came to light and user frustration grew, the company had to reconsider.
The experience mirrors similar challenges faced by other AI giants. Google, for example, is pushing boundaries with its Opal AI app builder, but is also under pressure to ensure safe and private user interactions. The incident with ChatGPT underscores how easily intentions can be misunderstood without a crystal-clear UX design.
OpenAI reverses course after user outrage
In response to the uproar, OpenAI removed the discoverability feature entirely. Stuckey admitted that the feature created “too many opportunities for folks to accidentally share things they didn’t intend to,” confirming its complete removal.
This isn’t the first time users have raised privacy questions about AI tools, but it’s one of the more visible instances where design choices led to unintended consequences. Although no internal data was leaked, the public indexing made it feel like an accidental exposure, especially for those unaware they had opted in.
Users had to specifically check a box to have them indexed but still not a great look. There were about 5K indexed as of yesterday -> OpenAI removes a ChatGPT feature that let users make their conversations discoverable by Google and other search engines, calling it a… pic.twitter.com/FD6QFQ9Pj7
— Glenn Gabe (@glenngabe) August 1, 2025
OpenAI says it is also working with search engines like Google to fully remove previously indexed ChatGPT conversations. This cleanup may take time, but it signals the company’s intent to prioritize user privacy in future releases.
Now, when users create a shared link to a ChatGPT conversation, it will no longer appear in search results, helping to prevent accidental public exposure.
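For readers curious about the mechanics, search engines generally stay away from pages that carry a noindex signal, delivered either as an X-Robots-Tag response header or as a robots meta tag in the HTML. OpenAI has not detailed exactly how it now keeps shared links out of search results, so the short Python sketch below is only an illustration: it checks a page for either signal, and the placeholder URL and function name are hypothetical.

```python
import urllib.request

# Hypothetical placeholder for a shared-chat URL, used purely for illustration.
SHARED_URL = "https://example.com/share/abc123"

def has_noindex_signal(url: str) -> bool:
    """Return True if the page asks search engines not to index it,
    either via an X-Robots-Tag header or a robots meta tag in the HTML."""
    req = urllib.request.Request(url, headers={"User-Agent": "noindex-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        header = resp.headers.get("X-Robots-Tag", "") or ""
        body = resp.read().decode("utf-8", errors="replace").lower()
    # Header-level directive, e.g. "X-Robots-Tag: noindex"
    if "noindex" in header.lower():
        return True
    # Crude meta-tag check, e.g. <meta name="robots" content="noindex">
    return 'name="robots"' in body and "noindex" in body

if __name__ == "__main__":
    print("noindex signal present:", has_noindex_signal(SHARED_URL))
```

A real check would also need to account for robots.txt rules and pages already cached by a search engine, which is why removing previously indexed conversations requires working with the search providers themselves.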
What this means for ChatGPT users and the AI industry
The removal of the feature points to a broader challenge in AI product design: balancing transparency, usability, and privacy. While sharing conversations can be useful for collaboration, research, or learning, any tool that deals with user-generated content must tread carefully.
This event serves as a reminder that small UI decisions, like a checkbox, can have large-scale consequences if misunderstood. As AI tools like ChatGPT become embedded into daily workflows, product teams must anticipate not just how features work, but how users perceive and interact with them.
Other AI innovations, like Gemini 2.5’s deep thinking model or Veo 3’s video generation tool, are exciting examples of what AI can do, but they also demand thoughtful implementation to avoid misuse or confusion.
For OpenAI, the response suggests a shift toward more cautious feature deployment, especially when it involves public visibility. And for users, it’s a wake-up call to review the privacy settings of the tools they use, no matter how advanced or user-friendly they appear.