OpenAI pulls back ChatGPT sharing feature amid privacy concerns over accidental data exposure
OpenAI has quickly removed a feature that allowed users to make their ChatGPT conversations searchable via search engines such as Google, citing privacy and security concerns. The move follows immediate backlash after users reported that sensitive or personal discussions were being indexed publicly.

Dane Stuckey, OpenAI’s chief information security officer, announced the rollback on Thursday via social media. He explained that the feature had been a short-lived experiment designed to help users share helpful or insightful conversations with a broader audience, but that the company concluded the risk of accidental data exposure outweighed the benefits.

“We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google,” Stuckey wrote. “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option.”

The feature required users to actively opt in by checking a box labeled “make this chat discoverable,” accompanied by a note that the chat would appear in web searches. While OpenAI anonymized shared conversations to prevent direct identification, the potential for unintended disclosures remained high.

The issue gained attention after newsletter writer Luiza Jarovsky shared on X that personal and sensitive exchanges with ChatGPT were appearing in search results. She highlighted examples of users discussing mental health struggles and experiences with harassment, and even using the chatbot for informal therapy. Once shared, these conversations were accessible to anyone conducting a web search.

Despite the anonymization, users expressed concern that the opt-in process could be overlooked or misunderstood, especially by people unfamiliar with the implications of public indexing. Many pointed out that the warning label might not be enough to prevent accidental sharing.

In response, OpenAI confirmed it is actively working to remove previously indexed content from search engines, and said the change would reach all users by the following morning.

The incident underscores the growing challenge of balancing innovation with user privacy as AI tools become more integrated into daily life. OpenAI’s swift reversal highlights the company’s willingness to minimize risk, even when that means scrapping a feature early in its lifecycle.
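For readers curious about the mechanics of de-indexing, search engines generally honor a "noindex" directive delivered through an X-Robots-Tag HTTP header or a robots meta tag, and removing pages that are already indexed typically also involves the engines' own removal tools. The sketch below is a general illustration under those assumptions, not OpenAI's actual process; the URL is a placeholder and the helper function is hypothetical. It simply checks whether a public page asks crawlers not to index it.

```python
# General illustration only: detect a standard "noindex" signal on a page.
# This is NOT OpenAI's tooling; the URL below is a placeholder.
import re
import urllib.request


def has_noindex(url: str) -> bool:
    """Return True if the page signals 'noindex' to search-engine crawlers."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        # HTTP-level directive: "X-Robots-Tag: noindex"
        if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
            return True
        body = resp.read(200_000).decode("utf-8", errors="replace")

    # HTML-level directive: <meta name="robots" content="noindex">
    # (simple heuristic; a real crawler parses the HTML properly)
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        body,
        re.IGNORECASE,
    )
    return bool(meta and "noindex" in meta.group(1).lower())


if __name__ == "__main__":
    # Placeholder URL for demonstration purposes.
    print(has_noindex("https://example.com/shared-chat"))
```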