OpenAI Ends ChatGPT Users' Option to Index Chats on Search Engines
GOOGLE SEARCH RESULTS AND THE CHATGPT PLATFORM, AUG 2 – OpenAI removed the feature after nearly 4,500 conversations became publicly searchable, many through unintentional opt-ins, raising concerns about data privacy and user safety.
- OpenAI removed the ChatGPT feature that let users share conversations publicly, which had caused those chats to appear in Google search results.
- This removal followed a Fast Company report revealing thousands of public chats, including sensitive content, were indexed by Google, prompting privacy concerns.
- OpenAI Chief Information Security Officer Dane Stuckey explained that the feature led to numerous unintentional shares, prompting the company to begin removing indexed chat content from search engines.
- Fast Company found nearly 4,500 shared ChatGPT conversations in search results, including mental health disclosures, job evaluations, and proprietary code.
- The feature’s removal highlights challenges in ensuring users understand sharing risks and raises ongoing questions about AI privacy safeguards.
21 Articles
ChatGPT Chats “Leaked” in Google Search After Discoverable Feature Misfires
(Reclaim The Net)—Private conversations with ChatGPT have been turning up in Google search results, raising alarm over how easily personal information can slip into the public domain when AI tools are used for sensitive discussions. The problem surfaced when OpenAI tested a “discoverable” setting that let users deliberately share chats online. Anyone who ticked the box marked “make this chat discoverable” was told it would “be shown in web searc…
Thousands of private ChatGPT conversations found via Google search after feature mishap
OpenAI recently confirmed that it has deactivated an opt-in feature that shared chat histories on the open web. Although the functionality required users' explicit permission, its description may have been too vague, as users expressed shock after personal information from their chats appeared in Google search results.
Despite constant warnings, users seem to ignore the dangers of sharing information with AI. Such incidents are well known and accumulate over time. A new one has come to light, and...
Coverage Details
Bias Distribution
- 43% of the sources lean Right