In recent days, a news story has spread widely, raising serious concerns about the security of our interactions with artificial intelligence. Thousands of ChatGPT conversations were indexed by Google, making them publicly accessible and setting off alarm bells around digital privacy. But what exactly happened? And more importantly: how can we protect ourselves?
In this article, we’ll break down the case, share practical tips to avoid data leaks, and explain how BBLTranslation, through our specialized unit BBL IA – Intelligent Linguistic Solutions, offers guidance and protection in the realm of generative AI.
What Happened? The “Make this chat discoverable” feature
The root of the problem was a seemingly innocent feature introduced by OpenAI: the ability to make a conversation “public” via a shareable link, activated by selecting the “make this chat discoverable” option.
When users ticked this checkbox, the conversation became indexable by Google and other search engines.
The result? Over 4,500 conversations were published online, many containing names, phone numbers, business information, or sensitive questions — all exposed without the average user even realizing it.
It wasn’t a bug; it was a perception error
It’s important to emphasize: this wasn’t a technical glitch. The feature was enabled voluntarily by users, albeit often without understanding its implications.
This makes the incident even more serious: we can’t simply trust the technology — we need digital education and linguistic awareness.
What OpenAI did
After being alerted, OpenAI removed the “make this chat discoverable” option and worked with search engines to de-index the already published conversations.
However, some conversations are still visible in Google’s cache or stored by archival services like the Wayback Machine.
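If you want to verify whether a specific shared link survives in an archive, the Internet Archive exposes a public availability endpoint that returns JSON describing any stored snapshot. Here is a minimal Python sketch that builds the query URL for that endpoint (the shared link below is a placeholder, not a real conversation):

```python
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def wayback_check_url(shared_link):
    """Return the Wayback Machine availability-API URL for a link.
    Fetching this URL returns JSON; an empty 'archived_snapshots'
    object means no public snapshot of the link is stored."""
    return WAYBACK_API + "?" + urlencode({"url": shared_link})

# Placeholder share link, for illustration only
print(wayback_check_url("https://chat.openai.com/share/example-id"))
```

Fetching the printed URL (with a browser or any HTTP client) tells you whether a snapshot exists and, if so, where it lives.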
Why this is a serious risk for businesses
For professionals working with confidential information — law firms, international companies, public institutions, or healthcare organizations — this kind of exposure is unacceptable.
Just one sentence taken out of context can ruin a project, derail a negotiation, or damage a company’s reputation.
Conversations with language models may include:
- Confidential contractual terms
- Marketing or product strategies
- Client or supplier data
- Sensitive legal or technical information
- Ideas for training custom GPT models
That’s why it’s crucial to rethink how we manage generative privacy.
5 Tips to protect your conversations with AI
1. Avoid any kind of public sharing
Never enable the option to make a conversation public or visible. If you need to share generated content, only copy the necessary fragments and paste them into a secure document.
2. Review previously shared links
Go to ChatGPT → “Shared Links” and manually delete any shared conversations. If you’ve posted them on websites or social media, remove them there too.
3. Check if your chats have been indexed by Google
Use Google with this formula:
site:chat.openai.com/share/ [your name, company, or sensitive terms]
If you find your own content indexed, you can request its removal via Google Search Console.
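This check is easy to script if you run it regularly. Below is a minimal Python sketch (the helper name and the example company are our own, for illustration) that URL-encodes the `site:` query into a Google search link you can open directly:

```python
from urllib.parse import quote_plus

def build_index_check_url(terms):
    """Build a Google search URL restricted to ChatGPT's public
    share pages, combined with your own sensitive terms."""
    query = "site:chat.openai.com/share/ " + " ".join(terms)
    return "https://www.google.com/search?q=" + quote_plus(query)

# Example: check whether a company name appears in indexed chats
print(build_index_check_url(["Acme Corp"]))
```

Opening the resulting link shows only indexed share pages that mention your terms; no results means nothing matching was found in Google's index at that moment.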
4. Educate your team
Organize internal training on the proper use of AI tools. Every conversation may contain identifiable personal or corporate data.
5. Work with linguistic AI experts
Not all AI-generated content is the same. At BBLTranslation, we provide specialized support in designing, training, and linguistically auditing custom GPT models — ensuring confidentiality, consistency, and regulatory compliance.
Advanced solutions from BBL IA
With over 20 years of experience managing complex and confidential language content, BBLTranslation has been a pioneer in:
- Sworn translations with digital signatures
- Blockchain-certified documents
- Technical and legal terminology management with high confidentiality
Now, with BBL IA – Intelligent Linguistic Solutions, we apply these same values to AI:
What do we offer?
- Custom GPT model training with linguistic and legal control
- Prompt and dataset creation aligned with your brand voice
- Terminology audits for AI in regulated sectors
- Advice on cybersecurity and protection of generated content
- Risk prevention for legal or reputational misuse of AI
A new quality standard… for AI too
The takeaway is clear: privacy can no longer be treated as optional — it’s a basic requirement.
At BBLTranslation, we are committed to an ethical, rigorous, and conscious use of AI, just as we’ve always done with professional translation.
If you want to build trustworthy models, protect your data, or simply work more securely with AI, we are your trusted linguistic partner in the age of artificial intelligence.
Conclusion
The accidental indexing of ChatGPT chats by Google is just the latest wake-up call. But today, we have the tools, knowledge, and allies to work securely.
And at BBLTranslation, we turn your concern into smart linguistic protection.