Meta AI’s Public Feed Is Exposing Private Chats
Meta’s new AI chatbot app includes a public “discover” feed where users' private conversations are being shared, sometimes without their knowledge. From intimate relationship confessions to awkward or explicit prompts, people are broadcasting personal queries without realizing the feed is visible to others. This design choice highlights deeper privacy concerns in AI products and raises urgent questions about consent, transparency, and regulation.
The Incident
In June 2025, The Washington Post reported that users of Meta’s AI chatbot were unknowingly posting private, intimate, and often embarrassing conversations to a public feed inside the app called the “discover” tab.
From confessions about romantic struggles and sexual preferences to questions about religion and family, some of the most personal queries people directed at Meta AI were ending up online — visible to the world.
“It’s like reading someone’s Google search history mixed with their diary — all broadcast publicly.”
What Caused It?
While Meta insists chats are private by default, the app includes a "share" button that publishes conversations to the public feed — without making the implications clear to users.
Some tapped it thinking they were saving their conversation, not realizing they were broadcasting it. Others, drawn by the novelty or comedic appeal, posted purposefully. But many clearly didn't understand where their chats were going.
Even more troubling: some real names and personal audio clips have been attached to these posts.
A Flawed Design Choice
Meta’s decision to embed social media-style publishing into its AI app reflects its wider strategy: blend AI with social engagement to drive content and user stickiness.
But that choice backfired: users expect chatbots to be confidential, not performative. Unlike rivals such as ChatGPT and Claude, which have no public feed feature, Meta blurred the line between private interaction and public exposure.
Privacy Red Flags
This incident joins a growing list of AI-related privacy concerns:
OpenAI briefly enabled memory features in ChatGPT without clear user prompts, later rolling them back due to manipulative behavior.
Congress is considering federal legislation to preempt stricter state-level AI privacy laws.
Lack of regulation means most AI platforms set their own transparency and data usage rules — often unclear or misleading to users.
As Calli Schroeder of EPIC (Electronic Privacy Information Center) put it:
“People assume there’s some baseline level of confidentiality. There’s not.”
Why It Matters
AI chatbots aren’t just tools anymore — they’re companions, therapists, and sounding boards. People are turning to them for emotional support, advice, and personal growth.
But if AI systems, especially ones run by companies like Meta, leak those interactions into public view or use them to train future models, the risk is no longer theoretical.
What Needs to Change
Clearer defaults: “Private by default” should mean no accidental publishing is possible.
Transparent UI/UX: Sharing options must be explicit, obvious, and explained.
Data usage clarity: Companies need to be up front about what happens to your prompts and content.
Stronger regulation: The U.S. needs comprehensive AI privacy standards — now more than ever.
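The first two recommendations can be made concrete with a small sketch. The `Conversation` class and `share` method below are hypothetical, not Meta's actual implementation; they simply show what "private by default, explicit sharing" means at the code level: no code path publishes a conversation unless the user has acknowledged a warning stating exactly what will become public.

```python
from dataclasses import dataclass, field
from enum import Enum


class Visibility(Enum):
    PRIVATE = "private"  # default: visible only to the author
    PUBLIC = "public"    # visible on a discover-style feed


@dataclass
class Conversation:
    author: str
    messages: list = field(default_factory=list)
    # Private by default: nothing flips this without explicit consent.
    visibility: Visibility = Visibility.PRIVATE

    def share(self, confirmed_warning: bool = False) -> Visibility:
        """Publish only after the user acknowledges an explicit warning.

        The UI would set confirmed_warning=True only after showing the
        user exactly what will be published and to whom.
        """
        if not confirmed_warning:
            raise PermissionError(
                "Sharing requires explicit confirmation of the publicity warning."
            )
        self.visibility = Visibility.PUBLIC
        return self.visibility


chat = Conversation(author="alice")
assert chat.visibility is Visibility.PRIVATE  # private unless opted in

try:
    chat.share()  # a bare "share" tap fails safely instead of publishing
except PermissionError:
    pass

chat.share(confirmed_warning=True)  # only an informed opt-in publishes
assert chat.visibility is Visibility.PUBLIC
```

The design choice worth noting: accidental publishing becomes impossible not through UI copy alone but because the API itself refuses to publish without recorded consent.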
Sources
Washington Post: https://www.washingtonpost.com/technology/2025/06/13/meta-ai-privacy-users-chatbot/
Business Insider: https://www.businessinsider.com/mark-zuckerberg-meta-ai-chatbot-discover-feed-depressing-why-2025-6
Dwarkesh Podcast: https://www.dwarkesh.com/p/mark-zuckerberg-2
WaPo on AI Regulation: https://www.washingtonpost.com/politics/2025/06/03/ai-regulation-moratorium-state-lawmakers-letter/