A new study by researchers at IMDEA Networks Institute has raised serious concerns about how popular AI chatbots handle user data. According to the research, platforms including OpenAI’s ChatGPT, Anthropic’s Claude, xAI’s Grok, and Perplexity AI may rely on tracking technologies from companies such as Meta, Google, and TikTok, potentially exposing information about users’ conversations and online activity.
Over the past few years, generative AI tools have rapidly become part of everyday life. Millions of users now depend on these systems for personal advice, work-related tasks, health discussions, and private conversations, often assuming their chats remain confidential. However, researchers warn that the reality may be very different. While AI chatbots appear to function like private conversations, they are often built on web-based infrastructures that heavily depend on analytics, advertising technologies, and data collection systems similar to those used across the wider internet.
Major Privacy Concerns Highlighted
The study outlines three key privacy risks connected to modern AI chat platforms:
- Exposure of conversation links and metadata to third-party trackers
- Potential linking of chats to real-world user identities
- Privacy controls and policies that may not fully reflect actual data-sharing practices
Researchers found that some AI platforms may transmit conversation-related information — including chat titles, URLs, permalinks, and metadata — to external tracking services. These trackers can also receive cookies and identifiers commonly used for targeted advertising and profiling.
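To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of request a page-level analytics beacon typically sends. The endpoint and parameter names below are illustrative, not those of any specific tracker named in the study; the point is that the chat permalink, the chat title, and a cookie identifier can travel together in a single request.

```python
# Minimal sketch of a generic third-party analytics beacon embedded in a chat page.
# The endpoint and parameter names are hypothetical; real trackers use their own
# schemas, but the pattern is the same: the full page URL (here, a chat permalink)
# and a long-lived cookie identifier are reported in one hit.
from urllib.parse import urlencode

def build_beacon_url(page_url: str, page_title: str, cookie_id: str) -> str:
    params = {
        "dl": page_url,      # document location: the chat permalink
        "dt": page_title,    # document title: often the chat title
        "cid": cookie_id,    # identifier stored in a tracking cookie
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

print(build_beacon_url(
    "https://chat.example.com/share/abc123",   # hypothetical share link
    "Health question about medication",        # chat title surfacing as page title
    "1234567890.987654321",
))
```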
According to Narseo Vallina-Rodríguez, Research Associate Professor at IMDEA Networks Institute, weak or missing access controls make the problem worse. In some cases, simply possessing a conversation link may be enough to open the chat itself, meaning private discussions could become accessible to anyone who obtains the URL, including tracking services.
The report further claims that Grok and Perplexity share conversation permalinks with trackers such as Meta Pixel. The researchers also state that Grok may expose actual message text through Open Graph metadata collected by TikTok services.
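How Open Graph metadata can carry conversation content is easier to see with a small sketch. The share URL below is hypothetical and the snippet is not a reconstruction of the study's measurements; the mechanics, however, are standard: og:title and og:description are ordinary meta tags on a public page, readable by any person or service that fetches the link.

```python
# Sketch of how Open Graph metadata on a public share page can expose chat content.
# og:title and og:description are plain <meta> tags, so anything placed in them
# (for example, a chat title or message excerpt) is visible to any crawler
# that is handed the link.
from html.parser import HTMLParser
from urllib.request import urlopen

class OGParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            prop = attrs.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = attrs.get("content", "")

def fetch_og_tags(share_url: str) -> dict:
    html = urlopen(share_url).read().decode("utf-8", errors="replace")
    parser = OGParser()
    parser.feed(html)
    return parser.og

# Hypothetical public permalink; no authentication is needed to read its metadata.
tags = fetch_og_tags("https://grok.example.com/share/abc123")
print(tags.get("og:title"), tags.get("og:description"))
```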
AI Conversations Could Be Linked to User Identities
Another major concern raised in the research involves user identification and profiling. The study suggests that tracking mechanisms such as cookies, hashed email addresses, and server-side tracking systems could allow companies to connect AI activity with real individuals.
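The hashed-email technique in particular is worth unpacking. The snippet below is a simplified illustration of the normalize-and-hash step commonly used in advertising "advanced matching"; the exact recipe varies by platform, but the effect is the same: a stable identifier that can be matched across sites and services.

```python
# Sketch of how a hashed email address can act as a cross-site identifier.
# Normalizing and hashing the address is a common "advanced matching" pattern:
# the hash is stable, so the same user can be recognized wherever the same
# hashed value is sent, even without third-party cookies.
import hashlib

def hashed_email_identifier(email: str) -> str:
    normalized = email.strip().lower()  # trim whitespace, lowercase
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same account email yields the same hash on every site that computes it,
# letting a tracker join chatbot activity with activity elsewhere.
print(hashed_email_identifier("  Jane.Doe@example.com "))
print(hashed_email_identifier("jane.doe@example.com"))   # identical output
```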
Researchers believe these practices reflect the continuation of traditional data-driven advertising models within the rapidly growing AI industry. Since most tracking activities happen silently in the background, users often remain unaware that their interactions may be monitored or analyzed beyond the chatbot itself.
Aniketh Girish, a Post-Doctoral Researcher at IMDEA Networks, explained that users currently have very limited control over these practices. Even rejecting non-essential cookies may not fully prevent tracking in certain situations, leaving privacy protections weaker than many people expect.
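Server-side tracking is the clearest example of why a cookie-banner choice may not be enough. The following sketch uses a hypothetical endpoint and field names and is not tied to any specific platform in the study; it simply shows the general pattern: the event is sent from the platform's own infrastructure rather than the user's browser, so no browser setting can block it.

```python
# Sketch of server-side event forwarding, the pattern often marketed as a
# "conversions" or server-side API. Endpoint and field names are hypothetical.
# Because the platform's server sends the event, the user's cookie-consent
# choice in the browser never enters the picture.
import json
from urllib.request import Request, urlopen

def forward_event_server_side(user_hash: str, event_name: str, page_url: str) -> None:
    payload = {
        "event_name": event_name,        # e.g. "StartConversation"
        "event_source_url": page_url,    # the chat page the user was on
        "user_data": {"em": user_hash},  # hashed email, as in the earlier sketch
    }
    req = Request(
        "https://ads.example.com/v1/events",  # hypothetical ad-platform endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urlopen(req)  # fired server-to-server, invisible to the user's browser
```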
Questions Over Transparency and Privacy Policies
The study also criticizes the transparency of privacy controls offered by AI platforms. While some privacy policies mention advertising technologies and partnerships with third parties, researchers argue they often fail to clearly disclose whether actual user conversations are included in shared data.
Legal experts involved in the study pointed to possible concerns under the General Data Protection Regulation (GDPR). These concerns include the absence of a clearly defined legal basis for data sharing and insufficient disclosure to users regarding how their information may be processed.
Lawyer and data protection officer Jorge García Herrero emphasized that warnings about sensitive information potentially reaching advertising networks deserve the same level of visibility as the common AI disclaimer stating that chatbot responses may contain mistakes.
FAQs
What did the IMDEA Networks study reveal?
The study found that AI platforms like ChatGPT, Claude, Grok, and Perplexity may use third-party trackers from companies such as Meta, Google, and TikTok, potentially exposing user conversation data and activity.
Why are researchers concerned about AI chatbot privacy?
Researchers worry that sensitive user information, including chat links, metadata, and identifiers, could be shared with external tracking and advertising services without users fully realizing it.
Which AI platforms were mentioned in the study?
The research specifically discussed ChatGPT, Claude, Grok, and Perplexity AI.
Can AI conversations be linked to real identities?
According to the study, tracking technologies such as cookies, hashed email addresses, and server-side tracking methods may allow platforms to connect AI activity with real users.
Are AI chat links publicly accessible?
Researchers warned that some platforms may have weak access controls, meaning anyone with a conversation link could potentially access chat content.
Does rejecting cookies fully protect user privacy?
The study suggests that declining non-essential cookies may help in some cases, but it is not always enough to stop all forms of tracking.
What legal concerns were raised?
Experts pointed to potential GDPR-related issues, including unclear legal grounds for data sharing and insufficient transparency about how user conversations are processed.
What changes do researchers recommend?
Researchers are calling for stronger privacy protections, better transparency, improved access controls, and stricter regulatory oversight for generative AI platforms.
Conclusion
The findings from the IMDEA Networks Institute study highlight growing privacy concerns surrounding generative AI platforms. As AI chatbots become increasingly integrated into daily life, questions about transparency, data protection, and third-party tracking are becoming more important than ever. Researchers believe stronger safeguards, clearer privacy disclosures, and improved access controls are urgently needed to ensure users fully understand how their conversations may be collected, shared, or exposed. The study also signals that regulators may soon place greater scrutiny on how AI companies manage user data in the evolving digital landscape.
