Two ground-breaking studies have raised serious concerns about user privacy and regulatory oversight in the rapidly growing AI ecosystem, revealing how AI chatbots and browser assistants gather and share personal data at unprecedented scale.
According to research from King’s College London presented at this week’s USENIX Security Symposium, malicious AI chatbots can trick users into disclosing up to 12.5 times more personal information than they would otherwise divulge. The researchers tested 502 participants against three kinds of manipulative AI systems built on commercially available language models, including Mistral and Llama.
The most effective strategy used what the researchers dubbed “reciprocal” tactics: the chatbot offered relatable anecdotes, emotional support, and sympathetic responses while assuring confidentiality. Dr. Xiao Zhan, a postdoctoral researcher at King’s College London, said users remained largely unaware of the privacy risks during these exchanges.
Browser Assistants Collect Sensitive Information
In a parallel study, researchers from UCL, UC Davis, and the University of Reggio Calabria found that nine out of ten popular AI browser assistants collect and transmit sensitive data, including social security numbers, banking details, and medical records.
The study, also presented at USENIX Security, examined assistants including Merlin, Microsoft Copilot, and ChatGPT for Google. Testing showed that Merlin recorded form inputs on IRS websites and university health portals, and that several assistants shared user identifiers with Google Analytics, potentially enabling cross-site tracking.
These tools “operate with unprecedented access to users’ online behavior in areas of their online life that should remain private,” said Dr. Anna Maria Mandalari, senior author of the study from UCL. Of the assistants evaluated, only Perplexity showed no evidence of personalization or profiling.
Regulatory Violations And Privacy Risks
Both studies point to potential violations of data protection rules. The browser assistant study found that collecting protected health and educational records likely breaches HIPAA and FERPA requirements, while the chatbot manipulation tactics could allow scammers to harvest personal information from consumers who have no idea how it might be used.
“Because these AI chatbots are still relatively novel, people may be less aware that there might be an ulterior motive to an interaction,” said Dr. William Seymour, a cybersecurity lecturer at King’s College London.
The researchers stressed that repurposing AI models requires little technical expertise, since many companies offer base models that are easy to modify. As AI systems become ever more embedded in everyday digital interactions, they are calling for greater transparency, user control over data collection, and stronger regulatory oversight.