🚫 Trackers: Some Techniques to Disappear from the Radar
With every click, invisible trackers collect data about you, from cookies to browser fingerprints, keeping your browsing under constant surveillance.
I am a security and privacy professional, and I live with the feeling of being watched online. I know how the spy scripts embedded in websites collect my device’s preferences and characteristics to build a profile about me, which is why hyper-personalized ads chase me across the web. Who else out there feels the same?
People in the field know this scenario well, but it still strikes me how effective these trackers remain even against blockers and private browsing. It is not enough to just use incognito mode or a VPN; modern tracking operates on a deeper level. This is what motivated me to look for techniques beyond the obvious, in search of real anonymity.
Technique 1: Fingerprint Manipulation
It consists of deliberately altering the unique characteristics that the browser discloses to websites to prevent the creation of a consistent digital fingerprint of the device. Well-known tools like anti-detection browsers allow modifying environment attributes (user agent, screen resolution, time zone, installed fonts, etc.) and simulating various system profiles. In this way, instead of always presenting the same digital “footprint,” the user can forge information and confuse the tracking algorithms that attempt to identify them uniquely. This active manipulation demands technical knowledge but significantly increases the difficulty of correlating sessions based on the fingerprint.
Tool Example: https://addons.mozilla.org/pt-BR/firefox/addon/chameleon-ext/ (the listing page is in Portuguese, but the extension itself, Chameleon, is the example)
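To make the idea concrete, here is a minimal userscript-style sketch of forging a few of the attributes mentioned above. This is not how Chameleon is actually implemented; the spoofed values are arbitrary examples, and a real tool has to cover many more attributes consistently.

```typescript
// Minimal sketch: override a few fingerprintable attributes from a
// userscript or extension content script. Values are arbitrary examples.
const fakeProfile = {
  userAgent:
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0",
  platform: "Win32",
  hardwareConcurrency: 4,
  language: "en-US",
};

// navigator exposes these as getters, so they are shadowed with new getters.
Object.defineProperty(navigator, "userAgent", { get: () => fakeProfile.userAgent });
Object.defineProperty(navigator, "platform", { get: () => fakeProfile.platform });
Object.defineProperty(navigator, "hardwareConcurrency", {
  get: () => fakeProfile.hardwareConcurrency,
});
Object.defineProperty(navigator, "language", { get: () => fakeProfile.language });

// Screen size is another classic fingerprint input.
Object.defineProperty(screen, "width", { get: () => 1920 });
Object.defineProperty(screen, "height", { get: () => 1080 });
```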
Technique 2: Browser Characteristics Randomization
Instead of maintaining a fixed or static fingerprint, this strategy introduces controlled randomness into the data exposed by the browser with each session. Specialized extensions can introduce minor noise into elements like canvas, WebGL, audio, and other measurement APIs, reporting false or slightly altered values each time. For example, it’s possible to generate a different canvas hash with every page reload, so the user’s profile is never exactly the same. This periodic randomization makes each visit appear to come from a different device, frustrating attempts to aggregate browsing history based on technical attributes.
Tool Example: https://github.com/kkapsner/CanvasBlocker
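The core trick behind canvas randomizers such as CanvasBlocker can be sketched in a few lines: wrap the canvas read-out functions and mix tiny per-session noise into the pixels before they leave the page. This is an illustrative simplification, not CanvasBlocker’s actual code; a real implementation also covers toDataURL, toBlob, WebGL, and audio.

```typescript
// Sketch: add invisible per-session noise to canvas read-outs so the
// resulting fingerprint hash changes between sessions. Illustrative only.
const noiseSeed = Math.floor(Math.random() * 997);

const originalGetImageData = CanvasRenderingContext2D.prototype.getImageData;
CanvasRenderingContext2D.prototype.getImageData = function (
  this: CanvasRenderingContext2D,
  x: number,
  y: number,
  w: number,
  h: number
): ImageData {
  const image = originalGetImageData.call(this, x, y, w, h);
  // Flip the lowest bit of a sparse, seed-dependent subset of channel
  // values: invisible to the eye, but enough to change the hash.
  for (let i = noiseSeed; i < image.data.length; i += 997) {
    image.data[i] ^= 1;
  }
  return image;
};
```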
Technique 3: Uniform Fingerprint Standardization
Instead of randomizing, another approach is to standardize browser data so that it is indistinguishable from that of other users. The idea here is to adopt settings and profiles as generic as possible, mirroring a “common standard” within a crowd. Browsers like the Tor Browser follow this principle, limiting the variety of fonts, window sizes, and other indicators, so that all Tor users look almost identical to the trackers.
No mystery here: the reference implementation of this approach is the Tor Browser itself.
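A tiny sketch of the “blend into the crowd” idea: instead of reporting exact window dimensions, round them down to coarse buckets so that many different users report identical values. Tor Browser’s letterboxing works on a similar principle, though it resizes the actual viewport rather than merely lying about it; the 200x100 step below is only an illustrative choice.

```typescript
// Sketch: report window dimensions rounded down to coarse buckets so the
// values match those of many other users. Step sizes are illustrative.
function bucketed(size: number, step: number): number {
  return Math.max(step, Math.floor(size / step) * step);
}

Object.defineProperty(window, "innerWidth", {
  get: () => bucketed(document.documentElement.clientWidth, 200),
});
Object.defineProperty(window, "innerHeight", {
  get: () => bucketed(document.documentElement.clientHeight, 100),
});
// A real implementation must also normalize screen.*, outerWidth/Height,
// and every other size-reporting API, or the lie is trivially detectable.
```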
Technique 4: Generation of Artificial Traffic and Noise
Here the goal is to obscure your behavioral profile by flooding it with false data. Extensions or scripts that automate simulated activities are used: for example, periodically sending random search queries to engines like Google or Bing, in order to hide your real interests among a sea of fictional searches.
Another tactic is to have ads clicked automatically in the background (without the user ever seeing them), just to pollute the profiles generated by advertising networks. The AdNauseam plugin follows exactly this approach by “clicking” on every blocked ad, generating misleading metrics for the trackers. By producing this bait traffic (also called cover traffic), the user introduces statistical noise into the surveillance system: the collected data ends up containing so many false positives that it becomes difficult for the platforms to discern which actions correspond to the user’s real behavior.
This one is top-notch, see it at https://github.com/dhowe/AdNauseam
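As a toy illustration of cover traffic (this is not how AdNauseam works; it clicks blocked ads from inside the browser), a small Node.js script can fire decoy searches at random intervals. The query list, the search engine, and the timing are all arbitrary choices for the sketch.

```typescript
// Toy sketch of cover traffic: send random decoy searches at random
// intervals so real interests drown in noise. Requires Node.js 18+ (fetch).
const decoyQueries = [
  "best hiking boots",
  "sourdough starter recipe",
  "how tall is mount kilimanjaro",
  "vintage synthesizer repair",
  "cheap flights to anywhere",
];

const randomItem = <T>(items: T[]): T =>
  items[Math.floor(Math.random() * items.length)];

async function sendDecoy(): Promise<void> {
  const query = randomItem(decoyQueries);
  const url = `https://www.bing.com/search?q=${encodeURIComponent(query)}`;
  // The response body does not matter; only the fact that the query happened.
  await fetch(url).catch(() => undefined);
  console.log(`decoy query sent: ${query}`);

  // Wait a random 5-20 minutes, so the noise itself has no fixed rhythm.
  const delayMs = (5 + Math.random() * 15) * 60 * 1000;
  setTimeout(sendDecoy, delayMs);
}

void sendDecoy();
```

Note that noise fired from a standalone script does not carry your browser’s cookies or session, so it is far weaker than tools that inject the noise into your real browsing session; it only illustrates the principle.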
Technique 5: Isolation via Containers and Sandboxes
This involves compartmentalizing online activities as much as possible, so that each browsing category occurs in an isolated, sealed environment, separate from the others. Extensions like Firefox Multi-Account Containers allow opening websites in separate “silos” within the same browser, ensuring that cookies, logins, and storage from one container do not leak into another. At a more advanced level, security-focused operating systems like Qubes OS implement total isolation via virtual machines for each task, an approach known as security through compartmentalization. In this model, you can have, for example, one container (or VM) for email, another for social media, and another for banking, all running separately. Even if a malicious site manages to extract identifiers in the social media container, it will not have access to the banking container, breaking the correlation chain that trackers use to build a unified profile.
There is one for Firefox at https://github.com/mozilla/multi-account-containers
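For the curious, the WebExtensions contextualIdentities API (the same one Multi-Account Containers builds on) lets an extension create containers and open tabs bound to them. Below is a minimal background-script sketch, assuming a Firefox extension with the "contextualIdentities" and "cookies" permissions and WebExtension typings available; the site URLs are placeholders.

```typescript
// Minimal Firefox WebExtension sketch: open each context in its own
// container so cookies and storage never cross between them.
// Assumes the "contextualIdentities" and "cookies" permissions.
async function openInContainer(name: string, url: string): Promise<void> {
  // Reuse the container if it already exists, otherwise create it.
  const existing = await browser.contextualIdentities.query({ name });
  const container =
    existing[0] ??
    (await browser.contextualIdentities.create({
      name,
      color: "blue",
      icon: "briefcase",
    }));

  // Tabs opened with this cookieStoreId get their own isolated cookie jar.
  await browser.tabs.create({ url, cookieStoreId: container.cookieStoreId });
}

// Placeholder sites: each activity lives in its own silo.
void openInContainer("Banking", "https://bank.example");
void openInContainer("Social", "https://social.example");
```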
Technique 6: Digital Identity Fragmentation
Here the person deliberately divides their online persona into multiple distinct identities, so that different services cannot easily link them. This involves using separate accounts, distinct profiles, and even different devices or browsers for different contexts (work, personal life, activism, etc.). Each fragmented identity operates with its own set of credentials, and ideally with separate email addresses and phone numbers as well. For example, a user might maintain a “clean” social media profile, without any true personal information, while using a different device and account for more critical communications. If implemented correctly, this strategy ensures that even attempted correlation by IP or device fails, since each persona uses different proxies or networks when necessary. In short, identity fragmentation prevents a single super-profile from being built by aggregating all your habits; instead, each fragment provides only a partial glimpse, disconnected from the others.
This would require using various tools, but check out Simple Login at https://github.com/simple-login/app
Technique 7: Ephemeral Sessions and Disposable Environments
This involves never reusing the same browsing environment for a long time, adopting temporary sessions that self-destruct. Incognito/private browsing modes already offer a taste of this by not preserving cookies or history, but more advanced users go further: they use virtual machines or live systems (like Tails OS) that always start clean and do not save state between reboots. In the case of the aforementioned Qubes OS, there are “Disposable VMs” – single-use virtual machines that are completely erased after being closed. With ephemeral sessions, any trace of persistent identification (cookies, local identifiers, cache) is eliminated before it can be exploited in future visits. This means that every time you appear online, you will look like a new user, making long-term tracking unfeasible. The disadvantage is the loss of convenience (logins or preferences are not maintained), but in terms of privacy, it is an extremely effective hard line against persistent links.
Not familiar with Tails yet? The project is worth a look.
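Tails and Qubes disposable VMs are the thorough way to do this. As a lighter-weight sketch of the same idea, a script can start the browser with a throwaway profile directory and wipe it the moment the browser closes; the Firefox flags below are standard, but adjust the binary path for your system.

```typescript
// Lightweight sketch of an ephemeral session: start Firefox with a
// throwaway profile and delete it as soon as the browser exits.
// Adjust the binary name/path for your system.
import { spawn } from "node:child_process";
import { mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const profileDir = mkdtempSync(join(tmpdir(), "ephemeral-profile-"));

const firefox = spawn("firefox", ["-profile", profileDir, "-no-remote"], {
  stdio: "inherit",
});

firefox.on("exit", () => {
  // Cookies, cache, and local storage from the session die with the profile.
  rmSync(profileDir, { recursive: true, force: true });
  console.log("Ephemeral profile wiped:", profileDir);
});
```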
Technique 8: Subversion of Tracking APIs and Sensors
Modern trackers exploit not only cookies and traditional fingerprinting, but also less obvious APIs that reveal device details. Examples include the Battery Status API, motion/orientation sensors, and WebRTC (which can expose the local IP), among others. A sophisticated countermeasure is to disable or falsify these sources of information. Browser developers have already recognized the risks, for example by removing the Battery Status API after realizing it exposed hardware characteristics precise enough to link a user’s sessions, even across different browsers. Advanced users can manually disable unnecessary APIs via flags or extensions (such as turning off WebRTC to avoid IP leakage, or using extensions that block Canvas/AudioContext). Alternatively, the responses from these APIs can be tampered with: for example, always reporting 100% battery, or a fixed fictitious GPS location. By cutting off these high-granularity data sources, trackers are denied subtle device identifiers and forced back to less precise methods.
This is the most famous one here, uBlock! https://github.com/gorhill/uBlock
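A couple of the falsifications mentioned above can be sketched as userscript-style overrides. This is only a toy illustration of the idea, not how uBlock Origin or any specific extension implements it.

```typescript
// Sketch: neuter or falsify high-granularity APIs. Illustrative only.

// Battery Status API: always claim a full, charging battery.
// (Firefox already removed this API from web content; the override matters
// on engines that still expose it.)
(navigator as any).getBattery = async () => ({
  charging: true,
  level: 1.0,
  chargingTime: 0,
  dischargingTime: Infinity,
  addEventListener: () => undefined,
  removeEventListener: () => undefined,
});

// Geolocation: always report the same fictitious coordinates.
(navigator.geolocation as any).getCurrentPosition = (success: any) =>
  success({
    coords: { latitude: 0, longitude: 0, accuracy: 1000 },
    timestamp: Date.now(),
  });

// WebRTC: remove the constructors a script would use to leak the local IP.
(window as any).RTCPeerConnection = undefined;
(window as any).webkitRTCPeerConnection = undefined;
```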
Technique 9: IP Rotation and Anonymous Routing
Even with all the protections in the browser, the source IP address can still betray identity or allow session correlation. Advanced techniques involve frequently shuffling the connection route, using anonymity networks and proxies dynamically. One implementation is the rotating use of Tor (or VPNs), for example starting each new session with a different Tor circuit, or switching between different VPN/proxy servers for each set of accessed sites. It is also possible to chain layers (VPN over Tor, or vice versa) to add redundancy in hiding the real IP. In Qubes OS, for example, users combine a VPN gateway and Tor (Whonix) in cascade, creating a layered tunnel (sys-net → sys-vpn → sys-whonix) that completely hides the origin of the traffic.
If you like networks and want something more “root,” check out Whonix.
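A concrete example of rotation: Tor exposes a control port through which a client can request a fresh circuit with SIGNAL NEWNYM. The Node.js sketch below assumes a local Tor daemon configured with ControlPort 9051 and password authentication; the password is a placeholder.

```typescript
// Sketch: ask a local Tor daemon for a fresh circuit via its control port.
// Assumes torrc contains "ControlPort 9051" and a HashedControlPassword;
// the password below is a placeholder.
import { createConnection } from "node:net";

function requestNewTorCircuit(controlPassword: string): void {
  const socket = createConnection({ host: "127.0.0.1", port: 9051 }, () => {
    // The Tor control protocol is plain text and line-oriented.
    socket.write(`AUTHENTICATE "${controlPassword}"\r\n`);
    socket.write("SIGNAL NEWNYM\r\n");
    socket.write("QUIT\r\n");
  });

  socket.on("data", (chunk) => {
    // "250 OK" replies mean the commands were accepted.
    process.stdout.write(chunk.toString());
  });
}

requestNewTorCircuit("changeme-placeholder");
```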
Technique 10: Tracker Sinkhole via DNS and Local Proxy
Finally, an architectural measure at the network layer is to intercept requests for tracking domains before they even leave your network. Solutions like Pi-hole (or NextDNS configured with custom lists) act as the local DNS server, responding with a null address (0.0.0.0) whenever a known ad/tracking domain is requested (e.g., doubleclick.net). In practice, this short-circuits the connection: instead of the spy script reaching the advertiser’s servers, it is redirected to a dead end inside your local network. Another similar approach is to use local filtering proxies (like Privoxy) to inspect HTTP requests; the proxy lets legitimate content pass but intercepts and discards calls to telemetry endpoints. The advantage of these approaches is that they work transparently for all applications and devices on the network, shielding not only the browser but also mobile apps, smart TVs, and any other gadget against silent data collection. In short, a perimeter protection layer is created that prevents unwanted packets from leaving (or returning), mitigating tracking upstream, directly at the name-resolution and traffic infrastructure.
This one literally calls itself the black hole for advertisers, check it out at https://github.com/pi-hole/pi-hole
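Pi-hole answers DNS queries for listed domains with 0.0.0.0 at the network level; the same sinkhole effect for a single machine can be sketched by generating hosts-file style entries from a blocklist. The domains below are placeholders, and real lists (the kind Pi-hole consumes) contain hundreds of thousands of entries.

```typescript
// Sketch: generate hosts-file style sinkhole entries that send tracking
// domains to 0.0.0.0 (same effect Pi-hole achieves at the DNS layer).
// The domain list is a tiny placeholder; real blocklists are huge.
import { appendFileSync } from "node:fs";

const trackerDomains = [
  "tracker.example.com",
  "telemetry.example.net",
  "ads.example.org",
];

const entries = trackerDomains
  .map((domain) => `0.0.0.0 ${domain}`)
  .join("\n");

// On Linux/macOS the system hosts file is /etc/hosts (requires root);
// here we just write to a local file for inspection.
appendFileSync("sinkhole-hosts.txt", entries + "\n");
console.log("Generated sinkhole entries:\n" + entries);
```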
I hope you liked it!
I talk a bit more about this in these two other articles: