Over at the MIT Technology Review, Karen Hao has a seemingly promising article with the enticing title "How to poison the data that Big Tech uses to surveil you." I read it with great anticipation, hoping it would provide some actionable strategies for striking back at adtech.
Sadly, that didn’t happen. That’s not really Hao’s fault, though. Her article is a report on this paper by Vincent, Li, Tilly, Chancellor, and Hecht. The paper is more a theoretical discussion of the issues, and although it does describe some strategies, those discussions are general and not really actionable. If you want to read the paper and aren’t interested in extending its research, you can skip Section 2 on related work. It adds nothing interesting and is full of academic-speak that makes it tiresome to read.
Despite its practical shortcomings, the paper does provide a useful framework for thinking about the problem of data abuse. The authors’ first useful concept is “data labor”: the idea that when users provide data to adtech in the course of their normal Internet activities, that data should be thought of not as “exhaust” but as the product of the users’ labor, and that the users are therefore entitled to some sort of remuneration.
The problem is that users are essentially powerless with respect to the large data aggregators. The paper offers three “data levers”:
- Data strike: refusing to provide data to the offending entity.
- Data poisoning: providing false or misleading data to corrupt the entity’s datasets and interfere with its machine learning.
- Conscious data contribution: providing your data to a competitor. For example, switching from Google to DuckDuckGo.
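To make the data-poisoning lever concrete, here is a minimal sketch of the idea. It is my own toy illustration, not anything from the paper: a nearest-centroid classifier standing in for a profiling model, one-dimensional “interest scores” standing in for user data, and an invented 40% poisoning rate. When some users deliberately report the wrong behavior, the model’s learned class centroids drift and its predictions degrade.

```python
import random

random.seed(0)

def centroid_model(points, labels):
    """Fit a nearest-centroid classifier: one mean per class."""
    by_class = {}
    for x, y in zip(points, labels):
        by_class.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Toy 1-D "interest scores": class 0 clusters near 0, class 1 near 5.
xs = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
ys = [0] * 200 + [1] * 200

clean = centroid_model(xs, ys)

# Data poisoning: 40% of class-0 users deliberately report class-1 behavior.
poisoned_ys = [1 if y == 0 and random.random() < 0.4 else y for y in ys]
poisoned = centroid_model(xs, poisoned_ys)

def accuracy(centroids):
    """Score against the users' true classes."""
    return sum(predict(centroids, x) == y for x, y in zip(xs, ys)) / len(xs)

print(f"class-1 centroid, clean:    {clean[1]:.2f}")
print(f"class-1 centroid, poisoned: {poisoned[1]:.2f}")  # dragged toward class 0
print(f"accuracy, clean:    {accuracy(clean):.2f}")
print(f"accuracy, poisoned: {accuracy(poisoned):.2f}")
```

Real-world poisoning tools work against far more robust models than this, which is part of why the paper’s discussion stays general, but the mechanism is the same: enough coordinated false inputs shift what the model learns.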
I was really hoping for some specific strategies, but even without them the authors’ framework gives us a useful way of thinking about the problem.