Wasn't it like 15 years ago, with the anonymized AOL search data release, that people found multiple ways to connect the dots and whittle things down to just one person or a small subset of people?
Wasn't this also shown with anonymized taxi-cab data (released in NY?) many moons ago?
Knowing that this data is being tracked, wouldn't it be possible to funnel people into doing searches in a way that would reveal things about them?
Directions to an out-of-state reproductive health clinic, combined with card data, would be all it takes to do serious harm to people in some states.
Defaults matter. A lot.
Anonymized data is not always anonymous, collected server side or otherwise.
Yes, this is a process called (fittingly) data re-identification.
There are many papers on the topic. One of the better-known examples is "Robust De-anonymization of Large Sparse Datasets" (Narayanan & Shmatikov), which used the Netflix Prize dataset.
>We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world’s largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber’s record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
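To give a feel for the basic linkage idea, here's a toy sketch in Python. The data is entirely invented and the scoring is a crude overlap count, not the paper's actual weighted statistical algorithm, but the shape of the attack is the same: match a few publicly known ratings against the "anonymous" records and take the best-scoring candidate.

```python
# Toy sketch of a linkage-style re-identification attack. All data is
# invented; the real Narayanan-Shmatikov method uses weighted scoring
# over sparse records with approximate dates and ratings.

# "Anonymized" release: user identities replaced with opaque tokens.
anonymized_ratings = {
    "user_4821": {"Movie A": 5, "Movie B": 1, "Movie C": 4, "Movie D": 2},
    "user_9377": {"Movie A": 3, "Movie E": 5, "Movie F": 4},
}

# Auxiliary knowledge: a few ratings the adversary scraped from public
# reviews posted under a real name (e.g. on IMDb).
public_profile = ("Jane Doe", {"Movie B": 1, "Movie C": 4, "Movie D": 2})

def match_score(public_ratings, anon_ratings):
    """Count movies rated identically in both records."""
    return sum(
        1
        for movie, rating in public_ratings.items()
        if anon_ratings.get(movie) == rating
    )

name, known_ratings = public_profile
best_token, best_score = max(
    ((token, match_score(known_ratings, ratings))
     for token, ratings in anonymized_ratings.items()),
    key=lambda pair: pair[1],
)

if best_score >= 3:  # arbitrary confidence threshold for this toy example
    print(f"{name} is probably {best_token}")
    # ...and every other rating under that token, including ones they
    # never made public, is now attributed to them.
```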
However, it should be noted that the AOL dataset had a bunch of stuff that was identifiable by its nature (e.g. people searching for their own full name or address), and the dataset wasn't scrubbed of those searches.
So the controversy wasn't just re-identification of data, but also just a bunch of already-identifiable data.
>Anonymized data is not always anonymous
More importantly, in my opinion, is that data that is anonymous now is just one other dataset away from not being anonymous anymore.
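One way to make that concrete: count how many records in a dataset are already unique on a few "harmless" fields (quasi-identifiers). A minimal sketch with invented records follows; the classic Sweeney result is that 5-digit ZIP + birth date + sex alone uniquely identified roughly 87% of the US population.

```python
from collections import Counter

# Made-up "anonymized" records: names stripped, quasi-identifiers kept.
records = [
    {"zip": "02138", "birth_year": 1985, "sex": "F"},
    {"zip": "02138", "birth_year": 1985, "sex": "M"},
    {"zip": "02139", "birth_year": 1990, "sex": "F"},
    {"zip": "02139", "birth_year": 1990, "sex": "F"},
]

# Group records by their quasi-identifier combination.
combos = Counter((r["zip"], r["birth_year"], r["sex"]) for r in records)

# Any record whose combination is unique (k=1) is exposed the moment some
# other dataset -- a voter roll, a breach dump -- pairs those same fields
# with a name.
unique = [combo for combo, k in combos.items() if k == 1]
print(f"{len(unique)} of {len(combos)} combinations identify exactly one person")
```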
> Anonymized data is not always anonymous, collected server side or otherwise
If anything, I think it's both safer and more accurate to start from the assumption that "anonymized" data can be de-anonymized, and to require evidence to refute that, rather than assuming anonymization works and then trying to find a way to attack it. In practice, there's just not a good track record of this being done effectively, and I think people should generally be skeptical of whether it's even possible in many cases.
There is only one way that data can really be "anonymized": if the individual data points are aggregated and the original collected data is deleted. Short of that, anonymization is basically illusory.
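Something like the following, as a minimal sketch (the threshold and data are invented, and a real system would likely also add noise, e.g. differential privacy):

```python
from collections import Counter

# Raw, per-person events: what the server initially sees.
raw_events = [
    ("alice", "clinic directions"),
    ("bob", "weather"),
    ("carol", "weather"),
    ("dave", "weather"),
]

MIN_GROUP_SIZE = 3  # suppress counts small enough to point at individuals

# Aggregate to counts per query, dropping who asked.
counts = Counter(query for _user, query in raw_events)

# Publish only counts meeting the threshold; small groups are suppressed.
published = {q: n for q, n in counts.items() if n >= MIN_GROUP_SIZE}

# Crucially, the raw rows must actually be deleted, not just set aside.
del raw_events

print(published)  # {'weather': 3}
```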
The trouble is that we'd still have to take the word of the entity doing the data collection that they've done this properly, and it's clear that we can't take anyone's word for that.
Anonymization is effectively not achievable. Limited anonymity may be possible within the scope of a particular dataset, but all it takes is one enrichment pipeline to strip that all away. If you don't think that's what places like insurers do on a regular basis, you're a fool.