In this article, David Rabb discusses the privacy implications of the monetization of data. Rabb focuses specifically on Personally Identifiable Information (PII) that companies can obtain about people through cookies, IP addresses, GPS, and so forth. Companies have often touted the anonymity of cookies but, as Rabb points out, there are many ways to tie cookies to known individuals, a process that often includes “consent” consumers don’t know they’ve granted. Other theoretically anonymous identifiers such as device IDs and IP addresses can also often be connected to PII. And research has shown that even less specific information, such as a collection of taxi trips or a combination of birth date and ZIP code, is often enough to identify specific individuals.
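The re-identification risk from combined quasi-identifiers can be shown with a toy sketch. The records below are hypothetical, not from Rabb’s article; the point is simply that grouping by birth date and ZIP code often leaves combinations held by exactly one person:

```python
from collections import Counter

# Hypothetical "anonymized" records: no names, just quasi-identifiers.
records = [
    {"birth_date": "1984-03-12", "zip": "90210"},
    {"birth_date": "1984-03-12", "zip": "10001"},
    {"birth_date": "1991-07-04", "zip": "90210"},
    {"birth_date": "1991-07-04", "zip": "90210"},
]

# Count how many records share each (birth date, ZIP) combination.
combos = Counter((r["birth_date"], r["zip"]) for r in records)

# A combination held by exactly one record uniquely identifies that person.
unique = sum(1 for count in combos.values() if count == 1)
print(f"{unique} of {len(records)} records are uniquely identifiable")
# → 2 of 4 records are uniquely identifiable
```

Here the first two records each carry a one-of-a-kind combination, so anyone who knows a person’s birth date and ZIP code can pick their record out of the “anonymous” set.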
I don’t think that most internet users are naive enough to believe companies don’t have their PII, but, as Rabb points out, customers may broadly assume a company knows everything about them and still be surprised by the data presented in specific situations – especially if that data is wrong.
Information managers face the increasingly complex task of maintaining the security of PII, ensuring this information is accurate, using only the personal information needed for a specific task, and ensuring that the privacy rights of customers are respected.
A study was conducted at Stanford University to examine the privacy impact of the National Security Agency’s nationwide collection of bulk telephone metadata. The study found that telephone metadata is densely interconnected, can trivially be reidentified, enables automated location and relationship inferences, and can be used to determine highly sensitive traits.
The authors conclude: “More broadly, this project emphasizes the need for scientifically rigorous surveillance regulation. Much of the law and policy that we explored in this research was informed by assumption and conventional wisdom, not quantitative analysis. To strike an appropriate balance between national security and civil liberties, future policymaking must be informed by input from the relevant sciences.”
This article provides useful and sobering information about how the digital assistants Siri, Cortana, Amazon Alexa, Facebook M, and Google Now use your data. The article highlights the privacy and security features of these digital assistants. For example, Apple notes that by using Siri you agree to allow Apple and its subsidiaries and agents to transmit, collect, maintain, process, and use your voice input and user data. Amazon Alexa saves your voice recordings, but you can erase them via your personal settings. As we move increasingly in the direction of voice-activated applications such as search and voice-to-text, we need to consider carefully the new personal metadata footprints and trails that we generate.
A report on fitness tracker activity has just been published by OpenEffect, a Canadian not-for-profit applied research organization focusing on digital privacy and security, and The Citizen Lab at the Munk School of Global Affairs, University of Toronto. The scope of this report is as follows:
Every Step You Fake explores what information is collected by the companies which develop and sell some of the most popular wearables in North America. Moreover, it explores whether there are differences between the information that is collected by the devices and what companies say they collect, and what they subsequently provide to consumers when compelled to disclose all the personal information that companies hold about residents of Canada.
The report does not yet contain conclusions or specific recommendations, so it is obviously preliminary. Some points raised, however, include:
- Seven of the eight wearables tested broadcast unique Bluetooth identifiers that allowed them to be tracked by nearby Bluetooth beacons. Beacons are increasingly used in stores and malls to profile shoppers and push tailored offers.
- While the devices themselves can reveal the wearer’s location, the accompanying apps expose even more personal information; for example, they failed to protect against interception and tampering when transmitting data between the smartphone, the wearable, and the wearable company’s own servers.
I have worn a fitness tracker for some years now, and I tend not to keep Bluetooth active on my smartphone when I am away from home; I sync my wearable device when I am at home. I’m not sure how much protection this affords me. The default setting on my Bluetooth is to not make the device visible to anyone other than me, but I’m not sure if this is sufficient. I minimize the information I load to my tracker; I don’t include what I’ve eaten, or track my sleep, so at least I do control how much of my personal information is tracked. Still, this report does raise a few red flags, preliminary as it may be.
The title of this article is telling: Microsoft’s Cortana to spy on email to keep you on track. The article discusses Cortana’s “helpful” features that can scan your email, recognize language indicating a commitment, and use this information to create reminders. If, for example, you send a message to your boss stating, “I will send you the project by 4:00 p.m.,” Cortana will set an alert so you don’t forget. Now, I’m all about keeping myself organized, but isn’t this what keeping calendars is all about? When I have an event or task, I schedule it in my calendar, and a reminder is sent to me. Do I really need or, more importantly, want Cortana to scan my emails to send me reminders? No mention is made in the article about where this information is stored. Is Microsoft tracking any of this data? I don’t think that I’m a particularly paranoid person, but this feature does raise a few alarm bells with regard to privacy.
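The article doesn’t describe how Cortana actually parses commitments, and Microsoft presumably uses trained language models rather than anything this crude. But as a purely hypothetical sketch, commitment detection could be as simple as a pattern scan over outgoing mail:

```python
import re

# Hypothetical commitment pattern: "I will/I'll <verb> ... by <deadline>".
# This is my own illustration, not Microsoft's implementation.
COMMITMENT = re.compile(
    r"\bI(?:'ll| will)\s+(send|finish|deliver)\s+(?:you\s+)?(.+?)\s+by\s+([\w: .]+)",
    re.IGNORECASE,
)

def extract_reminder(email_body: str):
    """Return a (task, deadline) pair if the text reads like a commitment."""
    match = COMMITMENT.search(email_body)
    if not match:
        return None
    verb, obj, deadline = match.groups()
    return (f"{verb} {obj}", deadline.strip())

print(extract_reminder("I will send you the project by 4:00 p.m."))
# → ('send the project', '4:00 p.m.')
```

Even this toy version makes the privacy question concrete: to work at all, the feature has to read the full body of every message, which is exactly what gives me pause.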
According to today’s Globe and Mail, an Ontario court is set to issue what could be a landmark ruling on a Charter of Rights challenge filed by two of Canada’s biggest wireless carriers over “tower dump” production orders that would have required the companies to turn over personal information of about 40,000 customers.
Since I’m a Rogers wireless customer, it’s comforting to know that these companies challenged 2014 production orders from Peel Regional Police requiring the two companies to provide communication records related to 21 cellular towers or sites. Rogers and Telus argued that complying with the orders would have resulted in the disclosure of customer name and address information for more than 9,000 Telus subscribers and more than 30,000 Rogers subscribers.
Rogers: “We want to ensure our customers’ privacy rights are protected and there are clear ground rules for what law enforcement is able to request and access… [our] policy is only to share customer information when required by law or in emergency situations. This case did not meet the test for us and we are hopeful the court agrees.” As am I.
The much-despised (by me, at least) vuvuzela is being put to interesting use by a group of researchers at MIT. This article discusses techniques being developed to hide the metadata that is normally included in email and messaging systems. As we know, metadata can give away a lot of information about the parties involved in an exchange, even if the content of the messages cannot be accessed. This new messaging system, called Vuvuzela, creates a lot of noise to bury the metadata, e.g.:
- Messages are stored on a server rather than sent directly to their recipients.
- The messages are released only in delayed rounds and not when each user requests them.
- The system generates a large amount of dummy or fake messages (the Vuvuzela effect), which makes it difficult to distinguish the “true” metadata from the “false.”
With all these mechanisms working, the researchers behind the project say that the only variables Vuvuzela reveals are “the total number of users engaged in a conversation, and the total number of users not engaged in one.” And even then, it doesn’t reveal which group a given user is part of. All of this is intended to obscure the metadata only; the servers also encrypt the message content, the same as any other encrypted chat system.
This system can cause some annoyances in the form of delays, and it’s not clear how the false messages would be managed. The software is in its infancy, but it’s an intriguing idea, and it raises the question (not new, of course) of the balance between the desire for privacy and the willingness to take the steps necessary to guard that privacy.