A “pay it back tax” on data brokers: a modest (and also politically untenable and impossibly naïve) policy proposal
I’ve just returned from the “Social, Cultural, and Ethical Dimensions of Big Data” event, held by the Data & Society Initiative (led by danah boyd), and spurred by the efforts of the White House Office of Science and Technology Policy to develop a comprehensive report on issues of privacy, discrimination, and rights around big data. And my head is buzzing. (Oh boy. Here he goes.) There must be something about me and workshops aimed at policy issues. Even though this event was designed to be wide-ranging and academic, I always get this sense of urgency or pressure that we should be working towards concrete policy recommendations. It’s something I rarely do in my scholarly work. (To its detriment, I’d say, wouldn’t you?) But I don’t tend to come up with reasonable, incremental, or politically viable policy recommendations anyway. I get frustrated that the range of possible interventions feels so narrow, that so many players must be left untouched, that so many underlying presumptions go unchallenged. I don’t want to suggest some progressive but narrow intervention, and in the process confirm and reify the way things are – though believe me, I admire the people who can do this. I long for there to be a robust vocabulary for saying what we want as a society and what we’re willing to change, reject, regulate, or transform to get it. (But at some point, if it’s too pie in the sky, it ceases being a policy recommendation, doesn’t it?) And this is especially true when it comes to daring to restrain commercial actors who are doing something that can be seen as publicly detrimental, but somehow have this presumed right to engage in this activity because they have the right to profit. I want to be able to say, in some instances, “sorry, no, this simply isn’t a thing you get to profit from.”
All that said, I’m going to propose a policy recommendation. (It’s going to be a politically unreasonable one, you watch.)
I find myself concerned about this hazy category of stakeholders that, at our event, were generally called “data brokers.” There are probably different kinds of data brokers that we might think about: companies that buy up and combine data about consumers; companies that scrape public data from wherever it is available and create troves of consumer profiles. I’m particularly troubled by the kind of companies that Kate Crawford discussed in her excellent editorial for Scientific American a few weeks ago — like Turnstyle, a company that has set up dummy wifi transponders in major cities to pick up all those little pings your smartphone gives off when it’s looking for networks. Turnstyle coordinates those pings into a profile of how you navigated the city (i.e. you and your phone walked down Broadway, spent twenty minutes in the bakery, then drove to the south side), then aggregates those navigation profiles into data about consumers and their movements through the city, and sells that data to marketers. (OK, that is particularly infuriating.) What defines this category for me is that data brokers do not gather data as part of a direct service they provide to those individuals. Instead they gather at a point once removed from the data subjects: purchasing the data gathered by others, scraping our public utterances or traces, or tracking the evidence of our activity that we give off. I don’t know that I can be much more specific than that, or that I’ve captured all the flavors, in part because I’ve only begun to think about them (oh good, then this is certain to be a well-informed suggestion!) and because they are a shadowy part of the data industry, relatively removed from consumers, with little need to advertise or maintain a particularly public profile.
I think these stakeholders are in a special category, in terms of policy, for a number of reasons. First, they are important to questions of privacy and discrimination in data, as they help to move data beyond the settings in which we authorized its collection and use. Second, they are outside of traditional regulations that are framed around specific industries and their data use (like HIPAA provisions that regulate hospitals and medical record keepers, but not data brokers who might nevertheless traffic in health data). Third, they’re a newly emergent part of the data ecosystem, so they were not contemplated in the development of older legislation. But most importantly, they are businesses that offer no social value to the individuals or society whose data is being gathered. (Uh oh.) In all of the more traditional instances in which data is collected about individuals, there is some social benefit or service presumed to be offered in exchange. The government conducts a census, but we authorized that, because it is essential to the provision of government services: proportional representation of elected officials, fair imposition of taxation, etc. Verizon collects data on us, but it does so as a fundamental element of the provision of telephone service. Facebook collects all of our traces, and while that data is immensely valuable in its own right and to advertisers, it is also an important component in providing their social media platform. I am by no means saying that there are no possible harms in such data arrangements (I should hope not), but at the very least, the collection of data comes with the provision of service, and there is a relationship (citizen, customer) that provides a legally structured and sanctioned space for challenging the use and misuse of that data: class action lawsuit, regulatory oversight, protest, or just switching to another phone company. (Have you tried switching phone companies lately?)
Some services that collect data have even voluntarily sought to do additional, socially progressive things with that data: Google looking for signs of flu outbreaks, Facebook partnering with researchers looking to encourage voting behavior, even OK Cupid giving us curious insights about the aggregate dating habits of its customers. (You just love infographics, don’t you.) But the third-party data broker who buys data from an e-commerce site I frequent, or scrapes my publicly available hospital discharge record, or grabs up the pings my phone emits as I walk through town, is building commercial value on my data while offering no value to me, my community, or society in exchange.
So what I propose is a “pay it back tax” on data brokers. (Huh?! Does such a thing exist, anywhere?) If a company collects, aggregates, or scrapes data on people, and does so not as part of a service back to those people (but is that distinction even a tenable one? who would decide and patrol which companies are subject to this requirement?), then they must grant access to their data, and 10% of their revenue, to non-profit, socially progressive uses of that data. This could mean they partner with a non-profit, providing it funds and access to data in order to conduct research. Or, they could make the data and dollars available as a research fund that non-profits and researchers could apply for. Or, as a nuclear option, they could avoid the financial requirement by providing an open API to their data. (I thought your concern about these brokers was that they aggravate the privacy problems of big data, but you’re making them spread that collected data further?) I think there could be valuable partnerships: Turnstyle’s data might be particularly useful for community organizations concerned about neighborhood flow or access for the disabled; health data could be used by researchers or activists concerned with discrimination in health insurance. There would need to be parameters for how that data was used and protected by the non-profits who received it, and perhaps an open access requirement for any published research or reports.
This may seem extreme. (I should say so. Does this mean any commercial entity in any industry that doesn’t provide a service to customers should face a similar tax?) Or, from another vantage point, it could be seen as quite reasonable: companies that collect their own data have to spend an overwhelming amount of their revenue providing whatever service it is that justifies this data collection; governments that collect data on us are in our service, and make no profit. This is merely 10%, plus the sharing of a valuable resource. (No, it still seems extreme.) And, if I were aiming more squarely at the concerns about privacy, I’d be tempted to say that data aggregation and scraping could simply be outlawed. (Somebody stop him!) In my mind, it at the very least reasserts the idea that collecting data on individuals, and using that data as a primary resource from which to make a profit, must, on balance, provide some service in return, be it customer service, social service, or public benefit.
This is cross-posted at Culture Digitally.
This was an incredible, overwhelming year for internship applications. We had well over 200 PhD students apply, and we were deeply impressed by the quality of suggested projects. Thanks to everyone for your submissions. Here are the four people who will be joining us over the summer – congratulations to you all. We’re looking forward to working with you!
Tressie McMillan Cottom is a Ph.D. candidate in the Sociology Department at Emory University in Atlanta, GA. Broadly, Tressie studies organizations, inequality, and education. Her doctoral research is a comparative study of the expansion of for-profit colleges (like the University of Phoenix) in the 1990s. She will be working with Kate, Mary and Nancy this summer on a project about hashtag activist groups on Twitter and their ties to institutional power.
Luke Stark is a PhD student in the Department of Media, Culture, and Communication at New York University under the supervision of Helen Nissenbaum. His dissertation research focuses on the history and philosophy of digital media technologies, and their use in tracking, monitoring and shaping the everyday emotional lives and experiences of users. This summer he will be working with Kate on epistemologies of big data, privacy, and computational culture.
Katrin Tiidenberg is a Ph.D. candidate at the Institute of International and Social Studies at Tallinn University in Estonia. Her dissertation is about online experience and identity in the context of NSFW blogs on Tumblr. She will be working this summer with Nancy on a project about selfies, power and shame.
Kathryn Zyskowski is a Ph.D. Student in the Department of Anthropology at the University of Washington and an Editorial Intern at the Journal of the Society for Cultural Anthropology. Her doctoral work examines identity, representation, and Muslim/Hindu relations in South India. This summer, she will work with Mary studying how people doing crowdsourced work in India and the United States use online discussion forums to organize their work and structure their identities as workers in specific locations.
(Reblogged from jessalingel.tumblr.com)
It’s the last day of the iConference, and I’m just leaving an awesome, much-needed discussion of social justice issues related to library and information science. It’s always affirming to see people in my field who care about social justice exchanging ideas, frustrations, success stories, failure stories, and advice. Here are some brief notes from the discussion. Many of these examples focus on teaching and academic life, but there are ways to reposition them towards other contexts.
+Discomfort is okay. Nicole Cooke pointed out that it’s actually productive and useful to generate moments of discomfort in class – I really appreciate this point as a reminder that as tempting as it is to shy away from moments of social awkwardness that come from identifying gaps in privilege, it can also be an important opportunity to reshape assumptions.
+When it comes to convincing administrators and senior faculty that this work matters, we need allies who are higher-ups, and money talks. The members of the panel were from GSLIS, the iSchool at Illinois, and they noted the importance of having champions in their program. Also, having received a grant to work on diversity and inclusion lends a degree of legitimacy to the politics of challenging heteronormativity.
+Even if we’re making our classes full of theories of power, students self-select for classes specifically geared towards issues of race, class, and gender, so how do we get issues of social justice into the curriculum as a whole? Some inventive ideas included course releases for faculty to partner with existing classes and integrate critical theory and social justice into coursework, and a clearer articulation of how these efforts fit into the category of service. Another idea is building momentum with interdisciplinary feminist efforts, like Laura Portwood-Stacer’s work to generate conversations among feminists working on social media at a range of communication and HCI conferences.
+It’s important to think about the examples we use in class. Diversifying them is an easy thing to bring up with colleagues, and a way of talking about diversity that can be fairly easily integrated into the classroom. (Shout out to Emily Knox for making this point.)
Organized as self-defense forces, some residents of the Mexican state of Michoacán have been attempting to regain control of their towns from powerful organized criminals. Although these Mexican militias have received a fair amount of media coverage, their fascinating social media presence has not been examined. Saiph Savage, a grad student at UNAM/UCSB, and I have started to collect some data, and wanted to share some initial observations of one of the militias’ online spaces: Valor por Michoacán, a Facebook page with more than 130,000 followers devoted to documenting the activities of the self-defense militia groups in their fight against the Knights Templar Cartel. We contrast this page with a similar one from a different state: Valor por Tamaulipas, which has helped residents of that state cope with Drug War-related violence.
I’m thrilled to announce that our anthology, Media Technologies: Essays on Communication, Materiality, and Society, edited by myself with Pablo Boczkowski and Kirsten Foot, is now officially available from MIT Press. Contributors include Geoffrey Bowker, Finn Brunton, Gabriella Coleman, Gregory Downey, Steven Jackson, Christopher Kelty, Leah Lievrouw, Sonia Livingstone, Ignacio Siles, Jonathan Sterne, Lucy Suchman, and Fred Turner. We’ve secured permission to share the introduction with you. A blurb:
In recent years, scholarship around media technologies has finally shed the presumption that technologies are separate from and powerfully determining of social life, seeing them instead as produced by and embedded in distinct social, cultural, and political practices – and as socially significant because of that. This has been helped along by a productive intersection between work in science and technology studies (STS) interested in information technologies as complex sociomaterial phenomena, and work in communication and media studies attuned to the symbolic and public dimensions of these tools.
In this volume, scholars from both fields come together to provide some conceptual paths forward for future scholarship. The collection comprises two sets of essays and commentaries: the first addresses the relationship between materiality and mediation, considering such topics as the lived realities of network infrastructure. The second highlights media technologies as fragile and malleable, held together through the minute, unobserved work of many, including efforts to keep these technologies alive.
Please feel free to circulate this introduction to others, and write back to us with your thoughts, criticisms, and ideas. We hope this volume helps anchor the exciting conversations we see happening in the field, and serves as a launchpad for future scholarship.
(or, Social Media circa 1994)
(or, Happy 20th Birthday, My Home Page!)
Thanks to the rigorous use of backups, I’ve just noticed that it is the twentieth anniversary of my personal home page. In the spirit of commemoration, I’ve uploaded the original version (c. 1994). For reasons I don’t remember now, I named it “booger.html.” A screenshot:
I stumbled upon this file while looking through my backups for something else. I also found all kinds of other interesting stuff. For example, I found my personal list of “hotlinks” (as we called them then).
It’s very hard to reconstruct what the Web was like then. The Internet Archive had not begun operation yet. All of my old links to things are now dead, but it’s still interesting to try to remember how we were social with computers. Yes, there were “social media.” I’ll explain:
- Apparently I was in a Webring.
- I found my PGP Public Key. (No idea where the private key is.) I made my PGP public key available so people could send me a PGP-encrypted message at any time. However, in ten years, no one ever sent me a PGP-encrypted message. But I was ready. (Take that, NSA.) As long as I could find my PGP private key and remember the password from ten years ago, that is.
- My preferred search engine was Web Crawler.
- Later in the year I was very excited about Hot Wired, the first commercial magazine on the Web (an online version of Wired Magazine). It had its own URL then, which still works: http://www.hotwired.com Everything was prefaced with “hot” back then. That is a hotlink to HotWired.
- I spent a lot of time doing ytalk with my friends. Screenshot (found on the Internet — not mine):
- I exhorted people to look me up on whois and to “finger me.” I regularly updated my .plan and .project files, which were status updates. Yes, Mark Zuckerberg basically ripped off the finger protocol from 1971, then added a facility to help Harvard men look at Harvard women (the “Facebook”) and “poke” them. Great job. Here’s an example finger query (not mine, found on the Web):
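For anyone who never got to “finger” someone: the protocol itself (RFC 1288) is almost comically simple. A client opens a TCP connection to port 79, sends a username followed by CRLF, and reads the reply until the server hangs up. Here is a minimal sketch in Python; the function names are mine, and working public finger servers are nearly extinct, so treat it as an illustration rather than a usable tool:

```python
import socket

def build_finger_query(user):
    """RFC 1288 request: the username (possibly empty) plus CRLF."""
    return user.encode("ascii") + b"\r\n"

def finger(user, host, port=79, timeout=5):
    """Minimal finger client: send the query, then read until the
    server closes the connection, which ends the .plan/.project dump."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_finger_query(user))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```

An empty username was also legal: it asked the server to list everyone currently logged in, which is about as 1994-social as it gets.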
A lot of being on the Web in 1994 seems to be about just being on the Web at all. For instance:
- I used the HotDog Web Editor for my HTML, apparently because the logo was so cool. (I don’t think I used it for my first Web page, booger.html, though, because the HTML is terrible.)
- I appear to have been on an obsessive search for new “icons.” I bookmarked a bunch of icon sharing sites, all now defunct.
- I was very interested in how to interlace GIFs.
- Does anyone else remember Carlos’s Forms Tutorial at NCSA? I spent a huge amount of time there and looking at the CGI documentation on a server named hoohoo (the link is a capture from 1996). I spent so much time on it that I memorized the URL, and we didn’t believe in short URLs then. UIUC loomed large in my imagination purely because of its Web stuff. Little did I know I would go on to work there and genuflect at the monument to the Web Browser every single day.
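As a footnote on the interlaced GIFs above: interlacing is just a fixed four-pass reordering of the image rows defined in the GIF89a spec, which is why an interlaced image seemed to fade in as it crawled down your modem line. A quick Python sketch of the row order (the function name is mine):

```python
def gif_interlace_order(height):
    """Return the order in which a GIF decoder receives the rows of an
    interlaced image, per the GIF89a four-pass scheme."""
    order = []
    order += range(0, height, 8)  # Pass 1: every 8th row, from row 0
    order += range(4, height, 8)  # Pass 2: every 8th row, from row 4
    order += range(2, height, 4)  # Pass 3: every 4th row, from row 2
    order += range(1, height, 2)  # Pass 4: every 2nd row, from row 1
    return order
```

On a slow connection, pass 1 alone gives you a blurry one-eighth preview of the whole image, which in 1994 felt like magic.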
The ephemera above remind me that the Web was so exciting that a friend went to the DMV and got the California personalized license plate “IDOWWW”. I thought this might be the coolest thing anyone had ever done. In fact, I still think it is.
It’s hard to believe twenty years have passed since booger.html. I want to keep the nostalgia going. Does anyone else remember anything about social media in 1994?
(or, Are We Social Insects?)
I worried that my last blog post was too short and intellectually ineffectual. But given the positive feedback I’ve received, my true calling may be to write top ten lists of other people’s ideas, based on conferences I attend. So here is another list like that.
These are my notes from my attendance at “Algorithmic Culture,” an event in the University of Michigan’s Digital Currents program. It featured a lecture by the amazing Ted Striphas. These notes also reflect discussion after the talk that included Megan Sapnar Ankerson, Mark Ackerman, John Cheney-Lippold and other people I didn’t write down.
Ted has made his work on historicizing the emergence of an “algorithmic culture” (Alex Galloway‘s term) available widely already, so my role here is really just to point at it and say: “Look!” (Then applaud.)
If you’re not familiar with this general topic area (“algorithmic culture”) see Tarleton Gillespie’s recent introduction The Relevance of Algorithms and then maybe my own writing posse’s Re-Centering the Algorithm. OK here we go:
Eight Questions About Algorithms and Culture
- Are algorithms centralizing? Algorithms, born from ideas of decentralized control and cybernetics, were once seen as basically anti-hierarchical. Fifty years ago we searched for algorithms in nature and found them decentralized — today engineers write them and we find them centralizing.
- OR, are algorithms fundamentally democratic? Even if Google and Facebook have centralized the logic, they claim “democracy!” because we provide the data. YouTube has no need of kings. The LOLcats and fail videos are there by our collective will.
- Many of today’s ideas about algorithms and culture can be traced to earlier ideas about social insects. Entomologists once claimed that termites “failed to evolve” because their algorithms, based in biology, were too inflexible. How do our algorithms work? Too inflexible? (And does this mean we are social insects?)
- The specific word “algorithm” is a recent phenomenon, but the idea behind it is not new. (Consider: plan, recipe, procedure, script, program, function, …) But do we think about these ideas differently now? If so, maybe it is who looks at them and where they look. In early algorithmic thinking people were the logic and housed the procedure. Now computers house the procedure and people are the operands.
- Can “algorithmic culture” be countercultural? Fred Turner and John Markoff have traced the links between the counterculture and computing. Striphas argued that counterculture-like influences on what would become modern computing came much earlier than the 60s: consider the influence of WWII and The Holocaust. For example, Talcott Parsons saw culture through the lens of anti-authoritarianism. He also saw culture as the opposite of state power. Is culture fundamentally anti-state? This also leads me to ask: Is everything always actually about Hitler in the end?
- Today, the computer science definition of “algorithm” is similar to anthropologist Clifford Geertz’s definition of culture in 1970s — that is, a recipe, plan, etc. Why is this? Is this significant?
- Is Reddit the conceptual anti-Facebook? Reddit publicly discloses the algorithm that it uses to sort itself. There have been calls for Facebook algorithm transparency on normative grounds. What are the consequences of Reddit’s disclosure, if any? As Reddit’s algorithm is not driven by Facebook’s business model, does that mean these two social media platform sorting algorithms are mathematically (or more properly, procedurally) opposed?
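For context on that disclosure: the “hot” function in Reddit’s open-sourced ranking code has been widely quoted and picked apart. A Python paraphrase follows; it is simplified from the published version and may well be stale relative to whatever Reddit actually runs today:

```python
from datetime import datetime, timezone
from math import log10

# The constant epoch that appears in Reddit's open-sourced ranking code.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups, downs, date):
    """Approximate Reddit 'hot' score: vote margin counts
    logarithmically, while recency counts linearly, so a newer post
    needs exponentially fewer votes to outrank an older one."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (date - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

The interesting design choice, for the transparency debate, is that everything here is public arithmetic: no personalization, no advertiser signal, just votes and a clock.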
- Are algorithms fundamentally about homeostasis? (That’s the idea, prevalent in cybernetics and 1950s social science, that the systems being described are stable.) In other words, when algorithms are used today is there an implicit drive toward stability, equilibrium, or some other similar implied goal or similar standard of beauty for a system?
Whew, I’m done. What a great event!
I’m skeptical about that last point (algorithms = homeostasis) but the question reminds me of “The Use and Abuse of Vegetational Concepts,” part 2 of the 2011 BBC documentary/insane-music-video by Adam Curtis titled All Watched Over by Machines of Loving Grace. It is a favorite of mine. Although I think many of the implied claims are not true, it’s worth watching for the soundtrack and jump cuts alone.
It’s all about cybernetics and homeostasis. I’ll conclude with it… “THIS IS A STORY ABOUT THE RISE OF THE MACHINES”:
Some of us also had an interesting side conversation about what job would be the “least algorithmic.” Presumably something that was not repeatable — it differs each time it is performed. Some form of performance art? This conversation led us to think that everything is actually algorithmic.