SMC media roundup

This is a collection of some of our researchers’ quotes, mentions, or writings in mainstream media. Topics include Facebook’s supposedly neutral community standards, sharing-economy workers uniting to protest, living under surveillance, and relational labor in music.

Tarleton Gillespie in the Washington Post –> The Big Myth Facebook needs everyone to believe

And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research in New England, that the company advances a specific moral framework — one that is less of the world than of the United States, and less of the United States than of Silicon Valley.

Mary Gray in The New York Times –> Uber drivers and others in the gig economy take a stand

“There’s a sense of workplace identity and group consciousness despite the insistence from many of these platforms that they are simply open ‘marketplaces’ or ‘malls’ for digital labor,” said Mary L. Gray, a researcher at Microsoft Research and professor in the Media School at Indiana University who studies gig economy workers.

Kate Crawford’s (and others’) collaboration with Laura Poitras (Academy Award-winning documentary film director and privacy advocate) on a book about living under surveillance, covered in Boing Boing.

Poitras has a show on at NYC’s Whitney Museum, Astro Noise, that is accompanied by a book in which Poitras exposes, for the first time, her intimate notes on her life in the targeting reticule of the US government at its most petty and vengeful. The book includes accompanying work by Ai Weiwei, Edward Snowden, Dave Eggers, former Guantanamo Bay detainee Lakhdar Boumediene, Kate Crawford and Cory Doctorow.

(More on the upcoming book and Whitney Museum event in Wired)

Canadian Songwriter’s Association interview with Nancy Baym –> Sound Advice: How to use social media in 2016

When discussing the use of social media by songwriters, Baym prefers to present a big-picture view rather than focusing on a “Top Ten Tips” approach, or on one platform or means of engagement. Practicality is key: “I’d love for 2016 to be the year of people getting realistic about what social media can and can’t do for you, of understanding that it’s a mode of relationship building, not a mode of broadcast,” says Baym.

A “pay it back tax” on data brokers: a modest (and also politically untenable and impossibly naïve) policy proposal

I’ve just returned from the “Social, Cultural, and Ethical Dimensions of Big Data” event, held by the Data & Society Initiative (led by danah boyd), and spurred by the efforts of the White House Office of Science and Technology Policy to develop a comprehensive report on issues of privacy, discrimination, and rights around big data. And my head is buzzing. (Oh boy. Here he goes.) There must be something about me and workshops aimed at policy issues. Even though this event was designed to be wide-ranging and academic, I always get this sense of urgency or pressure that we should be working towards concrete policy recommendations. It’s something I rarely do in my scholarly work (to its detriment, I’d say, wouldn’t you?). But I don’t tend to come up with reasonable, incremental, or politically viable policy recommendations anyway. I get frustrated that the range of possible interventions feels so narrow, that so many players must be left untouched, so many underlying presumptions left unchallenged. I don’t want to suggest some progressive but narrow intervention, and in the process confirm and reify the way things are – though believe me, I admire the people who can do this. I long for there to be a robust vocabulary for saying what we want as a society and what we’re willing to change, reject, regulate, or transform to get it. (But at some point, if it’s too pie in the sky, it ceases being a policy recommendation, doesn’t it?) And this is especially true when it comes to daring to restrain commercial actors who are doing something that can be seen as publicly detrimental, but who somehow have this presumed right to engage in this activity because they have the right to profit. I want to be able to say, in some instances, “sorry, no, this simply isn’t a thing you get to profit on.”

All that said, I’m going to propose a policy recommendation. (It’s going to be a politically unreasonable one, you watch.)

I find myself concerned about this hazy category of stakeholders that, at our event, were generally called “data brokers.” There are probably different kinds of data brokers that we might think about: companies that buy up and combine data about consumers; companies that scrape public data from wherever it is available and create troves of consumer profiles. I’m particularly troubled by the kind of companies that Kate Crawford discussed in her excellent editorial for Scientific American a few weeks ago — like Turnstyle, a company that has set up dummy wifi transponders in major cities to pick up all those little pings your smartphone gives off when it’s looking for networks. Turnstyle coordinates those pings into a profile of how you navigated the city (e.g., you and your phone walked down Broadway, spent twenty minutes in the bakery, then drove to the south side), then aggregates those navigation profiles into data about consumers and their movements through the city and sells them to marketers. (OK, that is particularly infuriating.) What defines this category for me is that data brokers do not gather data as part of a direct service they provide to those individuals. Instead they gather at a point once removed from the data subjects: purchasing the data gathered by others, scraping our public utterances or traces, or tracking the evidence of our activity we give off. I don’t know that I can be much more specific than that, or that I’ve captured all the flavors, in part because I’ve only begun to think about them (oh good, then this is certain to be a well-informed suggestion!) and because they are a shadowy part of the data industry, relatively removed from consumers, with little need to advertise or maintain a particularly public profile.
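
To make the mechanics concrete, here is a minimal sketch, in Python, of how probe-request pings of the kind Turnstyle reportedly collects could be stitched into per-device movement profiles. The sensor names, device identifiers, and data layout are hypothetical illustrations, not Turnstyle’s actual system.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical probe-request log: (device MAC address, sensor location, timestamp).
# A phone emits pings like these whenever it scans for wifi networks, whether or
# not it ever connects; no interaction with the phone's owner is required.
pings = [
    ("aa:bb:cc:11:22:33", "Broadway & 4th", "2014-03-17 09:02"),
    ("aa:bb:cc:11:22:33", "Main St Bakery", "2014-03-17 09:15"),
    ("aa:bb:cc:11:22:33", "Main St Bakery", "2014-03-17 09:35"),
    ("aa:bb:cc:11:22:33", "South Side Mall", "2014-03-17 10:10"),
]

def build_profiles(ping_log):
    """Group pings by device and order them in time: a per-phone movement trace."""
    profiles = defaultdict(list)
    for mac, place, ts in ping_log:
        profiles[mac].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), place))
    for trace in profiles.values():
        trace.sort()
    return profiles

for mac, trace in build_profiles(pings).items():
    print(mac, "->", " -> ".join(place for _, place in trace))
```

The point of the sketch is simply that the profile is assembled entirely from signals the device gives off; the data subject never authorizes, or even notices, the collection.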

I think these stakeholders are in a special category, in terms of policy, for a number of reasons. First, they are important to questions of privacy and discrimination in data, as they help to move data beyond the settings in which we authorized its collection and use. Second, they are outside of traditional regulations that are framed around specific industries and their data use (like HIPAA provisions that regulate hospitals and medical record keepers, but not data brokers who might nevertheless traffic in health data). Third, they’re a newly emergent part of the data ecosystem, so they have not been thought about in the development of older legislation. But most importantly, they are a business that offers no social value to the individual or society whose data is being gathered. (Uh oh.) In all of the more traditional instances in which data is collected about individuals, there is some social benefit or service presumed to be offered in exchange. The government conducts a census, but we authorized that, because it is essential to the provision of government services: proportional representation of elected officials, fair imposition of taxation, etc. Verizon collects data on us, but they do so as a fundamental element of the provision of telephone service. Facebook collects all of our traces, and while that data is immensely valuable in its own right and to advertisers, it is also an important component in providing their social media platform. I am by no means saying that there are no possible harms in such data arrangements (I should hope not), but at the very least, the collection of data comes with the provision of service, and there is a relationship (citizen, customer) that provides a legally structured and sanctioned space for challenging the use and misuse of that data — class action lawsuit, regulatory oversight, protest, or just switching to another phone company. (Have you tried switching phone companies lately?) Some services that collect data have even voluntarily sought to do additional, socially progressive things with that data: Google looking for signs of flu outbreaks, Facebook partnering with researchers looking to encourage voting behavior, even OkCupid giving us curious insights about the aggregate dating habits of their customers. (You just love infographics, don’t you.) But the third-party data broker who buys data from an e-commerce site I frequent, or scrapes my publicly available hospital discharge record, or grabs up the pings my phone emits as I walk through town, they are building commercial value on my data but offer no value to me, my community, or society in exchange.

So what I propose is a “pay it back tax” on data brokers. (Huh?! Does such a thing exist, anywhere?) If a company collects, aggregates, or scrapes data on people, and does so not as part of a service back to those people (but is that distinction even a tenable one? who would decide and patrol which companies are subject to this requirement?), then they must grant access to their data and dedicate 10% of their revenue to non-profit, socially progressive uses of that data. This could mean partnering with a non-profit and providing it with funds and access to data to conduct research. Or, they could make the data and dollars available as a research fund that non-profits and researchers could apply for. Or, as a nuclear option, they could avoid the financial requirement by providing an open API to their data. (I thought your concern about these brokers is that they aggravate the privacy problems of big data, but you’re making them spread that collected data further?) I think there could be valuable partnerships: Turnstyle’s data might be particularly useful for community organizations concerned about neighborhood flow or access for the disabled; health data could be used by researchers or activists concerned with discrimination in health insurance. There would need to be parameters for how that data was used and protected by the non-profits who received it, and perhaps an open access requirement for any published research or reports.

This may seem extreme. (I should say so. Does this mean any commercial entity in any industry that doesn’t provide a service to customers should get a similar tax?) Or, from another vantage point, it could be seen as quite reasonable: companies that collect data on their own have to spend an overwhelming amount of their revenue providing whatever service they do that justifies this data collection; governments that collect data on us are in our service, and make no profit. This is merely 10%, plus sharing their valuable resource. (No, it still seems extreme.) And, if I were aiming more squarely at the concerns about privacy, I’d be tempted to say that data aggregation and scraping could simply be outlawed. (Somebody stop him!) In my mind, it at the very least reasserts the idea that collecting data on individuals and using that as a primary resource upon which to make profit must, on balance, provide some service in return, be it customer service, social service, or public benefit.

This is cross-posted at Culture Digitally.

Lectio Precursoria: Interpersonal Boundary Regulation in the Context of Social Network Services

Interpersonal boundary regulation consists of the efforts needed to make the world work, that is, the efforts people undertake to achieve contextually desirable degrees of social interaction and to build and sustain their relations with others and with the self. In my dissertation, I examined the topic in the context of social network services.

I defended the work last week at the University of Helsinki, with Assistant Professor Lorraine Kisselburgh from Purdue University as my opponent. Below, you can find an adapted version of the talk, the lectio precursoria, that I gave as a part of the public examination. If you are curious to take a look at the dissertation itself, a digital version is freely available online.

Madam Opponent, Madam Custos, Ladies and Gentlemen,

In the last decade, social network services have grown to play important roles in the everyday life of millions of people. While this new year is only about to begin, chances are many of you have already visited a social network service, such as Facebook, during its first days. Most likely even earlier today. And, to be honest, I would not be surprised if some of you accessed one during this talk, too.

Continue reading “Lectio Precursoria: Interpersonal Boundary Regulation in the Context of Social Network Services”

Keeping Teens ‘Private’ on Facebook Won’t Protect Them

(Originally written for TIME Magazine)

We’re afraid of and afraid for teenagers. And nothing brings out this dualism more than discussions of how and when teens should be allowed to participate in public life.

Last week, Facebook made changes to teens’ content-sharing options. They introduced the opportunity for those ages 13 to 17 to share their updates and images with everyone, not just with their friends. Until this change, teens could not post their content publicly even though adults could. When minors choose to make their content public, they are given a notice and a reminder to make it very clear to them that this material will be shared publicly. “Public” is never the default for teens; they must choose to make their content public, and they must affirm that this is what they intended at the point at which they choose to publish.

Representatives of parenting organizations have responded to this change negatively, arguing that this puts children more at risk. And even though the Pew Internet & American Life Project has found that teens are quite attentive to their privacy, and many other popular sites allow teens to post publicly (e.g. Twitter, YouTube, Tumblr), privacy advocates are arguing that Facebook’s decision to give teens choices suggests that the company is undermining teens’ privacy.

But why should youth not be allowed to participate in public life? Do paternalistic, age-specific technology barriers really protect or benefit teens?

One of the most crucial aspects of coming of age is learning how to navigate public life. The teenage years are precisely when people transition from being a child to being an adult. There is no magic serum that teens can drink on their 18th birthday to immediately mature and understand the world around them. Instead, adolescents must be exposed to — and allowed to participate in — public life while surrounded by adults who can help them navigate complex situations with grace. They must learn to be a part of society, and to do so, they must be allowed to participate.

Most teens no longer see Facebook as a private place. They befriend anyone they’ve ever met, from summer-camp pals to coaches at universities they wish to attend. Yet because Facebook hasn’t, until now, allowed youth to contribute to public discourse through the site, there’s an assumption that the site is more private than it is. Facebook’s decision to allow teens to participate in public isn’t about suddenly exposing youth; it’s about giving them an option to treat the site as being as public as it often is in practice.

Rather than trying to protect teens from all fears and risks that we can imagine, let’s instead imagine ways of integrating them constructively into public life. The key to doing so is not to create technologies that reinforce limitations but to provide teens and parents with the mechanisms and information needed to make healthy decisions. Some young people may be ready to start navigating broad audiences at 13; others are not ready until they are much older. But it should not be up to technology companies to determine when teens are old enough to have their voices heard publicly. Parents should be allowed to work with their children to help them navigate public spaces as they see fit. And all of us should be working hard to inform our younger citizens about the responsibilities and challenges of being a part of public life. I commend Facebook for giving teens the option and working hard to inform them of the significance of their choices.


eyes on the street or creepy surveillance?

This summer, with NSA scandal after NSA scandal, the public has (thankfully) started to wake up to issues of privacy, surveillance, and monitoring. We are living in a data world and there are serious questions to ask and contend with. But part of what makes this data world messy is that it’s not as simple as saying that all monitoring is always bad. Over the last week, I’ve been asked by a bunch of folks to comment on the report that a California school district hired an online monitoring firm to watch its students. This is a great example of a situation that is complicated.

The media coverage focuses on how the posts that they are monitoring are public, suggesting that this excuses their actions because “no privacy is violated.” We should all know by now that this is a terrible justification. Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences nor does it mean that the onlooker understands what they see. (Alice Marwick and I discuss youth privacy dynamics in detail in “Social Privacy in Networked Publics”.) But I want to caution against jumping to the opposite conclusion because these cases aren’t as simple as they might seem.

Consider Tess’ story. In 2007, she and her friend killed her mother. The media reported it as “girl with MySpace kills mother,” so I decided to investigate the case. For 1.5 years, she had documented on a public MySpace page her struggles with her mother’s alcoholism and abuse, her attempts to run away, and her efforts to seek help. When I reached out to her friends after she was arrested, I learned that they had reported their concerns to the school but no one did anything. Later, I learned that the school didn’t investigate because MySpace was blocked on campus so they couldn’t see what she had posted. And although the school had notified social services out of concern, they didn’t have enough evidence to move forward. What became clear in this incident – and many others that I tracked – is that there are plenty of youth crying out for help online on a daily basis. Youth who could really benefit from the fact that their material is visible and someone is paying attention.

Many youth cry out for help through social media. Publicly, often very publicly. Sometimes for an intended audience. Sometimes as a call to the wind for anyone who might be paying attention. I’ve read far too many suicide notes and abuse stories to believe that privacy is the only viable frame here. One of the most heartbreaking was from a girl who was commercially sexually exploited by her middle-class father. She had gone to her school, which had helped her go to the police; the police refused to help. She published every detail on Twitter about exactly what he had done to her and all of the people who failed to help her. The next day she died by suicide. In my research, I’ve run across too many troubled youth to count. I’ve spent many a long night trying to help teens I encounter connect with services that can help them.

So here’s the question that underlies any discussion of monitoring: how do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies that we have to protect them rather than focusing on punishing them?  We shouldn’t ignore youth who are using social media to voice their pain in the hopes that someone who cares might stumble across their pleas.

Urban theorist Jane Jacobs used to argue that the safest societies are those where there are “eyes on the street.” What she meant by this was that healthy communities looked out for each other, were attentive to when others were hurting, and were generally present when things went haywire. How do we create eyes on the digital street? How do we do so in a way that’s not creepy?  When is proactive monitoring valuable for making a difference in teens’ lives?  How do we make sure that these same tools aren’t abused for more malicious purposes?

What matters is who is doing the looking and for what purposes. When the looking is done by police, the frame is punitive. But when the looking is done by caring, concerned, compassionate people – even authority figures like social workers – the outcome can be quite different. However well-intended law enforcement may be, its role is to uphold the law, and people perceive its presence as oppressive even when it is trying to help. And, sadly, when law enforcement is involved, it’s all too likely that someone will find something wrong. And then we end up with the kinds of surveillance that punish.

If there’s infrastructure put into place for people to look out for youth who are in deep trouble, I’m all for it. But the intention behind the looking matters the most. When you’re looking for kids who are in trouble in order to help them, you look for cries for help that are public. If you’re looking to punish, you’ll misinterpret content, take what’s intended to be private and publicly punish, and otherwise abuse youth in a new way.

Unfortunately, what worries me is that systems that are put into place to help often get used to punish. There is often a slippery slope, even when the designers and implementers never intended the system to be used that way. But once it’s there….

So here’s my question to you. How can we leverage technology to provide an additional safety net for youth who are struggling without causing undue harm? We need to create a society where people are willing to check in on each other without abusing the power of visibility. We need more eyes on the street in the Jacobsian sense, not in the surveillance-state sense. Finding this balance won’t be easy but I think that it behooves us to not jump to extremes. So what’s the path forward?

(I discuss this issue in more detail in my upcoming book “It’s Complicated: The Social Lives of Networked Teens.”  You can pre-order the book now!)

Data Dealer is Disastrous

(or, Unfortunately, Algorithms Sound Boring.)

Finally, a video game where you get to act like a database!

This morning, the print version of the New York Times profiled the Kickstarter-funded game “Data Dealer.” The game is a browser-based, single-player, farming-style clicker with the premise that the player “turns data into cash” by playing the role of a behind-the-scenes data aggregator, probably modeled on a real company like Acxiom.

Currently there is only a demo, but the developers have big future ambitions, including a multi-player version.  Here’s a screen shot:

Data Dealer screen shot.

One reason Data Dealer is receiving a lot of attention is that there really isn’t anything else like it. It reminds me of the ACLU’s acclaimed “Ordering Pizza” video (now quite old) which vividly envisioned a dystopian future of totally integrated personal data through the lens of placing orders for pizza. The ACLU video shows you the user interface for a hypothetical software platform built to allow the person who answers the phone at an all-knowing pizza parlor to enter your order. 

(In the video, a caller tries to order a “double meat special” and is told that there will be an additional charge because of his high-blood pressure and high cholesterol. He complains about the high price and is told, “But you just bought those tickets to Hawaii!”)

The ACLU video is great because it uses a silly hook to get across some very important societal issues about privacy. It makes a topic that seems very boring — data protection and the risks involved in the interconnection of databases — vivid and accessible. As a teacher working with these issues, I still find the video useful today. Although it looks like the pizza ordering computer is running Windows 95.

Data Dealer has the same promise, but they’ve made some unusual choices. The ACLU’s goal was clearly public education about legal issues, and I think that the group behind Data Dealer has a similar goal. On their Kickstarter profile they describe themselves as “data rights advocates.”

Yet some of the choices made in the game design seem indefensible, as they might create awareness about data issues but they do so by promulgating misguided ideas about how data surveillance actually works. I found myself wondering: is it worth raising public awareness of these issues if they are presented in a way that is so distorted?

Playing as a data aggregator, you find that the chief antagonist in the demo is public opinion. While that would clearly be an antagonist for a company like Acxiom, there are real risks to data aggregation that involve quantifiable losses. Data protection laws don’t exist solely because people are squeamish.

By focusing on public opinion, the message I am left with isn’t that privacy is really important, it is that “some people like it.” Those darn privacy advocates sure are fussy! (They periodically appear, angrily, in a pop-up window.) This seems like a much weaker argument than “data rights advocates” should be making. It even feels like the makers of Data Dealer are trying to demean themselves!  But maybe this was meant to be self-effacing.

I commend Data Dealer for grappling with one of the hardest problems that currently exists in the study of the social implications of computing: how to visualize things like algorithms and databases comprehensibly. In the game, your database is cleverly visualized as a vaguely vacuum-cleaner-like object. Your network is a kind of octopus-like shape. Great stuff!

However, some of the meatiest parts of the corporate data surveillance infrastructure go unmentioned, or are at least greatly underemphasized. How about… credit cards? Browser cookies? Other things are bizarrely over-emphasized relative to the actual data surveillance ecology: celebrity endorsements, online personality tests, and poster ad campaigns.

Algorithms are not covered at all (unless you count the “import” button that automatically “integrates” different profiles into your database). That’s a big loss, as the model of the game implies that things like political views are existing attributes that can be harvested by (for instance) monitoring what books you buy at a bookstore. In this model, the bookstores already hold your political views, and you have to buy them from there. That’s not AT ALL how political views are inferred by data mining companies, and this gameplay model falsely creates the idea that my political views remain private if I avoid loyalty cards in bookstores.
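
The distinction at stake is between reading a stored attribute out of someone’s records and inferring it statistically from correlated behavior. Here is a minimal sketch of the latter; the purchase categories, weights, and labels are invented for illustration and are not how any particular data mining company actually scores people.

```python
# Toy inference: manufacture a "likely political lean" label from purchase
# behavior. No stored "political views" field is ever read or bought.
PURCHASE_SIGNALS = {
    # purchase category -> (weight toward lean A, weight toward lean B)
    "hunting_gear":        (0.8, 0.2),
    "organic_groceries":   (0.3, 0.7),
    "political_biography": (0.5, 0.5),
    "electric_car_parts":  (0.2, 0.8),
}

def infer_lean(purchases):
    """Score a purchase history against two hypothetical labels and return
    the more likely one along with a crude confidence estimate."""
    score_a = score_b = 0.0
    for item in purchases:
        a, b = PURCHASE_SIGNALS.get(item, (0.5, 0.5))
        score_a += a
        score_b += b
    total = score_a + score_b
    if total == 0:
        return ("unknown", 0.0)
    return ("A", score_a / total) if score_a >= score_b else ("B", score_b / total)

# The label is inferred from correlations, not harvested from a bookstore.
print(infer_lean(["organic_groceries", "electric_car_parts", "political_biography"]))
```

Avoiding a loyalty card at the bookstore does nothing against this kind of inference, which is exactly the point the game’s harvesting model obscures.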

A variety of the causal claims made in the game just don’t work in real life. A health insurance company’s best source for private health information about you is not mining online dating profiles for your stated weight. By emphasizing these unlikely paths for private data disclosure, the game obscures the real process and seems to be teaching those concerned about privacy to take useless and irrelevant precautions.

The crucial missing link is the absence of any depiction of the combination of disparate data to produce new insights or situations. That’s the topic the ACLU video tackles head-on. Although the game developers know that this is important (integration is what your vacuum-cleaner is supposed to be doing), that process doesn’t exist as part of the gameplay. Data aggregation in the game is simply shopping for profiles from a batch of blue sources and selling them to different orange clients (like the NSA or a supermarket chain). Yet combination of databases is the meat of the issue.

By presenting the algorithmic combination of data invisibly, the game implies that a corporate data aggregator is like a wholesaler that connects suppliers to retailers. But this is not the value data aggregation provides; that value is all about integration.
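
To illustrate what that integration looks like, here is a minimal sketch in which two individually mundane, hypothetical datasets are joined on a shared key to yield an inference neither source holds on its own. All records and field names are invented.

```python
# Two hypothetical, separately innocuous datasets held by different businesses.
pharmacy_purchases = {
    "cust_1042": ["prenatal vitamins", "unscented lotion"],
    "cust_2391": ["razors", "energy drinks"],
}
retail_loyalty = {
    # loyalty ID -> (name, email) collected at an unrelated store
    "cust_1042": ("J. Smith", "jsmith@example.com"),
    "cust_2391": ("A. Jones", "ajones@example.com"),
}

def combine(purchases, identities):
    """Join the datasets on the shared customer key. The merged record supports
    inferences (e.g., a likely pregnancy) that neither dataset exposes alone."""
    joined = {}
    for cust_id, items in purchases.items():
        if cust_id in identities:
            name, email = identities[cust_id]
            joined[email] = {"name": name, "signals": items}
    return joined

for email, record in combine(pharmacy_purchases, retail_loyalty).items():
    print(email, record)
```

That join, not the shuttling of ready-made profiles from suppliers to buyers, is where a data aggregator’s value, and the corresponding risk, is created.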

Finally, the game is strangely interested in the criminal underworld, promoting hackers as a route that a legitimate data mining corporation would routinely use. This is just bizarre. In my game, a real estate conglomerate wanted to buy personal data so I gathered it from a hacker who tapped into an Xbox Live-like platform. I also got some from a corrupt desk clerk at a tanning salon. This completely undermines the game as a corporate critique, or as educational.

In sum, it’s great to see these hard problems tackled at all, but we deserve a better treatment of them. To be fair, this is only the demo, and it may be that the missing narratives of personal data will be added. A promised addition is that you can create your own social media platform (Tracebook), although I did not see this in my demo game. I hope the missing pieces are added. (It seems less likely that the game’s current flawed narratives will be corrected.)

My major reaction to the game is that this situation highlights the hard problems that educational game developers face. They want to make games for change, but effective gameplay and effective education are such different goals that they often conflict. For the sake of a salable experience, the developers here clearly felt they had to stake their hopes on the former and abandon the latter, and reality along with it.

(This post was cross-posted at multicast.)

thoughts on Pew’s latest report: notable findings on race and privacy

Yesterday, the Pew Internet and American Life Project (in collaboration with Berkman) unveiled a brilliant report about “Teens, Social Media, and Privacy.” As a researcher who’s been in the trenches on these topics for a long time now, I wasn’t surprised by any of their findings, but it still gives me absolute delight when our data is so beautifully in sync. I want to quickly discuss two important issues that this report raises.

Race is a factor in explaining differences in teen social media use.

Pew provides important measures on shifts in social media, including the continued saturation of Facebook, the decline of MySpace, and the rise of other social media sites (e.g., Twitter, Instagram). When they drill down on race, they find notable differences in adoption. For example, they highlight data that is the source of “black Twitter” narratives: 39% of African-American teens use Twitter compared to 23% of white teens.

Most of the report is dedicated to the increase in teen sharing, but once again, we start to see some race differences. For example, 95% of white social media-using teens share their “real name” on at least one service while 77% of African-American teens do. And while 39% of African-American teens on social media say that they post fake information, only 21% of white teens say they do this.

Teens’ practices on social media also differ by race. For example, on Facebook, 48% of African-American teens befriend celebrities, athletes, or musicians while only 25% of white teen users do.

While media and policy discussions of teens tend to narrate them as a homogeneous group, there are serious and significant differences in practices and attitudes among teens. Race is not the only factor, but it is a factor. And Pew’s data on the differences across race highlight this.

Of course, race isn’t actually what’s driving what we see as race differences. The world in which teens live is segregated and shaped by race. Teens are more likely to interact with people of the same race and their norms, practices, and values are shaped by the people around them. So what we’re actually seeing is a manifestation of network effects. And the differences in the Pew report point to black youth’s increased interest in being a part of public life, their heightened distrust of those who hold power over them, and their notable appreciation for pop culture. These differences are by no means new, but what we’re seeing is that social media is reflecting back at us cultural differences shaped by race that are pervasive across America.

Teens are sharing a lot of content, but they’re also quite savvy.

Pew’s report shows an increase in teens’ willingness to share all sorts of demographic, contact, and location data. This is precisely the data that makes privacy advocates anxious. At the same time, their data show that teens are well aware of privacy settings and have changed the defaults, even if they don’t choose to manage the accessibility of each piece of content they share. They’re also deleting friends (74%), deleting previous posts (59%), blocking people (58%), deleting comments (53%), detagging themselves (45%), and providing fake info (26%).

My favorite finding of Pew’s is that 58% of teens cloak their messages either through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%). This is the practice that I’ve seen significantly rise since I first started doing work on teens’ engagement with social media. It’s the source of what Alice Marwick and I describe as “social steganography” in our paper on teen privacy practices.

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them – parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth, who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s showing that teens on Facebook have, on average, 300 friends while, on Twitter, they have 79 friends.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Anyhow, I have much more to say about Pew’s awesome report, but I wanted to provide a few thoughts and invite y’all to read it. If there is data that you’re curious about or would love me to analyze more explicitly, leave a comment or drop me a note. I’m happy to dive in more deeply on their findings.

Measuring Networked Social Privacy

Xinru Page, Karen Tang, Fred Stutzman and I are organizing a two-day workshop on measuring networked social privacy at the CSCW 2013 conference next spring. We are inviting researchers from diverse backgrounds to come and work with us on what it would look like to “measure” networked social privacy in rigorous, productive ways. Please pass our CfP on to your networks, or even better, submit a position paper and join the endeavor!

Call for Participation

Measuring Networked Social Privacy: Qualitative & Quantitative Approaches

Social media plays an increasingly important role in interpersonal relationships and, consequently, raises privacy questions for end-users. However, there is little guidance or consensus for researchers on how to measure privacy in social media contexts, such as in social network sites like Facebook or Twitter. To this point, privacy measurement has focused more on data protection for end-users and used privacy scales like CFIP, IUIPC, and the Westin Segmentation Index. While these scales have been used for cross-study comparisons, they primarily emphasize informational privacy concerns and are less effective at capturing interpersonal and interactional privacy concerns.

Thus, there is a clear need to develop appropriate metrics and techniques for measuring privacy concerns in social media. Accomplishing such a goal requires knowledge of the current methods for measuring social privacy, as well as various existing interpersonal privacy frameworks. In this workshop, we will cultivate a common understanding of privacy frameworks, provide an overview of recent empirical work on privacy in social media, and encourage the development of consensus among the community on how to approach measuring social privacy for these networked, interpersonal settings. Our 2-day workshop will provide participants the opportunity to work more deeply on these issues, including opportunities to create and pilot new privacy measures, methods, and frameworks that will comprise a toolbox of techniques that can be used to study privacy concerns in social media.

We invite researchers from various domains to join this multidisciplinary workshop and address a number of key challenges in achieving this research vision. Some of these challenges include:

  1. “Measuring” privacy: How should privacy be measured? Many studies run into the “privacy paradox” which points to how privacy concerns are not correlated with actual behavior. How should studies ensure that they are capturing untainted privacy concerns? How do we connect concerns with behavior?
  2. Contextualizing privacy: How context-specific should privacy metrics be? How can we anticipate the types of social privacy concerns that will be most salient for different audiences? What types of situational context need to be captured in order to effectively capture interpersonal privacy concerns in social media?
  3. Cross-study comparisons: How can general privacy measures be useful across different studies? What ways can we measure whether one privacy design is more effective than another in addressing social privacy concerns? How should context be considered when comparing privacy concerns across studies?
  4. Integrating qualitative with quantitative: What is the role of various qualitative and quantitative methods in developing metrics? How can these methods complement each other? In which situations should a particular method, tool, and/or study design be used?
  5. Integrating frameworks and metrics: How can we draw from existing privacy frameworks to contribute to our understanding of privacy in social media? What aspects of social privacy do these frameworks do a good job of capturing? What aspects of social privacy do these frameworks neglect to capture? How can we translate these privacy frameworks into a tool for capturing privacy concerns?

Interested parties should submit a position paper (2-4 pages in the Extended Abstracts format) by November 16, 2012, 11:59PM Pacific Standard Time.

We welcome a range of work including (but not limited to): (1) addressing one of the challenges described above, (2) experiences and/or case studies about measuring privacy and/or developing novel privacy frameworks, (3) lessons learned of what works and what doesn’t work when capturing social privacy concerns, (4) challenges to established assumptions about measuring privacy, and (5) ideas on novel directions in creating new privacy metrics and frameworks.

All submissions should be made in English. Our program committee will peer-review submissions and evaluate participants based on their potential to contribute to the workshop goals and discussions. At least one author of each accepted paper must register for the workshop.

Important dates

  • Submission deadline – November 16, 2012
  • Notification of acceptance – December 11, 2012
  • Workshop at CSCW 2013 – February 23-24, 2013

In all issues related to the workshop, please contact us by e-mail at networkedprivacy(at)gmail.com

Reflecting on Dharun Ravi’s conviction

On Friday, Dharun Ravi – the Rutgers student whose roommate Tyler Clementi killed himself – was found guilty of privacy invasion, tampering with evidence, and bias intimidation (a hate crime). When John Palfrey and I wrote about this case three weeks ago, I was really hopeful that the court proceedings would give clarity and relieve my uncertainty. Instead, I am left more conflicted and deeply saddened. I believe that the jury did their job, but I am not convinced that justice was served. More disturbingly, I think that the symbolic component of this case is deeply troubling.

In New Jersey, someone can be convicted of bias intimidation for committing an act…

  1. with the express purpose of intimidating an individual or group…
  2. knowing that the offense would cause an individual or group to feel intimidated…
  3. under circumstances in which the individual or group on the receiving end believes that they were targeted…

… because of their race, color, religion, gender, handicap, sexual orientation, or ethnicity.

In Ravi’s trial, the jury concluded that Ravi neither intended to intimidate Clementi nor believed that his acts would make Clementi feel intimidated because of his sexuality. Yet, the jury did conclude that, based on computer evidence, Clementi probably felt intimidated because of his sexuality.

As someone who wants to rid the world of homophobia, I find this conviction devastating. I recognize the symbolic move that this is supposed to make. This is supposed to signal that homophobia will not be tolerated. But Ravi wasn’t convicted of being homophobic; rather, he was convicted of creating the “circumstances” in which Clementi would probably feel intimidated. In other words, Ravi is being punished for living in a culture of homophobia even though there’s little evidence to suggest that he perpetuated it intentionally. As Mary Gray has argued, we are all to blame for the culture of homophobia that has resulted in this tragedy.

I can’t help but think of Clementi’s parents in light of this. By all accounts, their reaction to their son’s confession that he was gay did more to intimidate Clementi based on his sexuality than Ravi’s stupid act. Yet, I can’t even begin to imagine that the court would charge, let alone convict, Clementi’s distraught parents of a hate crime. ::shudder::

I can’t justify Ravi’s decision to invade his roommate’s privacy, especially not at a moment in which he would be extremely vulnerable. I also cannot justify Ravi’s decision to mess with evidence, even though I suspect he did so out of fear. But I also don’t think that either of these actions deserve 10 years of jail time or deportation (two of the options given to the judge). I don’t think that’s justice.

This case is being hailed for its symbolism, but what is the message that it conveys? It says that a brown kid who never intended to hurt anyone because of their sexuality will do jail time, while politicians and pundits who espouse hatred on TV and radio and in stump speeches continue to be celebrated. It says that a teen who invades the privacy of his peer will be condemned, even while companies and media moguls continue to profit off of far more invasive intrusions.

I’m also sick and tired of people saying that this will teach kids an important lesson. Simply put, it won’t. No teen that I know identifies their punking and pranking of their friends and classmates as bullying, let alone bias intimidation. Sending Ravi to jail will do nothing to end bullying. Yet, it lets people feel like it will and that makes me really sad. There’s a lot to be done in this realm and this does nothing to help those who are suffering every day.

The jury did its job. The law was followed. I have little doubt that Ravi did the things that he was convicted of doing. But I am not celebrating because I don’t think that this case made the world a better place. I think that it simply destroyed another life.