SMC media roundup

This is a collection of some of our researchers’ quotes, mentions, and writings in mainstream media. Topics include Facebook’s supposedly neutral community standards, sharing-economy workers uniting in protest, living under surveillance, and relational labor in music.

Tarleton Gillespie in the Washington Post –> The Big Myth Facebook needs everyone to believe

And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research in New England, that the company advances a specific moral framework — one that is less of the world than of the United States, and less of the United States than of Silicon Valley.

Mary Gray in The New York Times –> Uber drivers and others in the gig economy take a stand

“There’s a sense of workplace identity and group consciousness despite the insistence from many of these platforms that they are simply open ‘marketplaces’ or ‘malls’ for digital labor,” said Mary L. Gray, a researcher at Microsoft Research and professor in the Media School at Indiana University who studies gig economy workers.

Kate Crawford’s (and others’) collaboration with Laura Poitras (the Academy Award-winning documentary film director and privacy advocate) on a book about living under surveillance, covered in Boing Boing.

Poitras has a show at NYC’s Whitney Museum, Astro Noise, accompanied by a book in which she exposes, for the first time, her intimate notes on her life in the targeting reticle of the US government at its most petty and vengeful. The book includes accompanying work by Ai Weiwei, Edward Snowden, Dave Eggers, former Guantanamo Bay detainee Lakhdar Boumediene, Kate Crawford, and Cory Doctorow.

(More on the upcoming book and the Whitney Museum show in Wired.)

Canadian Songwriter’s Association interview with Nancy Baym –> Sound Advice: How to use social media in 2016

When discussing songwriters’ use of social media, Baym prefers to present a big-picture view rather than focusing on a “Top Ten Tips” approach or on any one platform or means of engagement. Practicality is key: “I’d love for 2016 to be the year of people getting realistic about what social media can and can’t do for you, of understanding that it’s a mode of relationship building, not a mode of broadcast,” says Baym.

eyes on the street or creepy surveillance?

This summer, with NSA scandal after NSA scandal, the public has (thankfully) started to wake up to issues of privacy, surveillance, and monitoring. We are living in a data world, and there are serious questions to ask and contend with. But part of what makes this data world messy is that it’s not as simple as saying that all monitoring is always bad. Over the last week, I’ve been asked by a bunch of folks to comment on the report that a California school district hired an online monitoring firm to watch its students. This is a great example of just how complicated these situations can be.

The media coverage focuses on the fact that the posts being monitored are public, suggesting that this excuses the practice because “no privacy is violated.” We should all know by now that this is a terrible justification. Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences, nor does it mean that the onlooker understands what they see. (Alice Marwick and I discuss youth privacy dynamics in detail in “Social Privacy in Networked Publics.”) But I want to caution against jumping to the opposite conclusion, because these cases aren’t as simple as they might seem.

Consider Tess’ story. In 2007, she and her friend killed her mother. The media reported it as “girl with MySpace kills mother,” so I decided to investigate the case. For the year and a half leading up to the murder, she had documented on a public MySpace her struggles with her mother’s alcoholism and abuse, her attempts to run away, and her efforts to seek help. When I reached out to her friends after she was arrested, I learned that they had reported their concerns to the school, but no one did anything. Later, I learned that the school didn’t investigate because MySpace was blocked on campus, so administrators couldn’t see what she had posted. And although the school had notified social services out of concern, social services didn’t have enough evidence to move forward. What became clear in this incident – and many others that I tracked – is that plenty of youth are crying out for help online on a daily basis. These are youth who could really benefit from the fact that their material is visible and someone is paying attention.

Many youth cry out for help through social media. Publicly, often very publicly. Sometimes for an intended audience. Sometimes as a call to the wind for anyone who might be paying attention. I’ve read far too many suicide notes and abuse stories to believe that privacy is the only viable frame here. One of the most heartbreaking came from a girl who was commercially sexually exploited by her middle-class father. She had gone to her school, which helped her go to the police; the police refused to help. She published every detail on Twitter about exactly what he had done to her and about all of the people who had failed to help her. The next day she died by suicide. In my research, I’ve run across too many troubled youth to count. I’ve spent many a long night trying to help teens I encounter connect with services that can help them.

So here’s the question that underlies any discussion of monitoring: how do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies we have to protect them rather than to punish them? We shouldn’t ignore youth who are using social media to voice their pain in the hope that someone who cares might stumble across their pleas.

Urban theorist Jane Jacobs argued that the safest communities are those with “eyes on the street.” What she meant was that healthy communities look out for one another, notice when others are hurting, and are generally present when things go haywire. How do we create eyes on the digital street? How do we do so in a way that’s not creepy? When is proactive monitoring valuable for making a difference in teens’ lives? And how do we make sure that these same tools aren’t abused for more malicious purposes?

What matters is who is doing the looking and for what purposes. When the looking is done by police, the frame is punitive. But when the looking is done by caring, concerned, compassionate people – even authority figures like social workers – the outcome can be quite different. However well-intended individual officers may be, law enforcement’s role is to uphold the law, and people perceive its presence as oppressive even when it’s trying to help. And, sadly, when law enforcement is involved, it’s all too likely that someone will find something wrong. And then we end up with the kind of surveillance that punishes.

If infrastructure is put into place for people to look out for youth who are in deep trouble, I’m all for it. But the intention behind the looking matters most. When you’re looking for kids who are in trouble in order to help them, you look for public cries for help. When you’re looking to punish, you misinterpret content, take what’s intended to be private and punish it publicly, and otherwise abuse youth in a new way.

Unfortunately, what worries me is that systems put into place to help often end up being used to punish. There is a slippery slope here, even when the designers and implementers never intended the system to be used that way. But once it’s there…

So here’s my question to you: how can we leverage technology to provide an additional safety net for youth who are struggling, without causing undue harm? We need to create a society where people are willing to check in on one another without abusing the power of visibility. We need more eyes on the street in the Jacobsian sense, not in the surveillance-state sense. Finding this balance won’t be easy, but I think it behooves us not to jump to extremes. So what’s the path forward?

(I discuss this issue in more detail in my upcoming book “It’s Complicated: The Social Lives of Networked Teens.”  You can pre-order the book now!)

Data Dealer is Disastrous

(or, Unfortunately, Algorithms Sound Boring.)

Finally, a video game where you get to act like a database!

This morning, the print edition of the New York Times profiled the Kickstarter-funded game “Data Dealer.” The game is a browser-based, single-player, farming-style clicker whose premise is that the player “turns data into cash” by acting as a behind-the-scenes data aggregator, probably modeled on a real company like Acxiom.

Currently there is only a demo, but the developers have big ambitions for the future, including a multiplayer version. Here’s a screenshot:

Data Dealer screenshot.

One reason Data Dealer is receiving a lot of attention is that there really isn’t anything else like it. It reminds me of the ACLU’s acclaimed “Ordering Pizza” video (now quite old), which vividly envisioned a dystopian future of totally integrated personal data through the lens of ordering a pizza. The ACLU video shows the user interface of a hypothetical software platform built for the person who answers the phone at an all-knowing pizza parlor to enter your order.

(In the video, a caller tries to order a “double meat special” and is told that there will be an additional charge because of his high-blood pressure and high cholesterol. He complains about the high price and is told, “But you just bought those tickets to Hawaii!”)

The ACLU video is great because it uses a silly hook to get across some very important societal issues about privacy. It makes a topic that seems very boring – data protection and the risks involved in interconnecting databases – vivid and accessible. As a teacher working on these issues, I still find the video useful today, although it looks like the pizza-ordering computer is running Windows 95.

Data Dealer has the same promise, but its makers have made some unusual choices. The ACLU’s goal was clearly public education about legal issues, and I think the group behind Data Dealer has a similar goal. On their Kickstarter profile they describe themselves as “data rights advocates.”

Yet some of the game-design choices seem indefensible: they may raise awareness of data issues, but they do so by promulgating misguided ideas about how data surveillance actually works. I found myself wondering: is it worth raising public awareness of these issues if they are presented in a way that is so distorted?

You play as a data aggregator, and your chief antagonist in the demo is public opinion. While public opinion would clearly be an antagonist for a company like Acxiom, data aggregation also carries real risks that involve quantifiable losses. Data protection laws don’t exist solely because people are squeamish.

Because the game focuses on public opinion, the message I’m left with isn’t that privacy is really important; it’s that “some people like it.” Those darn privacy advocates sure are fussy! (They periodically appear, angrily, in a pop-up window.) This seems like a much weaker argument than “data rights advocates” should be making. It even feels like the makers of Data Dealer are demeaning themselves! But maybe this was meant to be self-effacing.

I commend Data Dealer for grappling with one of the hardest problems that currently exist in the study of the social implications of computing: how to visualize things like algorithms and databases comprehensibly. In the game, your database is cleverly visualized as a vaguely vacuum-cleaner-like object. Your network is a kind of octopus-like shape. Great stuff!

However, some of the meatiest parts of the corporate data surveillance infrastructure go unmentioned, or are at least greatly underemphasized. How about… credit cards? Browser cookies? Other things are bizarrely over-emphasized relative to the actual data surveillance ecology: celebrity endorsements, online personality tests, and poster ad campaigns.

Algorithms are not covered at all (unless you count the “import” button that automatically “integrates” different profiles into your database). That’s a big loss, because the game’s model implies that things like political views are pre-existing attributes that can simply be harvested by (for instance) monitoring what books you buy at a bookstore. In this model, bookstores already hold your political views, and you have to buy them from there. That’s not AT ALL how data mining companies infer political views, and this gameplay model falsely suggests that my political views remain private as long as I avoid loyalty cards at bookstores.
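
To make the contrast concrete, here is a minimal sketch in Python of how an attribute like political leaning gets inferred from weak, indirect signals rather than read out of any single database. Everything here – the signal names, the weights, and the profile – is invented for illustration; a real aggregator would learn weights like these from a model trained over far more data.

```python
import math

# Hypothetical weak signals with made-up weights. In a real system these
# weights would come from a model trained on known outcomes, not hand-tuning.
SIGNAL_WEIGHTS = {
    "subscribes_hunting_magazine": 0.9,
    "donated_to_environmental_org": -1.2,
    "lives_in_rural_zip_code": 0.7,
    "streams_public_radio_podcasts": -0.8,
    "bought_pickup_truck": 0.6,
}

def infer_leaning_probability(profile: dict) -> float:
    """Combine whatever weak signals are present into a probability
    via a logistic function -- an inference, not a stored attribute."""
    score = sum(weight for signal, weight in SIGNAL_WEIGHTS.items()
                if profile.get(signal))
    return 1 / (1 + math.exp(-score))

# No bookstore ever recorded this person's politics; the estimate
# emerges only from combining unrelated purchases and behaviors.
profile = {"subscribes_hunting_magazine": True,
           "lives_in_rural_zip_code": True,
           "streams_public_radio_podcasts": True}
print(f"inferred P(conservative) = {infer_leaning_probability(profile):.2f}")
```

The point the game misses is that no single source holds the attribute; the estimate only comes into existence when signals are combined.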

A variety of the causal claims made in the game just don’t hold in real life. A health insurance company’s best source of private health information about you is not mining online dating profiles for your stated weight. By emphasizing these unlikely paths of private data disclosure, the game obscures the real process and seems to teach those concerned about privacy to take useless and irrelevant precautions.

The crucial missing link is any depiction of the combination of disparate data to produce new insights or situations. That is the topic the ACLU video tackles head-on. Although the game developers know this is important (integration is what your vacuum cleaner is supposed to be doing), the process doesn’t exist as part of the gameplay. Data aggregation in the game is simply shopping for profiles from a batch of blue sources and selling them to various orange clients (like the NSA or a supermarket chain). Yet the combination of databases is the meat of the issue.

By keeping the algorithmic combination of data invisible, the game implies that a corporate data aggregator is like a wholesaler connecting suppliers to retailers. But that is not the value data aggregation provides; the value is all about integration.
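
Here is an equally minimal sketch of that integration step, with invented datasets and an invented linking key (an email address): two sources that are individually mundane become revealing only once they are joined.

```python
# Hypothetical example: neither dataset is sensitive on its own, but
# joining them on a shared identifier produces a brand-new inference.
pharmacy_purchases = [
    {"email": "a@example.com", "item": "insulin syringes"},
    {"email": "b@example.com", "item": "sunscreen"},
]
insurance_applications = [
    {"email": "a@example.com", "declared_conditions": "none"},
    {"email": "b@example.com", "declared_conditions": "none"},
]

# The integration step: a join that no single supplier could perform alone.
applications_by_email = {row["email"]: row for row in insurance_applications}
for purchase in pharmacy_purchases:
    application = applications_by_email.get(purchase["email"])
    if application and purchase["item"] == "insulin syringes":
        print(f"{purchase['email']}: likely diabetic, contradicting "
              f"declared conditions: {application['declared_conditions']!r}")
```

The “insight” exists in neither database by itself; it appears only at the join – which is precisely the step Data Dealer’s gameplay leaves invisible.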

Finally, the game is strangely interested in the criminal underworld, presenting hackers as a route that a legitimate data mining corporation would routinely use. This is just bizarre. In my game, a real estate conglomerate wanted to buy personal data, so I gathered it from a hacker who had tapped into an Xbox Live-like platform. I also got some from a corrupt desk clerk at a tanning salon. This completely undermines the game as corporate critique, and as education.

In sum, it’s great to see these hard problems tackled at all, but we deserve a better treatment of them. To be fair, this is only the demo, and the missing narratives of personal data may yet be added. One promised addition is the ability to create your own social media platform (Tracebook), although I did not see this in my demo game. I hope the missing pieces are added. (It seems less likely that the game’s current flawed narratives will be corrected.)

My main reaction to the game is that it highlights the hard problem educational game developers face. They want to make games for change, but effective gameplay and effective education are such different goals that they often conflict. For the sake of a salable experience, the developers here clearly felt they had to stake their hopes on the former and abandon the latter – and, with it, reality.

(This post was cross-posted at multicast.)