The case for a social study of technical departure

Hello there. My name is Jed Brubaker and I am one of the very fortunate interns working with the Social Media Collective this year — a perk of which is a return to blogging here on the SMC blog. Typically you will find me in California, where I am a PhD candidate in the School of Information and Computer Sciences at UC Irvine studying digital identity infrastructure, technological subjectivity, and sociotechnical representation. Those are kind of heady terms. In truth, I study dead people. I research how the accounts we maintain while we are alive, and the social media systems that host them, continue to live and be used (by both people and machines) after we are gone.

Ever since I got to Microsoft Research I have been thinking about a different kind of leaving. Not MSR, of course — I love it here. During my internship I am conducting a study of why users choose to “leave” a technology — in this case the geosocial mobile app Grindr. Grindr is one of a newer breed of geolocative apps intended to facilitate the discovery of and interaction with other users in one’s vicinity. Grindr does this by showing you the profiles of the 100 closest users who are online at any given moment. But while most of these apps loosely operate in the online dating space, Grindr targets gay male users, complicating notions about the app’s purpose, and thus, what it means to leave. People variously insist that it is an app for “hooking up”, “finding a date”, or (quite literally) “gaydar”, while Grindr’s website ambiguously explains that it is meant to “help you meet guys while you’re on the go.”
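
To make that mechanic concrete, here is a minimal sketch of the kind of proximity query such an app has to run. Only the 100-profile, online-only cutoff comes from Grindr’s observable behavior; the data layout and function names are my own hypothetical illustration, not Grindr’s actual implementation.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))  # 6371 km: Earth's mean radius

def nearby_profiles(me, users, limit=100):
    """Return the `limit` closest users who are online right now."""
    online = [u for u in users if u["online"]]
    online.sort(key=lambda u: distance_km(me["lat"], me["lon"], u["lat"], u["lon"]))
    return online[:limit]
```

The sketch makes the mechanic visible: “nearby” is just the top of a distance-sorted list, which is part of why deleting the app removes you from the grid far more easily than it removes the grid from around you.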

But what does it even mean to “leave”? This is exactly what Kate Crawford, Mike Ananny (fabo mentors extraordinaire!), and I are trying to better understand. After all, while you can easily remove Grindr’s app from your phone, you can’t as easily remove yourself from the physical spaces you inhabit each day. Those guys you’d supposedly meet “while you’re on the go” are still all around you. This suggests that leaving may always be partial and incomplete, and that the social reasons people leave are just as important as the technical ways in which they go about doing it.

Listening to people’s stories of leaving feels familiar from my work on death and social media. Leaving is a tricky concept for a computer to understand, something death illustrates well. For example, how does one close their account if they are no longer alive and able to do so? Like many email providers, Hotmail (perhaps unknowingly) efficiently solves this problem by closing your account after 270 days of inactivity. I’ve been told that the limit used to be 90 days, but that the outcry from people traveling abroad for more than three months (among others, I’m sure) indicated that perhaps 90 days was too efficient. Away for three months on my internship, I am having similar problems with some UC Irvine systems that presume I am still in Southern California.

Even if users diligently uninstalled software and closed their unused accounts, we would still be left with systems that presume users make an active choice to leave, or that our clunky methods of inferring departure (e.g., inactivity) are sufficient. Relevant to death, these systems also presume that users are able to make such a choice. Non-use, backpacking in Europe, an MSR internship, and death — to a computer, they can look very much the same.
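
Here is a minimal sketch of how blunt that inference is, assuming only the 270-day threshold reported above; the function name and the example dates are hypothetical.

```python
from datetime import date, timedelta

INACTIVITY_LIMIT = timedelta(days=270)  # Hotmail's reported threshold

def presumed_departed(last_login, today):
    """Infer departure from inactivity -- the only signal the system sees."""
    return today - last_login > INACTIVITY_LIMIT

# Non-use, travel, an internship, and death all produce the same input:
today = date(2012, 7, 1)
for reason, last_login in [
    ("lost interest", date(2011, 9, 1)),
    ("backpacking in Europe", date(2011, 9, 1)),
    ("MSR internship", date(2011, 9, 1)),
    ("deceased", date(2011, 9, 1)),
]:
    print(reason, "->", presumed_departed(last_login, today))
```

All four cases print True; from where the system sits, they are indistinguishable.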

Grindr’s particular intersection of the social, spatial, and technological nicely demonstrates the importance of taking a sociotechnical approach to studying departure. And of course, stories of leaving also help us think about arriving at, returning to, and inhabiting technological systems. What does it mean, after all, to be a “Grindr user”? Avoiding any spoilers, I can tell you that so far it has been a wild and fascinating ride.

(Image from Grindr’s press kit.)

The Ethics of Attention (Part 2): Bots for Civic Engagement

Cross-posted from the Department of Alchemy blog.

Last month, Ethan Zuckerman of the Center for Future Civic Media (MIT Media Lab) posted a great article on his blog called The Tweetbomb and the Ethics of Attention (constituting Part I to this story, so make sure you read it!), in which he calls into question the practice of flooding particular users with Twitter messages to support a particular cause. The emergence of tweetbombing as a practice on Twitter is very intriguing, particularly around the assumed norms of participation.

Ethan had written previously about “the fierce battle for attention” when covering journalistic integrity and Foxconn. The tweetbomb story, meanwhile, focuses on emergent practices for gaming attention in social media platforms (modern infrastructures for communication), practices that sit apart from the usual norms around attention in news-sharing ecosystems.

These practices relate to what Ethan calls “attention philanthropy”: if you can’t move content yourself, see if you can motivate an attention-privileged individual to do it for you.

The problem is that attention is an issue of scale: how do you get the attention of everyone? Social capital becomes a kind of currency; we exchange the value embedded in networks in an attention economy. A number of assumptions underlie traditional mass media technologies like radio and television: broadcast, primetime, the mass audience. With the internet (as with cable and satellite radio), attention is splintered across a multitude of channels, streams, and feeds.

The difference between social media platforms and mainstream media outlets is that, on social media, many of the individuals who can bring attention to content aren’t protected by a media institution (for instance, Ethan discusses well-known BoingBoing blogger Xeni Jardin, who manages her own personal Twitter account). In the attention economy facilitated by social media, then, we are potentially dealing with vulnerable actors.

The Low Orbit Ion Cannon turns human consent into a social “botnet” for distributed denial-of-service attacks. What if you could use a similar automated program for political gain?

But what if you don’t have powerful people or institutions to help you garner attention? Or what if you can’t convince others to help you?

Become the Geppetto of the attention economy, and make some bots.

Tim Hwang’s Pacific Social project has shown that Twitter bots can influence Twitter audiences to an astounding degree: its bots not only interacted successfully with human users, but also helped connect disparate groups of Twitter users.

This leads me to ask: Can you create bots for civic engagement?

How could a bot work in favor of civic engagement? Well, civic engagement has traditionally been measured according to two factors: 1) voting, and 2) social movements. But it’s increasingly evident, especially in today’s social-media-laden world, that information circulation also helps inform citizens about critical issues and educate them about how to make change within a democracy. We see platforms like Reddit spreading information to audiences of millions (helping to generate success for campaigns like the one against SOPA). While many complain about “slacktivism,” it’s undeniable that mass attention can generate results.

Bots have a useful power to move information across social networks by connecting individuals to others who care about similar topics. What if you could use an automated process to optimize online communities into stronger partisan networks by connecting those with similar affiliations who do not yet know each other? Or, perhaps, use bots to educate vast networks about particular issues? KONY 2012, for instance, utilized student social networks on Twitter as seed groups to help mobilize the spread of information about the campaign.
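
As a rough sketch of that connecting move, under stated assumptions: the follow graph, the topic labels, and every name below are hypothetical illustrations, not anyone’s actual bot, and a real implementation would also have to work through a platform’s API and rate limits.

```python
from itertools import combinations

# Hypothetical data: who follows whom, and each user's topical affiliations.
follows = {"ana": {"ben"}, "ben": {"ana"}, "cal": set(), "dee": {"cal"}}
topics = {
    "ana": {"open government", "net neutrality"},
    "ben": {"net neutrality"},
    "cal": {"open government", "education"},
    "dee": {"education"},
}

def suggested_introductions(follows, topics, min_shared=1):
    """Pairs of users with overlapping affiliations who don't yet follow each other."""
    pairs = []
    for a, b in combinations(topics, 2):
        shared = topics[a] & topics[b]
        connected = b in follows.get(a, set()) or a in follows.get(b, set())
        if len(shared) >= min_shared and not connected:
            pairs.append((a, b, shared))
    return pairs

for a, b, shared in suggested_introductions(follows, topics):
    # A real bot would now @-mention both users; here we just print the match.
    print(f"introduce {a} and {b} (shared: {', '.join(shared)})")
```

The design choice that matters is `min_shared`: raise it and the bot builds tighter partisan clusters; lower it and it bridges looser ones, which is exactly the optimize-versus-educate tension in the paragraph above.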

But there’s also potential for the manipulation of information. Manipulating the masses yourself is possible but complex; having an army of coordinated bots do your bidding is much easier, especially when a side effect of bot participation is that human users perceive important information to be spreading.

This morning, Andrés Monroy-Hernández of Microsoft Research linked me to a timely project by Iván Santiesteban called AdiosBots.

AdiosBots tracks automated Twitter bots set up on behalf of Mexico’s Institutional Revolutionary Party (PRI). According to Iván’s English project page, one of the contenders from this party for the upcoming elections on July 1st has been using fake, bot-controlled Twitter accounts to “publish thousands of messages praising Enrique Peña Nieto and ridiculing his opponents. They also participate in popular hashtags and try to drown them out by publishing large amounts of spam.”

In other words, they are “used to affect interaction between actual users and to deceive.” And in total, Iván has found close to 5,000 of these bots.

In this instance, there is no need for attention philanthropy: the bots act as an automated social movement, mimicking positive political affiliation while denouncing the opposition’s supporters. But it’s clear that vulnerability plays a huge role in attacks on political individuals and in the spread of false information. There’s also the ethical question of what users do not know: is it a problem that individuals assume bots are human and merely helpful, rather than programmed to exploit and optimize human behavior?

Bots for civic engagement also call into question the ethics around social media norms. Should people assume that interaction with automatons will occur? Or is this a question of media literacy, where users should be educated enough about the ecosystems they use to point out misinformation, or even to find discrepancies between “organic” information and automated information (even when the automation has beneficial motives)? And what if the bots are so convincing that users can’t?

Bots for civic engagement was an idea that almost led me to apply for an annual Knight Foundation grant. If you’re interested in building this idea into a tangible project, please email me.

Alex Leavitt is a PhD student at the Annenberg School for Communication at the University of Southern California. Read more about his research at http://alexleavitt.com or find him on Twitter at http://twitter.com/alexleavitt.

Teens Text More than Adults, But They’re Still Just Teens

danah and I have a new piece in the Daily Beast. Summary: the more things change, the more they stay the same.

In the last decade, we’ve studied how technology affects how teens socialize, how they present themselves, and how they think about issues like gender and privacy. While it’s true that teens incorporate social media into many facets of their lives, and that they face new pressures their parents didn’t—from cyber-bullying to fearmongering over “online predators”—the core elements of high-school life are fundamentally the same today as they were two decades ago: friends, relationships, grades, family, and the future.

Read the full piece here.

A lot of the research that we do involving teenagers seems obvious to teenagers themselves. “Duh.” “Why would anyone study that?” “Who cares?”

Unfortunately, teenagers aren’t the ones writing news stories about how Facebook is making us lonely, Facebook is full of creepers, or teens are pressured to reveal intimate details on Facebook (note: those last two studies were sponsored by a company that creates parental blocking and monitoring software). They aren’t the ones passing anti-bullying legislation, appearing on television to tell parents that teens study less and are more narcissistic than a generation ago, or implementing three-strikes policies in public schools.

Our public-facing work aims to explain teenage practice in clear language that isn’t sensationalistic or fear-mongering. Obviously, not all scholarship lends itself to this type of writing. But given that social media is often discussed in utopian or dystopian terms in the press, research can provide a rational, sensible perspective that’s badly needed. Like, duh.

Institutions, Infrastructure and Information

I’m not exactly sure when in the last few months I first noticed Google’s subway advertising campaign, but whenever it was, I was immediately confused. The ads relate to Google’s Good to Know initiative and focus on privacy, security and netiquette. More than anything, the ads reminded me of seemingly well-intentioned and yet always-already obvious PSAs. But even assuming that Google is manifesting its Don’t Be Evil mantra through giving people a heads up about things like password security and the (alleged? ostensible?) reasons for locative functionality, the ads were weird. Were they proactively attempting to curry favor with at least semi-net savvy folk who use Google but have concerns about privacy? Or reactively working to deflect or dispel some form of anti-Google criticism?

Google is far from the first company to launch a subway ad campaign that’s perplexed me (how weird are these Tidy Cat ads??) and it wasn’t until I happened to see the Google ads right next to a series of promotions from the MTA (simultaneously documenting and advertising progress on various station, line and service improvements) that I started to think about the ads in terms of institutions and infrastructure. To back up for a second, I am not an urban studies scholar, but my work on immigration and information practices in city space often leads in that direction. In particular, I’ve been puzzling through issues of information, navigation and infrastructure. A few weeks ago, my advisor and I were talking about interviews from my dissertation, and I mentioned the MTA as an NYC institution that migrational individuals learn to navigate. Mor objected to my labeling it an institution, at which point I suggested that it was, instead, infrastructure.

The line between institutions and infrastructure is not always all that solid. According to Mary Douglas, “Institutional structures [are] forms of informational complexity” (p. 48), where past experience with institutions encodes expectations, thus shaping rules of behaviour and controlling (or managing) moments of uncertainty. In contrast, Bowker and Star argued that infrastructure refers to “hybrid creations of work practice and information medium” (p. 132). In reviewing these texts to disambiguate institution from infrastructure, I was struck by the use of the term information for both. Douglas argued that “nothing else but institutions can define sameness, similarity is an institution” (p. 55), meaning that institutions are tied up in expectation, sameness and predictability, manifest (partly but significantly) through information about what an institution does and how to interact with it. For Bowker and Star, infrastructures embed modes of practice, of informational acumen, where knowing what to do with any given piece of information speaks to the organizational infrastructure at play.

By using a form of communication that echoed a PSA and a progress report, Google and the MTA obscure the extent to which they are institutions and position themselves as infrastructure. We’re used to thinking about the politics of institutions, much less so (I think) about the politics of infrastructure (Langdon Winner and Bruno Latour notwithstanding). Some of the SMC folks have been working on the ethics of search engine algorithms, and it’s partly because I’ve seen some of those conversations unfold that I can’t help thinking about them here. MTA subway ads broadcast themes of progress and improvement, but for whom? Other than extending a general benevolence to other New Yorkers (and during a busy rush hour on the A train, I wouldn’t bank on that kind of goodwill), why do I feel a sense of satisfaction at the MTA’s announcement of renovating stations I don’t use and services I don’t need? These ads act as a justification for the burden (of time, money and patience) placed on New Yorkers who take trains, subways and ferries as part of their daily urban lives. When Google presents its security precautions and its SNS as socially responsible affordances, it elides the other kinds of ethics at stake in daily uses of Google products. In both sets of ads, the MTA and Google present information about themselves as infrastructure, rather than as institutions.

I think part of my confusion about the Google ads, and part of the sneakiness of the MTA campaign, is tied up in the extent to which they encourage us to think of the MTA as co-constructed with daily acts of moving through the city, and to conflate Google with daily acts of being online. What’s happening is an institutionalization of infrastructure, an obfuscation of the ways in which daily practices of information and technology are bound up in the ethical, in issues of access and in privilege. I don’t think I’m completely satisfied with how to divide institutions from infrastructure, but I do think I’ve worked out an awareness of how the performativity of information can conflate the two. I’m used to thinking of daily practices of technology as indicative of privilege, but I tend, I think, to be less aware of institutions coopting infrastructure’s perceived impartiality. None of this is, perhaps, the goal of the Good to Know campaign or the MTA progress ads, but it is a useful awareness for a researcher of information practices, social context and urban space.

P.S. Shout out to Aaron Trammell for helping me work through some of these ideas over a Cinco de Mayo beer!

Microsoft Research opens New York City lab

I am giddy with pleasure to share Jennifer Chayes’ announcement that Microsoft Research is opening a new lab in New York City that will be filled with computational social science types. The New England lab that I call home combines qualitative social science, empirical economics, machine learning, and mathematics. We’ve long noted the need for data science types who can bridge between us. And now, to my utter delight, a new lab is emerging to complement ours. The folks who are going to serve as the founding members of the new NYC lab are computer scientists, physicists, experimental economists, and data scientists. Many of them are interested in social network analysis and big data problems, but – crucially – they all see the value in collaborating with ethnographers. In other words, we’re building a cross-lab team that’ll open up new possibilities for interdisciplinary collaboration, which makes my heart go pitter patter.

The new team will include Duncan Watts, David Pennock, John Langford, Jake Hofman, Dan Goldstein, Sid Suri, David Rothschild, and Sharad Goel. For the social scientists out there who were oohing and aahing when we announced that MSR hired Nancy Baym, Kate Crawford, and Mary Gray, just imagine the amazing connections that can occur when you mix these computational social scientists with the great group of researchers we have at the Social Media Collective. ::giggle::bounce:: <evil grin>

Here’s to new relationships connected through Amtrak!