Is Twitter us or them? #twitterfail and living somewhere between public commitment and private investment

This is about the fourth Olympics that’s been trumpeted as the first one to embrace social media and the Internet — just as, depending on how you figure it, it’s about the fourth U.S. election in a row that’s the first to go digital. It may be in the nature of new technologies that we appear perpetually, or at least for a very long time, to be just on the cusp of something. NBC has proudly trumpeted its online video streaming, its smartphone and tablet apps, and most importantly its partnership with microblogging platform Twitter. NBC regularly displays the #Olympics hashtag on its broadcasts, its coverage includes tweets and twitpics from athletes, and its website has made room for sport-specific Twitter streams.

It feels like an odd corporate pairing, at least from one angle. Twitter users have tweeted about past Olympics, for sure. But from a user’s perspective, it’s not clear what we need or get from a partnership with the broadcast network that’s providing exclusive coverage of the event. Isn’t Twitter supposed to be the place we talk about the things out there, the things we experience or watch or care about? But from another angle, it makes perfect sense. Twitter needs to reinforce the perception that it is the platform where chatter and commentary about what’s important to us should occur, and to convince a broader audience to try it; it gets to do so here as “official narrator” of the Games. NBC needs ways to connect its coverage to the realm of social media, but without allowing anything digital to pre-empt its broadcasts. From a corporate perspective, interdependence is a successful economic strategy; from the users’ perspective, we want more independence between the two.

This makes the recent dustup about Twitter’s suspension of the account of Guy Adams, correspondent for The Independent (so perfect!), so troubling to so many. Adams had spent the first days of the Olympics criticizing NBC’s coverage of the games, particularly for time-delaying events to suit the U.S. prime-time schedule, for trimming the opening ceremony, and for some of the more inane commentary from NBC’s hosts. When Adams suggested that people should complain to Gary Zenkel, executive VP at NBC Sports and director of their Olympics coverage, and included Zenkel’s NBC email address, Twitter suspended his account.

Just to play out the details of the case, from the coverage that has developed thus far, we can say a couple of things. Twitter told Adams that his account had been suspended for “posting an individual’s private information such as private email address, physical address, telephone number, or financial documents.” Twitter asserts that it only considers rule violations if there is a complaint filed about them, suggesting that NBC had complained; in response, NBC says that Twitter brought the tweet (or tweets?) to NBC’s attention, and NBC then submitted a complaint. Twitter has since reinstated Adams’ account, and reaffirmed the care and impartiality with which it enforces its rules.

Much of the conversation online, including on Twitter, has focused on two things: expressions of disappointment in Twitter for the perceived crime of shutting down a journalist’s account for criticizing a corporate partner, and a debate about whether Zenkel’s email should be considered public or private, and, as such, whether Twitter’s decision (whatever its motivation) was a legitimate or illegitimate interpretation of its own rules. This second question is an interesting one: Twitter’s rules do not clarify the difference between the “private email addresses” they prohibit and whatever the opposite is. Is Zenkel’s email address public because he’s a professional acting in a professional capacity? Because it has appeared before on the web? Because it can be easily figured out (by the common firstname.lastname structure of NBC’s email addresses)? Alexis Madrigal at The Atlantic has a typically well-informed take on the issue.

But I think this question of whether Twitter was appropriately acting on its own rules, and even the broader charge of whether its actions were motivated by its economic partnership with NBC, are both founded on a deeper question: what do we expect Twitter to be? This can be posed in naïve terms, as it often is in the heat of debate: are they an honorable supporter of free speech, or are they craven corporate shills? We may know these are exaggerated or untenable positions, both of them, but they’re still so appealing that they continue to frame our debates. For example, in a widely circulated critique of Twitter’s decision, Jeff Jarvis proclaims that

For this incident itself is trivial, the fight frivolous. What difference does it make to the world if we complain about NBC’s tape delays and commentators’ ignorance? But Twitter is more than that. It is a platform. It is a platform that has been used by revolutionaries to communicate and coordinate and conspire and change the world. It is a platform that is used by journalists to learn and spread the news. If it is a platform it should be used by anyone for any purpose, none prescribed or prohibited by Twitter. That is the definition of a platform.

Adams himself titled his column for The Independent about the incident, “I thought the internet age had ended this kind of censorship.”

I want Jarvis and Adams to be right, here. But the reality is not so inspiring. We know that Twitter is neither a militant guardian of free speech nor a glorified corporate billboard, that Twitter’s relationship to NBC and other commercial partners matters but is not determinative, that Twitter is attempting both to be a space for contentious speech and to have rules of conduct that balance many communities, values, and legal obligations. But exactly what we expect of Twitter in real contexts is imprecise, yet it matters for how we use it and how we grapple with a decision like the suspension of Adams’ account for the comments he made. And what these expectations are helps to reveal, may even constitute, our experience of digital culture as a space for public, critical, political speech.

What if we put these possible expectations on a spectrum, if only so we can step away from the extremes on either end:

  • Social media are private services; we sign up for them. Their rules can be arbitrary, capricious, and self-serving if they choose. They can partner with content providers, including privileging that content and protecting them from criticism. Users can take a walk if they don’t like it.
  • Social media are private services; we sign up for them. Their rules can be arbitrary and self-serving, but they should be fairly enforced. They can partner with content providers, including privileging that content and protecting them from criticism, but they should be transparent about that promotion.
  • Social media are private services used by the public; their rules are up to them, but should be justifiable and necessary; they should be fairly enforced, though taking into account the logistical challenges. They can partner with content providers, including privileging that content, but they should demarcate that content from what users produce.
  • Social media are private services used by the public; because of that public trust, their rules should balance honoring the public’s fair use of the network and protecting the service’s ability to function and profit; they should be fairly enforced, despite the logistical challenges. They can partner with content providers, including privileging that content; they should demarcate that content from what users produce.
  • Social media are private services and public platforms; because of that public trust, their rules should impartially honor the public’s fair use of the network; they should be fairly enforced, despite the logistical challenges. They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it’s critical of their partners or of themselves.
  • Social media are private but have become public platforms; the only rules they can set should be in the service of adhering to the law, and of protecting the public forum itself from the harm users can do to it (such as hate speech). They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it’s critical of their partners or of themselves.
  • Social media are public platforms, and as such must have a deep commitment to free speech. While they can curtail the most egregious content under legal obligations, they should otherwise err on the side of allowing and protecting all speech, even when it is unruly, disrespectful, politically contentious, or critical of the platform itself. Sponsors and other corporate partnerships are nearly anathema to this mission, and should be constrained to only the most cordoned-off forms of advertising.
  • Social media should facilitate all speech and block none, no matter how reprehensible, offensive, dangerous, or illegal. Any commercial partnership is a suspicious distortion of this commitment. Users can take a walk if they don’t like it.

While the possibilities at the extreme ends of this spectrum may sound theoretically defensible to some, they are easily cast aside by test cases. Even the most ardent defender of free speech would pause if a platform allowed or defended the circulation of child pornography. And even the most ardent free-market capitalist would recognize that a platform solely and capriciously in the service of its advertisers would undoubtedly fail as a public medium. What we’re left with, then, is the messier negotiations and compromises in the middle. Publicly, Twitter has leaned towards the public half of this spectrum: many celebrated when the company appealed court orders requiring them to reveal the identities of users involved in the Occupy protests, and Twitter has regularly celebrated itself for its role in protests and revolutions around the world. At the same time, they do have an array of rules that govern the use of their platform, rules that range from forbidding inappropriate content and limiting harassing or abusive behavior to prohibiting technical tricks that can garner more followers, establishing best practices for automated responders, and spelling out privacy violations. Despite their nominal (and in practice substantive) commitment to protecting speech, they are a private provider that retains the rights and responsibilities to curate their user content according to rules they choose. This is the reality of platforms that we are reluctant to, but in the end must, accept.

What may be most uncharacteristic in the Adams case, and most troubling to Twitter’s critics, is not that Twitter enforced a vague rule, or that it did so when Adams was criticizing their corporate partner in a way that, while scurrilous, was not illegal. It was that Twitter proactively identified Adams as a trouble spot for NBC — whether for his specific posting of Zenkel’s email or for the whole stream of criticism — and brought it to NBC’s attention. What Twitter did was to think like a corporate partner, not like a public platform. Of course it was within Twitter’s right to do so, and to suspend Adams’ account in response. And yes, there is some risk of lost goodwill and public trust. But the suspension is an indication that, while Twitter’s rhetoric leans towards the claim of a public forum, their mindset about who they are and what purpose they serve remains more enmeshed with their private status and their private investments than users might hope.

This is the tension lurking in Twitter’s apology about the incident, where they acknowledge that they had in fact alerted NBC about Adams’ post and encouraged them to complain, then acted on that complaint. “This behavior is not acceptable and undermines the trust our users have in us. We should not and cannot be in the business of proactively monitoring and flagging content, no matter who the user is — whether a business partner, celebrity or friend.” Twitter can do its best to reinstate that sense of quasi-journalistic commitment to the public. But the fact that the alert even happened suggests that this promise of public commitment, and the expectations we have of Twitter to hold to it, may not be a particularly accurate grasp of the way their public commitment is entangled with their private investment.

Cross-posted at Culture Digitally.

The Problem with Crowdsourcing Crime Reporting

There has been some excitement about the idea of using technology to address the problems of the Mexican Drug War. As someone involved in technology, I find it inspiring that other techies are trying to do something to end the conflict. However, I also worry when I read ideas based on flawed assumptions. For example, the assumption that “good guys” just need a safe way to report the “bad guys” to the cops reduces the Mexican reality to a kid’s story, where lines are easily and neatly drawn.

So, here are a few reasons why building tools to enable citizens to report crime in Mexico is problematic and even dangerous.

  1. Anonymity does not depend only on encryption. Criminals do not need to rely on advanced crypto-techniques when the information itself is enough to figure out who leaked it. Similar ideas are being discussed by researchers trying to figure out how to identify future Wikileaks-like collaborators, something they call Fog Computing. The point is, the social dynamics around the Drug War in Mexico mean that people are exposed when they post something local. In an era of big data, it’s easy to piece things together, even if the source is encrypted. And, sadly, when terror is your business, getting it wrong doesn’t matter as much.
  2. Criminal organizations, law enforcement, and even citizens are not independent entities. Organized crime has co-opted individuals, from the highest levels of government down to average citizens working with them on the side, often referred to as “halcones.”
  3. Apprehensions do not lead to convictions. According to some data, “78% of crimes go unreported in Mexico, and less than 1% actually result in convictions.” Mexico is among those countries with the highest indices of impunity, even with high-profile cases such as the murder of journalists.  All this is partly because of high levels of corruption.
  4. Criminal organizations have already discovered how to manipulate law enforcement against their opponents; there is even a term for it, “calentar la plaza”: a sudden increase of extreme violence in locations controlled by the opposing group, with the sole purpose of catching the attention of the military, which eventually takes over and weakens the enemy.

The failure of crowdsourcing became evident only a few weeks ago with a presidential election apparently plagued with irregularities. Citizens actively crowdsourced reports of electoral fraud and subsequently uploaded the evidence to YouTube, Twitter, and Facebook. Regardless of whether those incidents would have affected the final result of the election, the institutions in charge seem to have largely ignored the reports. One can only imagine what would happen with reports of highly profitable crimes like drug trafficking.

Crowdsourcing is not entirely flawed in the Mexican context, though. We have seen people in various Mexican cities organize organically to alert one another of violent events, in real time. But these urban crisis-management networks do not need institutions to function; law enforcement does, unless one is willing to accept lynching and other types of crowd-based law enforcement.

In sum, as Damien Cave mentioned, what Mexico needs is institutions, and the people willing to change the culture of impunity. Technologies that support this kind of change would be more effective than those imagined with a “first world” mindset.

Thanks to danah boyd for helping me think through some of these ideas.

Socl Data Available… for Science!

The incredible growth and presence of social technologies in all aspects of life translates into large data sets that help researchers understand human behavior, social system design, and the development of digital culture. However, as John Markoff points out in a recent NYT article, most of these data are “forbidden to researchers.”

Among the reasons for this lockdown are cost, privacy, and industrial secrecy. Indeed, it is difficult to put together and maintain these data sets from social computing services in a way that complies with those services’ privacy policies, protects competitiveness, and does not drain strained resources.

Despite these challenges, there have been several efforts by different organizations to share data with researchers. For example, Reddit, StackExchange, Yelp, and Wikipedia have put in the time and effort to release data sets for the research community.

During the Microsoft Research Faculty Summit last week, FUSE Labs announced to the participants of the Social Media Workshop that it will be releasing log or instrumentation data from Socl, a website that lets people share their interests using search. Despite having been unveiled only a few months ago, Socl already has several hundred thousand users who have contributed a large number of aesthetically pleasing posts. We hope that access to this data will help researchers investigate the birth of an online community, and that it can also help the research community engage in a conversation about open data from social media systems.

If you have ideas on how to use Socl data for your research, please get in touch at fuse-rs@microsoft.com.

What we talk about when we talk about (online) worry

I recently read a post that Richard Harper wrote earlier this year that works through discomfort with Facebook in terms of time. Harper argues that for users in his research with Eryn Whitworth, there’s “something about the experience of Facebook [that] affects their sense of the past, the future, of how the temporal arrangements of their doings normally are.” This reminded me of a 2007 post by William Merrin that applies Caillois’ work on mimesis as an inability to distinguish oneself from one’s surroundings to social media interactions. Mimesis for Caillois enabled thinking about evolution, reshaping bodies and behaviors in terms of disambiguation (or not) from one’s species-based peer group. Merrin adapted this to behavior and interactions online, writing, “In social networking this mimetic process takes several forms, from the voluntary incorporation of the self into the environment, to the forced conformity to the profile templates and the choice of applications that, more often than not, follows and mimics those that one’s ‘friends’ have added and recommended. What this produces is a resemblant self: a self that resembles not its originator but instead all the other virtual selves. What one constructs has a far closer morphological relationship with all other profiles than it does with the being outside who constructs it.”

The posts are linked in their attempts to locate the exhaustion produced by exposure to social media (Merrin: “The exhaustion one feels after a period of time online is not physical strain but something more: an exhaustion with one’s interests and with one’s interest in life itself. If you look at profile after profile, list after list and application after application, your own self begins to renounce its spirit.” Harper: “This present is feeble, without rich temporal colour: no subtle looking back at the present, looking at the past from the future, looking at the present from the past. And because of this, Facebook somehow tyrannizes its users. Facebook freaks people out: ‘it’s too like now’.”). Drawing on theories of affect and philosophies of time and space is (I think) a much more interesting way of talking about information overload than, say, technical management of resources. But I’m also interested in a shift of pathology that can be mapped onto these descriptions of online communication.

One way of positioning these two posts is in terms of space and time (I’m sort of obsessed with this division right now, for the simple reason that it’s how I’m conceptually making sense of the dissertation I’m attempting to write this summer). Merrin’s post is about the online spaces that get produced through use of social media, resulting in a mimesis not only in terms of profiles and pages, but also, he argues, in a distance between online and offline selves. Merrin talks about metaphors of psychasthenia and schizophrenia, noting the utility of these neuroses for theorists from Caillois to Baudrillard. Harper’s post discusses shifting paradigms for thinking of time, in terms of linear intentionality versus a layered fluidity, for making sense of human action. Although Harper is less explicit in terms of pathology, the emphasis on time, nowness, and attention reminded me of a Jonathan Lethem quote from The Ecstasy of Influence: “I’m not terribly interested in whether real, brain-chemically-defined Asperger’s is over- or underdiagnosed, or whether it exists at all except as a metaphor. I’m interested in how vital the description feels lately. Is there any chance the Aspergerian retreat from affective risk, in favor of the role of alienated scientist-observer, might be an increasingly ‘popular’ coping stance in a world where corporations, machines, and products flourish within their own ungovernable systems?”

I think there’s a utility in tracking social explanations of behavior in terms of pathology, in that it associates discomfort with a given technology with a facet of human behavior. It’s interesting to me that these posts track discomfort with social media by oscillating between space and time, precisely because it coincides with Caillois’ leveraging of schizophrenia and disambiguation, and with Lethem’s pointing to disorders of attention. What can we read into these shifting metaphors for describing how technologies affect people biologically, psychologically? It’s to be expected that as technologies evolve, so do the pathologies that we map onto those technologies as manifestations of our concerns about contingent psycho-social consequences of their use. Sometimes, worries about how technological change is affecting human interaction are posited in terms of turning us into machines, abstracting things like human communication or compassion. What I like about tracing metaphors of pathology is that it retains an insistence on thinking of people as people (Caillois’ implicit comparisons between people and cannibalistic slugs aside), but shifts the construction of people-ness. (As an aside, I’m reminded of a recent New York Times article on study drugs as pinpointing worries of technology, pressure, adulthood, bodies, and work. It’s not that we’re worried about kids working too hard, per se, it’s that we’re worried that they’re coping by turning to pharmaceutical drugs.) If there is a shift in thinking about online life from space to time, what does this say about the functions and roles and utilities of these technologies in terms of what it means to be human? To our social worlds, online and off? To encountering and using (and misusing) technologies in everyday life? To how we think about possibilities for designing and using emergent technologies?