The Ethics of Attention (Part 2): Bots for Civic Engagement

Cross-posted from the Department of Alchemy blog.

Last month, Ethan Zuckerman of the Center for Future Civic Media (MIT Media Lab) posted a great article on his blog called “The Tweetbomb and the Ethics of Attention” (it constitutes Part 1 of this story, so make sure you read it!), in which he calls into question the practice of flooding particular users with Twitter messages to support a particular cause. The emergence of tweetbombing as a practice on Twitter is very intriguing, particularly around the assumed norms of participation.

Ethan had written previously about “the fierce battle for attention” when covering journalistic integrity and Foxconn; the tweetbomb story, meanwhile, focuses on emergent practices for gaming attention on social media platforms (the modern infrastructure for communication), outside the usual norms around attention in news-sharing ecosystems.

These practices relate to what Ethan calls “attention philanthropy”: if you can’t move content yourself, see if you can motivate an attention-privileged individual to do it for you.

The problem is that attention is an issue of scale: how do you get the attention of everyone? Social capital becomes a literal currency; we exchange the value embedded in networks in an attention economy. A number of assumptions underlie traditional mass media technologies like radio and television: broadcast, primetime, the mass audience. With the internet (as with cable and satellite radio), attention is splintered across a multitude of channels, streams, and feeds.

The difference between social media platforms and mainstream media outlets is that, for the most part, many of the individuals who can bring attention to content are not protected by a media institution (for instance, Ethan discusses the well-known BoingBoing blogger Xeni Jardin, who manages her own personal Twitter account). In the attention economy facilitated by social media, then, we potentially deal with vulnerable actors.

[Image: The Low Orbit Ion Cannon, which turns human consent into a social “botnet” for distributed denial-of-service attacks. What if you could use a similar automated program for political gain?]

But what if you don’t have powerful people or institutions to help you garner attention? Or what if you can’t convince others to help you?

Become the Geppetto of the attention economy, and make some bots.

Tim Hwang’s Pacific Social project has shown that Twitter bots can influence Twitter audiences to an astounding degree. The project’s results show that bots can successfully interact with human users, and that they can also help connect disparate groups of Twitter users.
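
To make that concrete, here is a minimal sketch of what such a bot might look like, written in Python against the tweepy library. The credentials, the hashtag, and the reply text are placeholders of my own, not details from the Pacific Social project:

```python
# A toy bot that engages human users around a hashtag: it follows them
# and replies to their tweets. All keys and names here are placeholders.
import time
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def engage(hashtag, max_users=5):
    """Follow and reply to a handful of users tweeting a given hashtag."""
    for tweet in api.search(q=hashtag, count=max_users):
        name = tweet.user.screen_name
        api.create_friendship(screen_name=name)
        api.update_status(
            status="@%s Interesting point about %s!" % (name, hashtag),
            in_reply_to_status_id=tweet.id,
        )
        time.sleep(60)  # pace the calls to stay within rate limits

engage("#civicmedia")
```

Even a script this simple, run patiently over weeks, suggests how a bot could come to sit between communities that would not otherwise meet.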

This leads me to ask: Can you create bots for civic engagement?

How could a bot work in favor of civic engagement? Well, civic engagement has traditionally been measured according to two factors: 1) voting, and 2) social movements. But it’s increasingly evident, especially in today’s social-media-laden world, that information circulation also helps inform citizens about critical issues and educates them about how to make change within a democracy. We see platforms like Reddit able to spread information to audiences of millions (helping to generate success for campaigns like the anti-SOPA protests). While many complain about “slacktivism,” it’s undeniable that mass attention can generate results.

Bots have a useful power to move information across social networks by connecting human individuals to others who care about similar topics. What if you could use an automated process to optimize online communities into stronger partisan networks by connecting those with similar affiliations who do not yet know each other? Or, perhaps, use bots to educate vast networks about particular issues? KONY 2012, for instance, utilized student social networks on Twitter as seed groups to help mobilize the spread of information about the campaign.
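
As a sketch of that first idea, here is how the “introduce similar strangers” step might work in Python. The data structures and the similarity threshold are assumptions of mine, not anyone’s actual implementation:

```python
# A toy version of "introduce similar strangers": given each user's
# hashtags and follow list (both hypothetical inputs), find pairs with
# overlapping interests who don't already follow each other.
from itertools import combinations

def jaccard(a, b):
    """Overlap between two sets of hashtags, from 0 to 1."""
    return len(a & b) / float(len(a | b)) if (a | b) else 0.0

def suggest_introductions(hashtags, follows, threshold=0.5):
    """Yield pairs of users a bot might introduce to each other."""
    for u, v in combinations(hashtags, 2):
        connected = v in follows.get(u, set()) or u in follows.get(v, set())
        if not connected and jaccard(hashtags[u], hashtags[v]) >= threshold:
            yield u, v

# Example with made-up users:
hashtags = {
    "alice": {"#voterreg", "#opendata"},
    "bob":   {"#voterreg", "#opendata", "#civictech"},
    "carol": {"#gardening"},
}
follows = {"alice": {"carol"}}
print(list(suggest_introductions(hashtags, follows)))  # [('alice', 'bob')]
```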

But there’s also potential for the manipulation of information. While manipulating the masses directly is possible but complex, commanding an army of coordinated bots to do your bidding is much easier, especially when a peripheral effect of bot participation is that human users perceive important information to be spreading.

This morning, Andrés Monroy-Hernández of Microsoft Research linked me to a timely project by Iván Santiesteban called AdiosBots.

AdiosBots tracks automated Twitter bots set up by the Institutional Revolutionary Party (PRI) in Mexico. According to Iván’s English project page, one of the party’s contenders in the upcoming July 1st elections has been utilizing fake, bot-manipulated Twitter accounts that “publish thousands of messages praising Enrique Peña Nieto and ridiculing his opponents. They also participate in popular hashtags and try to drown them out by publishing large amounts of spam.”

In other words, they are “used to affect interaction between actual users and to deceive.” In total, Iván has found close to 5,000 of these bots.
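
For a sense of how such accounts could be spotted, here is a rough sketch of the kinds of heuristics a tracker might apply. The thresholds and the account structure below are my own assumptions, not details from AdiosBots:

```python
# Crude bot heuristics: a brand-new account, an inhuman tweeting rate,
# or a high share of duplicated messages all raise a flag.
def looks_automated(account, min_age_days=30, max_per_day=100, max_dupe_ratio=0.5):
    tweets = account["tweets"]
    per_day = len(tweets) / float(max(account["age_days"], 1))
    dupe_ratio = 1 - len(set(tweets)) / float(len(tweets)) if tweets else 0.0
    return (account["age_days"] < min_age_days
            or per_day > max_per_day
            or dupe_ratio > max_dupe_ratio)

# A made-up example in the spirit of the campaign Iván describes:
suspect = {"age_days": 5, "tweets": ["Vota por Peña Nieto!"] * 40}
print(looks_automated(suspect))  # True: new account, almost all duplicates
```

Real detection would also weigh network signals, like who follows whom and coordinated posting times, but even simple heuristics like these catch the bluntest spam accounts.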

In this instance, there is no need for attention philanthropy: the bots act as an automated social movement, mimicking positive political affiliation while denouncing the opposition’s supporters. But it’s clear that vulnerability plays a huge role in attacks on political individuals and in the spread of false information. There’s also the ethical question of what users do not know: is it a problem that individuals assume bots to be human and merely helpful, rather than programmed to exploit and optimize human behavior?

Bots for civic engagement also call into question the ethics around social media norms. Should people assume that interaction with automatons will occur? Or is this a question of media literacy, where users should be educated enough about the ecosystems they use to be able to point out misinformation, or even find discrepancies between “organic” information and automated information (even when it’s deployed with beneficial motives)? What if the bots are so convincing that users can’t?

Bots for civic engagement was an idea that almost led me to apply for an annual Knight Foundation grant. If you’re interested in building this idea into a tangible project, please email me.

Alex Leavitt is a PhD student at the Annenberg School for Communication at the University of Southern California. Read more about his research at http://alexleavitt.com or find him on Twitter at http://twitter.com/alexleavitt.

One thought on “The Ethics of Attention (Part 2): Bots for Civic Engagement”

  1. Interesting. Is this a kind of spam for social justice? I don’t think that anybody likes bots, and the people most vulnerable to them are often those who have the least media literacy to begin with, i.e. under-resourced and often minority populations. Presumably if the bots were sophisticated enough to never be detected as such, no harm no foul. But the people they’re most likely to fool are already getting fooled left and right for less benign reasons.

    This is, indeed, an ethical issue.
