
Can an algorithm be wrong? Twitter Trends, the specter of censorship, and our faith in the algorithms around us

October 19, 2011

The interesting question is not whether Twitter is censoring its Trends list. The interesting question is, what do we think the Trends list is, what it represents and how it works, that we can presume to hold it accountable when we think it is “wrong?” What are these algorithms, and what do we want them to be?

(Cross posted from Culture Digitally.)

It’s not the first time it has been asked. Gilad Lotan at SocialFlow (and erstwhile Microsoft UX designer), spurred by questions raised by participants and supporters of the Occupy Wall Street protests, asks the question: is Twitter censoring its Trends list to exclude #occupywallstreet and #occupyboston? While the protest movement gains traction and media coverage, and participants, observers and critics turn to Twitter to discuss it, why are these widely-known hashtags not Trending? Why are they not Trending in the very cities where protests have occurred, including New York?

The presumption, though Gilad carefully debunks it, is that Twitter is, for some reason, either removing #occupywallstreet from Trends, or has designed an algorithm to prefer banal topics like Kim Kardashian’s wedding over important, contentious political debates. Similar charges emerged around the absence of #wikileaks from Twitter’s Trends when the trove of diplomatic cables was released in December of last year, as well as around the #demo2010 student protests in the UK, the controversial execution of #TroyDavis in the state of Georgia, the Gaza #flotilla, even the death of #SteveJobs. Why, when these important points of discussion seem to spike, do they not Trend?

Despite an unshakeable undercurrent of paranoid skepticism, in the analyses and especially in the comment threads that trail off from them, most of those who have looked at the issue are reassured that Twitter is not in fact censoring these topics. Their absence on the Trends listings is a product of the particular dynamics of the algorithm that determines Trends, and the misunderstanding most users have about what exactly the Trends algorithm is designed to identify. I do not disagree with this assessment, and have no particular interest in reopening these questions. Along with Gilad’s thorough analysis, Angus Johnston has a series of posts (1, 2, 3, and 4) debunking the charge of censorship around #wikileaks. Trends has been designed (and re-designed) by Twitter not to simply measure popularity, i.e. the sheer quantity of posts using a certain word or hashtag. Instead, Twitter designed the Trends algorithm to capture topics that are enjoying a surge in popularity, rising distinctly above the normal level of chatter. To do this, their algorithm is designed to take into account not just the number of tweets, but factors such as: is the term accelerating in its use? Has it trended before? Is it being used across several networks of people, as opposed to a single, densely-interconnected cluster of users? Are the tweets different, or are they largely re-tweets of the same post? As Twitter representatives have said, they don’t want simply the most tweeted word (in which case the Trend list might read like a grammar assignment about pronouns and indefinite articles) or the topics that are always popular and seem destined to remain so (apparently this means Justin Bieber).
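The factors described above can be sketched as a toy scoring function. To be clear, everything here — the field names, the weights, the thresholds — is invented for illustration; Twitter has described the kinds of signals it uses, but not its actual formula.

```python
from dataclasses import dataclass

@dataclass
class TopicStats:
    """Aggregate counts for one term, over a current and a preceding window."""
    tweets_now: int          # tweets containing the term in the current window
    tweets_before: int       # tweets in the preceding window of equal length
    unique_clusters: int     # distinct user communities using the term
    retweet_fraction: float  # share of those tweets that are retweets
    has_trended_before: bool

def trend_score(s: TopicStats) -> float:
    """Score a term by its surge, not its raw volume (illustrative only)."""
    # Acceleration: how sharply usage rose relative to the recent baseline.
    acceleration = s.tweets_now / max(s.tweets_before, 1)
    # Penalize conversations confined to a few densely-connected clusters.
    diversity = min(s.unique_clusters, 10) / 10
    # Penalize streams dominated by retweets of the same post.
    originality = 1.0 - s.retweet_fraction
    # Dampen terms that have already trended.
    novelty = 0.5 if s.has_trended_before else 1.0
    return acceleration * diversity * originality * novelty
```

Under a scheme like this, a term with enormous but steady volume (tweets_now roughly equal to tweets_before) scores low no matter how large the numbers are, while a modest topic that suddenly spikes across many communities scores high — which is exactly the behavior the censorship charges mistake for suppression.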

The charge of censorship is, on the face of it, counterintuitive. Twitter has, over the last few years, enjoyed and agreed with claims that it has played a catalytic role in recent political and civil unrest, particularly in the Arab world, wearing its political importance as a red badge of courage (see Shepherd and Busch). To censor these hot-button political topics from Trends would work against its self-proclaimed purpose and, more importantly, its marketing tactics. And, as Johnston noted, the tweets themselves are available, many highly charged – so why, and to what end, remove #wikileaks or #occupywallstreet from the Trends list, yet let the actual discussion of these topics run free?

On the other hand, the vigor and persistence of the charge of censorship is not surprising at all. Advocates of these political efforts want desperately for their topic to gain visibility. Those involved in the discussion likely have an exaggerated sense of how important and widely-discussed it is. And, especially with #wikileaks and #occupywallstreet, the possibility that Twitter may be censoring their efforts would fit their supporters’ ideological worldview: Twitter might be working against Wikileaks just as Amazon, Paypal, and Mastercard were; or in the case of #occupywallstreet, while the Twitter network supports the voice of the people, Twitter the corporation of course must have allegiances firmly intertwined with the fatcats of Wall Street.

But the debate about tools like Twitter Trends is, I believe, a debate we will be having more and more often. As more and more of our online public discourse takes place on a select set of private content platforms and communication networks, and these providers turn to complex algorithms to manage, curate, and organize these massive collections, there is an important tension emerging between what we expect these algorithms to be, and what they in fact are. Not only must we recognize that these algorithms are not neutral, and that they encode political choices, and that they frame information in a particular way. We must also understand what it means that we are coming to rely on these algorithms, that we want them to be neutral, we want them to be reliable, we want them to be the effective ways in which we come to know what is most important.

Twitter Trends is only the most visible of these tools. The search engine itself, whether Google or the search bar on your favorite content site (often the same engine, under the hood), promises to provide a logical set of results in response to a query, but in fact weighs a range of criteria so as to serve up results that satisfy, not just the user, but the aims of the provider, their vision of relevance or newsworthiness or public import, and the particular demands of their business model. As James Grimmelmann observed, “Search engines pride themselves on being automated, except when they aren’t.” When Amazon, or YouTube, or Facebook offers to algorithmically and in real time report on what is “most popular” or “liked” or “most viewed” or “best selling” or “most commented” or “highest rated,” it is curating a list whose legitimacy is based on the presumption that it has not been curated. And we want these lists to feel that way, even to the point that we are unwilling to ask about the choices and implications of the algorithms we use every day.

Peel back the algorithms, and this becomes quite apparent. Yes, a casual visit to Twitter’s home page may present Trends as an unproblematic list of terms that might appear a simple calculation. But even a cursory look at Twitter’s explanations of how Trends works – in its policies and help pages, in its company blog, in tweets, in response to press queries, even in the comment threads of the censorship discussions – shows Twitter laying bare the variety of weighted factors Trends takes into account, and copping to the occasional and unfortunate consequences of these algorithms. Wikileaks may not have trended when people expected it to because it had before; because the discussion of #wikileaks grew too slowly and consistently over time to have spiked enough to draw the algorithm’s attention; because the bulk of messages were retweets; or because the users tweeting about Wikileaks were already densely interconnected. When Twitter changed its algorithm significantly in May 2010 (though, undoubtedly, it has been tweaked in less noticeable ways before and since), it announced the change on its blog, explained why it was made – and even apologized directly to Justin Bieber, whose position in the Trends list would be diminished by the change. In response to charges of censorship, Twitter has explained why it believes Trends should privilege terms that spike, terms that exceed single clusters of interconnected users, new content over retweets, and new terms over already-trending ones. Critics gather anecdotal evidence and conduct thorough statistical analyses, using available online tools that track the raw popularity of words in a vastly more exhaustive and catholic way than Twitter does, or at least is willing to make available to its users. The algorithms that define what is “trending” or what is “hot” or what is “most popular” are not simple measures; they are carefully designed to capture something the site providers want to capture, and to weed out the inevitable “mistakes” a simple calculation would make.
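The "grew too slowly to spike" explanation can be made concrete with a minimal spike detector. This is a sketch of the general technique (comparing current volume to a trailing baseline), not Twitter's actual method; the window size and threshold are arbitrary choices.

```python
def spikes(counts, window=6, threshold=3.0):
    """Return the indices of windows where volume jumps past
    `threshold` times the trailing mean.

    A term whose use grows steadily never clears the bar, because its
    own recent history keeps raising the baseline it is measured
    against. (Illustrative sketch only.)
    """
    flagged = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] / baseline >= threshold:
            flagged.append(i)
    return flagged
```

Feed this a series that grows 10% per interval — substantial, sustained growth — and nothing is ever flagged; a flat series with one sudden burst is flagged immediately. That asymmetry, not suppression, is one plausible reading of why a steadily swelling #wikileaks conversation never registered.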

At the same time, Twitter most certainly does curate its Trends lists. It engages in traditional censorship: for example, a Twitter engineer acknowledges here that Trends excludes profanity, something that’s obvious from the relatively circuitous path that prurient attempts to push dirty words onto the Trends list must take. Twitter will remove tweets that constitute specific threats of violence, copyright or trademark violations, impersonation of others, revelations of others’ private information, or spam. (Twitter has even been criticized (1, 2) for not removing some terms from Trends, as in this user’s complaint that #reasonstobeatyourgirlfriend was permitted to appear.) Twitter also engages in softer forms of governance, by designing the algorithm so as to privilege some kinds of content and exclude others, and some users and not others. Twitter offers rules, guidelines, and suggestions for proper tweeting, in the hopes of gently moving users towards the kinds of topics that suit the site and away from the kinds of content that, were it to trend, might reflect badly on the site. For some of its rules for proper profile content, tweet content, and hashtag use, the punishment imposed on violators is that their tweets will not factor into search or Trends – thereby culling the Trends lists by culling what content is even in consideration for them. Twitter also includes terms from promotional partners in its Trends, terms that were not otherwise spiking in popularity. This list, automatically calculated on the fly, is nevertheless also the result of careful curation to decide what it should represent, what counts as “trend-ness.”

Ironically, terms like #wikileaks and #occupywallstreet are exactly the kinds of terms that, from a reasonable perspective, Twitter should want to show up as Trends. If we take the reasonable position that Twitter is benefiting from its role in the democratic uprisings of recent years, that it is pitching itself as a vital tool for important political discussion, and that it wants to highlight terms that will support that vision and draw users to topics that strike them as relevant, #occupywallstreet seems to fit the bill. So despite carefully designing its algorithm away from the perennials of Bieber and the weeds of common language, Twitter still cannot always successfully pluck out the vital public discussion it might want. In this, Twitter is in agreement with its critics; perhaps #wikileaks should have trended after the diplomatic cables were released. These algorithms are not perfect; they are still cudgels, where one might want scalpels. The Trends list can often look, in fact, like a study in insignificance. Not only are the interests of a few often precisely irrelevant to the rest of us, but much of what we talk about on Twitter every day is in fact quite everyday, despite our most heroic claims of political import. But many Twitter users take Trends to be not just a measure of visibility but a means of visibility – whether or not the appearance of a term or #hashtag actually increases audience, which is not in fact clear. Trends offers to propel a topic towards greater attention, and offers proof of the attention already being paid. Or seems to.

Of course, Twitter has in its hands the biggest resource by which to improve its tool: a massive and interested user base. One could imagine “crowdsourcing” this problem, asking users to rate the quality of the Trends lists, and assessing these responses over time and across a huge number of data points. But Twitter faces a dilemma: revealing the workings of its algorithm, even enough to respond to charges of censorship and manipulation, much less to share the task of improving it, risks helping those who would game the system. Everyone from spammers to political activists to 4chan tricksters to narcissists might want to “optimize” their tweets and hashtags so as to show up in the Trends. So the mechanism underneath this tool, which is meant to present a (quasi) democratic assessment of what the public finds important right now, cannot reveal its own “secret sauce.”

Which in some ways leaves us, and Twitter, in an unresolvable quandary. The algorithmic gloss of our aggregate social data practices can always be read, or misread, as censorship, if the results do not match what someone expects. If #occupywallstreet is not trending, does that mean (a) it is being purposefully censored? (b) it is very popular, but consistently so, not a spike? (c) it is actually less popular than one might think? Broad scrapes of huge data, like Twitter Trends, are in some ways meant to show us what we know to be true, and to show us what we are unable to perceive as true because of our limited scope. And we can never really tell which it is showing us, or failing to show us. We remain trapped in an algorithmic regress, and not even Twitter can help, as it cannot risk revealing the criteria it uses.

But what is most important here is not the consequences of algorithms, it is our emerging and powerful faith in them. Trends measures “trends,” a phenomenon Twitter gets to define and build into its algorithm. But we are invited to treat Trends as a reasonable measure of popularity and importance, a “trend” in our understanding of the term. And we want it to be so. We want Trends to be an impartial arbiter of what’s relevant… and we want our pet topic, the one it seems certain that “everyone” is (or should be) talking about, to be duly noted by this objective measure specifically designed to do so. We want Twitter to be “right” about what is important… and sometimes we kinda want them to be wrong, deliberately wrong – because that will also fit our worldview: that when the facts are misrepresented, it’s because someone did so deliberately, not because facts are in many ways the product of how they’re manufactured.

We don’t have a sufficient vocabulary for assessing the algorithmic intervention of a tool like Trends. We’re not good at comprehending the complexity required to make a tool like Trends – one that seems to effortlessly identify what’s going on, that isn’t swamped by the mundane or the irrelevant. We don’t have a language for the unexpected associations algorithms make, beyond the intention (or even comprehension) of their designers. We don’t have a clear sense of how to talk about the politics of this algorithm. If Trends, as designed, does leave #occupywallstreet off the list, even when its use is surging and even when some people think it should be there: is that the algorithm correctly assessing what is happening? Is it looking for the wrong things? Has it been turned from its proper ends by interested parties? Too often, maybe in nearly every instance in which we use these platforms, we fail to ask these questions. We equate the “hot” list with our understanding of what is popular, the “trends” list with what matters. Most importantly, we may be unwilling or unable to recognize our growing dependence on these algorithmic tools, as our means of navigating the huge corpora of data that we must, because we want so badly for these tools to perform a simple, neutral calculus, without blurry edges, without human intervention, without having to be tweaked to get it “right,” without being shaped by the interests of their providers.

21 Comments
  1. anonymous permalink
    October 19, 2011 9:55 am

    We also have no ability to view the algorithm itself; we have to take Twitter’s word for it. Somewhere amongst the code for the site there is a set of instructions that make up the algorithm. We cannot see it. This is always the problem of a closed system. And while the algorithm in its coded form would likely be impenetrable to most, it’s almost a given that someone would take the time to explain it. And so if Twitter were a FLOSS platform released, say, under the AGPL, we could figure out what the algorithm _is_ while we simultaneously examine why we are latching ourselves onto an algorithm in the first place.

    • October 20, 2011 5:47 am

      I agree, the fact that these algorithms are “closed code” is deeply problematic for the way we increasingly want them to be “democratic” measures of what people are doing. And in some ways, they have to be secret, if only to avoid the “search engine optimization” problem, that there are spammers and pranksters who want to game these systems, and would take advantage of whatever knowledge of criteria, factors, weighting, etc is going on.

  2. October 20, 2011 4:05 am

    This is extremely interesting. Can such mechanisms lead to a new incarnation of the “Spiral of Silence”? This incarnation may have much more power than the old one, exactly because of the illusion of transparency.

    • October 20, 2011 5:50 am

Limor, I think there is reason to think about something like a spiral of silence, in terms of what people think is being talked about. But added to the illusion of transparency is the illusion of consistency: I may hit Twitter, see the Trends list, and assume that this is the Trends list every other user is seeing right now, or all day. But as Twitter and others tailor these lists by time, region, past interests, and individual user, we are not even seeing the same list. Would that undercut a spiral of silence effect, or exaggerate it?

  3. October 20, 2011 10:24 am

    This is a fantastic post, Tarleton — extremely helpful piece to think with! In response to Limor’s comment, I agree that this phenomenon is relevant to the Spiral of Silence effect. From my memory of Noelle-Neumann’s original framing of it, I recall there being two elements to the likelihood of not expressing yourself: that you perceive your opinion to be in the minority AND that you perceive your opinion to be declining in popularity. That is, there are both snap-shot and time-based elements to the perception. Twitter’s Trending Topics widget (and other similar representations) seem to collapse these two elements of the Spiral into one. As an empirical project, it’d be interesting to see how people think about these two elements (what a majority opinion is *now* versus what it *might* be). And as a design project, we might think about interventions that could separate out these two aspects of the phenomenon. E.g., I can imagine representations of “trending” that don’t just make the algorithm transparent, but that suggest the confidence of currently perceived trends, the likelihood of future trends, and that represent shifts in trends. As your post and comment rightly points out, the key here is not only to make a critique based on transparency but to explore *how* transparency itself might be unpacked, and understand the democratic work it does. (I’m also thinking here of Jodi Dean’s critiques of transparency, e.g., her “Publicity’s Secret” book or “Why the Net is Not a Public Sphere” article.) Anyway, thanks, great stuff.

  4. October 26, 2011 11:06 am

    Like the other respondents, I too really enjoyed the piece. I think it is a great example for David Beer’s New Media & Society piece on the power of the algorithm. The idea that we are increasingly experiencing the world around us through algorithms that no one really understands is an intriguing, if disturbing one.
    In this regard I think that the idea of autonomic computing is especially relevant – to the best of my understanding, this is an attempt to get computer systems to be self-managing, given that they are becoming too complicated for people to understand and manage anyway.

  5. December 29, 2011 9:47 am

    Hey I got a great idea. Just go down to your nearest coffee shop and start talking to people about whatever you want. There’s no censorship there. You can say whatever you want and it’s not going to be posted up on the wall forever. You might even make a real friend, unless he’s a government agent provocateur.

  6. rideforever permalink
    February 11, 2013 3:14 pm

    All algorithms are wrong because they are models.

Models are not reality.


