Last Thursday and Friday, moderators of many of Reddit’s most popular discussion groups “blacked out” their subreddits, preventing access to parts of the site by Reddit subscribers and cutting off some of the company’s advertising revenue for half a day. What may not have started as a protest quickly became one, with many moderators complaining that the company needed to offer better communication and better tools to its volunteer moderators. Reddit’s management responded within hours, apparently after substantive negotiations with moderators, and promised to meet those demands.
This story was covered widely in the press last weekend, with the MediaCloud project tracking 92 articles in the mainstream media and 51 in its “tech blogs” dataset.
As a PhD candidate spending my summer researching the work of moderators on Reddit, I’ve been asked repeatedly by journalists to share my results. I’ve resisted commenting, because we often want easy answers in the heat of the moment: Will Reddit survive? What do I think about Reddit CEO Ellen Pao? Are moderators exploited labor, a “product being sold” to advertisers? In my research this summer, I’m trying to go beyond these important, near-term questions to understand the work that Reddit moderators do and how they see it.
Although it’s too early to share my results, I *can* share some of what I’ve found. I hope this post is useful to journalists writing about the Reddit blackout, and I hope that Reddit moderators read this too, so you can tell me what I am getting right, what I’m misunderstanding, and what conversations I’m missing.
- Why Does This Matter?
- What Is a Subreddit?
- What Do Reddit Moderators Do?
- How Do You Become a Moderator on Reddit?
- How Many Moderators Are There?
- What Does It Mean to “Go Dark,” “Go Private,” or “Black Out” and is This A New Thing?
- How Did Moderators Decide to Take Subreddits Private?
- What Were the Consequences of Taking Subreddits Private?
- Who’s In The Majority? What Do “Reddit Users” Think?
- Final Thoughts and Next Steps
How I’m Doing My Research
In this post, I avoid linking or directly mentioning specific Reddit users or subreddits for research ethics reasons. They’ve had a hard enough week without me sending more attention their way. Read more about my methods, ethics, and promises to Reddit users here.
Even before this weekend’s controversy, I had analyzed 50 interviews with groups of subreddit moderators, constructed a timeline of the history of subreddits and the role of moderators, followed hundreds of job board postings where moderators apply for and accept moderating roles, and watched videos about the work of moderators. I also collected summary statistics from the Reddit API to understand how many moderators there are. Finally, I have personal experience facilitating and moderating a high-profile online community, The Atlantic’s Twitter book club, which I moderated from 2012 to 2014.
Since the blackout started, I have spent most of my waking hours archiving and reading material about the controversy, including:
- Over 500 links that appeared in the Reddit Live feed on the blackout, a feed maintained by Reddit users
- Data on which subreddits went private, and by implication, which did not
- Over a hundred messages stating why “this subreddit is private” during the blackout
- Hundreds of discussions in subreddits debating if they should go private
- Around 50 discussions in subreddits that chose not to go private (I’m still adding to this)
- “We’re back” discussions, where moderators justified, defended, or apologized for the decision to go private
- Notable discussions in “meta-subreddits” where users from across the site reflected on and responded to the issue
- Historical records of other times that moderators made subreddits private
- Limitation: I do not currently have access to the private subreddits where moderators of top subs discussed their decisions and goals, nor the conversations between mods and the company. Even though I can offer assurances of privacy, anonymity, and security in my archival of those conversations, moderators of the largest subreddits have not at this point trusted me to participate, a choice that I can understand.
- Limitation: I haven’t archived conversations off-reddit where people claiming to be moderators have discussed the issue, because I have no way of confirming that they are actually moderators. The one exception is journalistic interviews or op-eds that name the moderators.
It’s too soon for me to draw conclusions from such a wide-ranging dataset, but I mention it in case there are important conversations I’m missing. If you’re a reddit moderator, or if you mod a sub where moderators discuss these issues, please contact me at u/natematias.
1. Why Does This Matter?
Reddit is one of the world’s most popular social/content platforms, with roughly half of Twitter’s monthly active visitors and 2/5 the monthly unique visitors of Wikipedia. YouTube, Facebook, and many news sites review content through flagging systems, with large numbers of paid staff reviewing content. For example, the Huffington Post pre-moderates 450,000 comments per day, paying between $0.005 and $0.25 for every comment. When The Verge recently turned off comments, worried that “sometimes it gets too intense,” they may also be saving a *lot* of money. Reddit mostly relies on its volunteer moderators to support and maintain conversations on the site, and the company has traditionally offered them substantial autonomy in return.
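Taking the HuffPost figures above at face value, the implied daily moderation bill is easy to bound with a little arithmetic:

```python
# Back-of-envelope bound on a daily comment-moderation bill, using
# the figures quoted above: 450,000 comments/day at $0.005-$0.25 each.
comments_per_day = 450_000
cost_low = comments_per_day * 0.005   # cheapest quoted per-comment rate
cost_high = comments_per_day * 0.25   # most expensive quoted per-comment rate

print(f"${cost_low:,.0f} to ${cost_high:,.0f} per day")
```

That works out to somewhere between $2,250 and $112,500 every day, which puts Reddit’s reliance on volunteers in financial perspective.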
My research on Reddit moderators isn’t just about Reddit. Anyone who cares about a fair, free, and meaningful social web should be paying close attention to sites like Reddit, Meetup, Craigslist, and Wikipedia that rely mostly on user initiative. If volunteer moderators, upvoting systems, and other community-driven approaches to supporting large-scale collective projects ultimately fail (and there are many ways to fail), it will be hard to justify anything but a fully-commercial web. At the same time, platforms are also creating new categories of work that defy the boundaries and expectations of mid-20th century labor, new categories that also create new problems.
Whatever the wider issues, the blackout also matters deeply to millions of subreddit moderators and subscribers. Content, conversations, and relationships on Reddit are fully a part of many people’s lives. In addition to books, jokes, porn, deals, advice, inspiration, debates, and news, people also sometimes go to Reddit to ask for feedback on intimate questions they would never dare ask anywhere else, including help with thoughts of suicide or responses to their religious and political doubts. Sometimes, when pseudonymity is not enough, users create throwaway accounts to ask especially sensitive questions.
Moderators on Reddit carry a great responsibility of care for those who participate in their groups. They also face a great deal of pressure and scrutiny from their subscribers. When discussing the decision to go private, many moderators described the difficulty of weighing the costs that this choice entailed. I hope I do their work justice in this post.
2. What Is a Subreddit?
Subreddits, conversation groups on Reddit, are often compared to forums, mailing lists, and earlier bulletin board systems. Contributions can usually be up or down voted, and are then algorithmically sorted on each subreddit’s front page. Unlike earlier discussion platforms, users on Reddit can move between public subreddits without having to create new user accounts, and contributions will sometimes surface on other parts of the site based on how popular they are.
Each subreddit has its own volunteer moderation team, who have wide ranging influence over the visual style, rules, and operation of that subreddit. Importantly, many of the popular subreddits are configured so that moderators don’t pre-approve participants; instead, they tend to take a reactive approach to behavior on their “subs.”
The ease of finding, joining, and participating in a new subreddit might be one reason that many users talk about “Reddit” culture. Many moderators describe their own communities as nicer, more welcoming and supportive than the “rest of Reddit.” This impression is at least partly shaped by the flow of newcomers who arrive when a sub becomes momentarily prominent due to highly upvoted content, a special event like a live Q&A (called AMAs), or “drama” among subscribers.
The commingling and collision of different conversations on Reddit is similar to what danah boyd came to call “context collapse” in her early-2000s research on Friendster. On Friendster, boyd observed Burning Man attendees, gay men, and geeks responding to the discovery that they were conversing on the same platform. Reddit is designed to facilitate context collapse at speed and scale, supported by popularity algorithms that tend to draw attention to upvoted content and “drama” alike.
Reddit’s algorithms were the reason Reddit created the very first subreddit in January of 2006, its “NSFW” section. Trying to use popularity and voting systems to curate the “Front Page of the Internet,” Reddit’s creators noticed that porn and other complicated material was being promoted to the top of the page. By creating an “NSFW” section (the name “subreddit” came a month later) and excluding it from the front page, the company could decide which conversations to promote without interfering with the autonomy of user voting.
Over the next two years, the company started dozens of new subreddits, mostly to separate conversations happening in different languages. Then in January 2008, a year and a half after its acquisition by Condé Nast, and ten months after introducing ads, the company launched “user-controlled subreddits.” Before then, users could join official company subreddits, reporting spam and abuse directly to the company. Now they could create their own public and private subreddits, taking action themselves to “remove posts and ban users.” Although subreddits have evolved since then, the basic structure has remained much the same.
Subscribing to a subreddit does not always imply an idea of “membership” in a “community.” Many users treat subreddits as newsfeeds. The default view for logged-in users uses a news feed algorithm to create “your front page” from “hot” posts across all of your subscriptions. As with the Facebook newsfeed, users subscribing to subreddits this way will only see a few of the most prominent posts.
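Reddit published its ranking code as open source, so the “hot” sort behind “your front page” is not a secret. The sketch below is a simplified Python rendering of that published formula (the constants come from the open-source code as of this period), and it illustrates why only recent, heavily upvoted posts surface in subscribers’ feeds:

```python
from datetime import datetime, timezone
from math import log10

# Unix epoch, used to turn datetimes into seconds.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def epoch_seconds(date):
    """Seconds since the Unix epoch for a timezone-aware datetime."""
    return (date - EPOCH).total_seconds()

def hot(ups, downs, date):
    """Simplified version of Reddit's open-source 'hot' score.

    Votes count logarithmically (the first 10 upvotes matter as much
    as the next 90), while recency counts linearly: every 45,000
    seconds (12.5 hours) of age costs as much rank as a 10x
    difference in net score.
    """
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = epoch_seconds(date) - 1134028003  # offset near Reddit's launch
    return round(sign * order + seconds / 45000, 7)
```

Because the recency term grows without bound, even wildly popular posts drop off the front page within a day or so, which is why users who treat subreddits as newsfeeds see only a thin slice of what is posted.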
3. What Do Reddit Moderators Do?
Recent press coverage has focused on the work of moderators to filter the content and conversations that are posted to the site. Moderator teams do much more. They are:
- founders, entrepreneurially creating new subreddits and growing their subscriber base.
- designers, creating unique styles for their subreddits, designing ads to attract other users to their sub, writing copy for the sub’s public-facing materials as well as its wiki. Moderators also design and customize the bots that help them do their work and participate in the sub’s conversations.
- facilitators, maintaining the structure of conversation on their sub, whether through AMAs, weekly discussions, contests, or votes. Moderators also participate in discussions.
- recruiters and promoters, promoting the subreddit to subscribers, soliciting contributions, and recruiting other moderators.
- legislators and judges, discussing and defining the rules on the subreddit’s sidebar and wiki, as well as taking actions to enforce what they think the conversation ought to be.
- responders, taking actions to respond to internal “drama” and external sources of influence, which may be welcome or unwelcome.
Much of this work is made possible through special features that Reddit makes available to moderators, alongside custom software that non-employees have created, from bots to browser plugins.
Moderators are not the only people to do this work. Subscribers are often very active in these areas too, as Brian Butler observed of mailing lists in the late 90s. Moderators’ actual behavior is also not always so neatly defined or benevolent as this list implies, and they vary widely in the effort and attention they give to subs.
My own understanding of moderators’ work is evolving as I continue to read and observe their work across the site.
4. How Do You Become a Moderator on Reddit?
The simplest way to become a moderator is to start your own subreddit. Most moderators of more popular subreddits are added by other moderators, through a variety of processes:
- A friend outside Reddit asks you to do it as a favor
- You see a call for help from a moderator on a subreddit you subscribe to
- You follow the job board where moderators post moderating opportunities
- After you become known for your capability at some aspect of moderating (CSS, bots, diplomacy), you are approached by moderators to join the mod team
- Hoping to build your reputation and connections to moderators, you do an internship in one of the subreddits
Just as other moderators can add you as a moderator to a subreddit, they also have the power to remove you from the sub.
5. How Many Moderators Are There?
There are roughly as many moderator accounts as subreddits. In a random sample of 100,615 subreddits (roughly 1/6 of all public subreddits), I found 91,563 unique moderator accounts. A similar proportion of moderator accounts supports Reddit’s top conversations. A sample of the 9,880 subreddits with the greatest number of subscribers had around 9,900 moderators, with an average of 5 moderators per subreddit, after taking out bots.
Some moderator accounts are likely throwaway accounts, where a single moderator uses multiple personas to support different subreddits. Bots have their own moderation accounts. I’ve also seen numerous cases where moderators use a shared account to distinguish when they are speaking for the entire mod team from when they’re speaking in a personal capacity.
Finally, because some moderators specialize in things like bots or CSS, some users are moderators of very large numbers of subreddits.
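The statistics above come down to set arithmetic over moderator lists. Here is a toy sketch with made-up subreddit and account names; actual collection would fetch the public moderator listing for each sampled subreddit via the Reddit API, and, as noted, would still have to contend with throwaways and shared team accounts:

```python
# Toy illustration of counting unique moderators across a sample.
# Subreddit names and accounts are invented; AutoModerator is a real
# Reddit bot, shown here as an example of accounts to exclude.
sample = {
    "sub_a": ["alice", "bob", "AutoModerator", "carol"],
    "sub_b": ["bob", "dave", "AutoModerator"],
    "sub_c": ["erin", "frank", "carol"],
}

BOTS = {"AutoModerator"}  # known bot accounts to exclude

# Strip bots from each subreddit's moderator team.
human_teams = {
    sub: [m for m in mods if m not in BOTS]
    for sub, mods in sample.items()
}

# Distinct human accounts, and the average team size per subreddit.
unique_mods = {m for team in human_teams.values() for m in team}
avg_team_size = sum(len(t) for t in human_teams.values()) / len(human_teams)

print(len(unique_mods))   # distinct human moderator accounts
print(avg_team_size)      # average human moderators per subreddit
```

In this toy sample, six distinct accounts cover eight moderator seats, the same kind of overlap (specialists modding many subs, shared accounts) that makes the real counts tricky to interpret.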
6. What Does It Mean to “Go Dark,” “Go Private,” or “Black Out” and is This A New Thing?
Moderators have the power to make their subreddits private, which prevents anyone who is not explicitly approved from accessing or contributing to the subreddit. In a large public subreddit, this action has the effect of preventing almost everyone on Reddit, including most subscribers, from accessing or posting to the subreddit. All of the content of the subreddit also disappears from the public web and, given enough time, may also disappear from search results.
Reddit may possibly lose advertising on private subreddits, since the content is not public. However, it’s also possible that the controversy on Reddit could have attracted even more attention and revenue to the site. There is some evidence from subscription bots that subreddits that stayed up received unusually high numbers of new subscribers during the blackout. (An economist would find this question fascinating, if Reddit ever chose to share its advertising data.)
Moderators have taken subreddits private before, and while I’m still studying the history of this tactic, I’ve seen it used mostly to deal with internal or external drama.
External drama: Moderators sometimes take a subreddit private to protect it from large waves of attention from elsewhere on the site. This can happen when a subreddit is unexpectedly promoted by algorithms to the site’s front page, when an internal controversy gets onto the “drama” subreddits, or when other subreddits try to “brigade” a group by influencing the votes on its comments. In these situations, it can be hard for moderators to deal with comments from people who don’t care about or don’t yet understand the norms of their group. Moderators do have other ways to prevent or deal with this problem, like removing their subreddit from Reddit’s main feed or default listings. Making their sub private is a last line of defense.
Internal drama: Other moderators make their subreddit private to show their displeasure with subscribers.
I know of one case where making a subreddit private was used to put pressure on a company. In this case, a moderator of a gaming-related subreddit was unhappy with that company’s handling of a beta program. To pressure the company to change its policy, this moderator blacked out the fan conversation on Reddit.
Blacking out a subreddit can make its subscribers angry. On this gaming subreddit, some subscribers retaliated by “doxxing” the moderator, finding and posting the moderator’s sensitive personal information. In response to this internal drama, the moderator temporarily took the subreddit private again as a defense against their attacks.
This week, two moderators of the IAmA subreddit claimed in the New York Times that they weren’t intending to start a protest when they set their subreddit to private, and that claim isn’t unimaginable. If you’re worried about a huge influx of controversy into your subreddit after a surprise HR decision by the company, blacking out is one of the things a moderator can do to gain the breathing space to respond, even if it is probably the most extreme response short of deleting the group entirely.
What made last weekend unique was that moderators of so many subreddits blacked out on the same day, many of them endorsing a set of demands around which they could at least find solidarity. That appears to be new.
Some subreddits are now adding timers to their sidebar, promising to black out again if Reddit doesn’t make satisfactory changes.
7. How Did Moderators Decide to Take Subreddits Private?
Over the last week, I’ve archived hundreds of conversations in subreddits as they decided whether to join. While many subreddits showed no evidence that moderators ever discussed the idea with their subscribers, many others discussed it openly or put it to a vote.
Because the controversy and blackout unfolded so quickly, many moderators missed it completely. In some cases, moderators asked subscribers whether they should join, only to be told that the subreddit had already blacked out and concluded its participation. In other cases, moderators made unilateral decisions that were later reversed by other moderators, sometimes costing the original actor their position.
In many other cases, moderators said that they had discussed the idea among themselves, often describing their actions as those of a group rather than of individuals. Other moderators referred to deliberations with the company and with other subreddits’ moderators, conversations that I don’t have access to.
When staying open, moderators sometimes justified their choice by describing the harm that could result, especially among subreddits that offer direct support to people with urgent needs. In several of those cases, moderators took heavy criticism from their subscribers for declining to join the protest.
8. What Were the Consequences of Taking Subreddits Private?
Although the press has focused on the pressure that Reddit is under from its moderators, those moderators have also been under great pressure from Reddit users, whose social lives they abruptly disrupted. To study these pressures, I have collected an archive of “We’re back” conversations where moderators justified and defended their blackout decisions.
In their complaints, many subscribers drew parallels between Reddit’s treatment of moderators and some moderators’ lack of communication with their own subreddits. When IAmA moderators Lynch and Swearingen wrote in the New York Times, “Our goal is not to cripple Reddit or hinder the community. We are all the community,” they echoed language that many other moderators used to win over their worried and upset subscribers.
At the same time, declining to go private also risked moderators’ legitimacy with subscribers. Many Reddit users supported the blackout, pressuring moderators to join in. Some of those supporters were opposed to Reddit’s staff and CEO in general; the Change.org petition calling for her dismissal was originally created weeks ago by subscribers who wanted the company to reinstate fat-shaming groups. Other subscribers expressed support for Victoria Taylor, the employee whose abrupt termination sparked the blackout. Moderators who declined to go private likely found their leadership questioned.
9. Who’s In The Majority? What Do “Reddit Users” Think?
I don’t think this is the right question. There is huge variation in how different groups of moderators and subscribers handled this issue, and I’m still reading through it all. So far, my research is based on the public conversations that moderators had with their groups, but if there are other conversations I should know about before putting this episode into the scholarly record, please contact me.
10. Final Thoughts and Next Steps
I’m still growing my sense of what happened, why it matters, and what this episode can reveal about the more enduring questions of what it means to do volunteer work in online communities. I hope this post helps answer basic questions about subreddits, what moderators do, and the history of going private.
I also hope it helps Redditors understand more about the state of my research as I continue to ask questions. If you’re a reddit moderator who thinks I’m missing something, or if you mod a sub where moderators have been discussing these issues, please contact me at u/natematias.
Join us today, Thursday July 9 at 3pm ET (9pm CET), to listen to SMC’s Mary L. Gray in a special Theory of Everything online discussion on life in the sharing economy. Benjamen Walker and Andrew Callaway, the official Theory of Everything instapodder, will also be hosting.
Mary is currently researching labor and the sharing economy in the Social Media Collective.
To make sure you don’t miss it, sign up here.
We leave you with the latest episode of Instaserfs, a three-part series about life in the sharing economy.
As we say goodbye to the *amazzzzing* Rebecca Hoffman <serious sad face emoji>, we have the solace of welcoming Andrea Alarcón to SMC’s ranks.
Andrea received her MSc degree from the Oxford Internet Institute, and her BSc in online journalism from the University of Florida. She has researched ICT4D, online language barriers and data collection by international corporations in developing nations. She has worked as a web producer and editor for the World Bank, and in social media for Discovery Channel in Latin America. She currently writes about digital culture for Colombian mainstream media.
Please join us in welcoming Andrea to MSR!
What would be a sustainable and inclusive approach to child safety online? Today at the Berkman Center, Mitali Thakor presented her research on human trafficking and moderated a discussion of how we see and respond to issues of child safety.
Mitali Thakor is a PhD student at MIT’s history and science of technology program, who studies sex work, sex trafficking, technology, and digital forensics. She uses Feminist STS and critical race studies to explore the ways in which activists, computer scientists, lawyers, and law enforcement officials negotiate their relationships to anti-trafficking via emergent technologies and discourses of carceral control.
“What is human trafficking?” asks Mitali. In this growing “industry” of activism there are so-called abolitionist networks: alliances between evangelical Christian organizations committed to fighting prostitution and feminist organizations that oppose sex work, which they see as sexual exploitation. Mitali shows us campaigns by feminist organizations and Christian organizations working together. In her research, she’s interested in the peculiar alliances and valences of this particular anti-trafficking network.
This network uses metaphors of slavery, and idealized ideas of what freedom is about. Men are often imagined as “defenders against slavery.” One organization has a campaign called “The Defenders USA,” where you get your own shield and sword to be a defender against prostitution.
What happens when evangelicals and feminist activists work together– how does that affect our trafficking policies? Mitali says that in 2001, a UN protocol on trafficking began to inform how most countries approach a wide variety of issues from trafficked labor to pornography and sex work. In the US, responses tend to be focused on sexual exploitation rather than wider labor exploitation. Although the agriculture industry dominates US trafficking, the focus on sexual exploitation is associated with a “rescue industry” and heavy involvement of law enforcement. This approach, called “carceral feminism” by some feminist scholars, often involves NGOs and the state working together.
Mitali tells us the story of Monica Jones, a black trans woman social worker in Phoenix, who was arrested by the police in collaboration with anti-trafficking organizations. The ACLU has called this “arrested for walking while trans.” A court has judged her trial unfair and opened it up for retrial. Mitali says that this is one example where carceral feminism involves the policing of sexuality and the incarceration of marginalized groups.
As a PhD student at HASTS, Mitali does extensive fieldwork with computer scientists, law enforcement, and the bureaucrats/government officials who are making decisions about child safety.
Mitali calls this collaboration between NGOs and police “para-judicial policing.” In her fieldwork with a Dutch organization, Mitali is studying these collaborations. She shows us a video by the NGO Terre des Hommes, whose staff went undercover, manipulating a computer-generated model of a girl to identify “webcam sex tourists” and hand them over to the police. Mitali has spent time with this organization and its partners in Southeast Asia.
Sweetie, this generated avatar of a girl, was created by a gaming company for Terre des Hommes. Sweetie can make 14 different movements, including with her arms. She does not undress on camera or perform any kind of sexualized act; she simply sits, talks, and moves her arms. The campaign team worked out of a warehouse (they were worried they would be found by the people they were chatting with). They went into webcam chats, then used the Sweetie image in a minority of cases. They brought each conversation to the point where it seemed like the man wanted something, took whatever identifying information they could, printed a physical packet of papers, and walked the list of names to Interpol and Europol. Many law enforcement officers find this abhorrent and stupid; this is the work of the police, they say. NGOs describe it as a new and cutting-edge model for the future of addressing these issues. Terre des Hommes calls this “pro-active policing.”
When TDH submitted these names, who was actually arrested? Fewer than 20 people, only those who already had open cases. The sting operation can’t directly lead to an arrest.
Why a Filipino child? After testing a variety of avatars, the company settled on her. The image is an amalgam of over 100 children that the organization works with, and the organization has been working in the Philippines for a long time.
Who is the organization trying to catch? Whenever there’s a non-law-enforcement effort, there’s already a pre-determined predator they’re trying to catch. The people who chatted most with Sweetie were from the UK and the US, but India was third, and women also chatted with her. This was an unexpected outcome; the organization had expected to catch European men.
Mitali also researches other visualization and imaging techniques for identifying and detecting “missing children.” She’s also interested in the “gamification of surveillance” and the use of this surveillance (whether photo tagging and image recognition or avatars) to carry out these “policing” activities.
Citing questions raised by her fieldwork, Mitali says, “I’m interested in feminist technologies, and interested in design and ending exploitation. What is at stake in these issues? Do young people have rights? Do they have sexual rights? What does it mean to talk about young people and sexual rights? Are young people’s sexual rights protected under the UN Convention on the Rights of the Child, and does law enforcement think about that? How do we think about risk, and do we see online spaces as spaces of opportunity? What is a problematic versus a dangerous situation? And finally, I’m thinking about governance and design: law enforcement, NGOs, computer scientists, and companies working together. What do these partnerships mean, who’s not at the table, and what might it mean to actually have young people involved in anti-exploitation campaigns?” Mitali asks us to imagine speculative possibilities for liberation and for ending exploitation that still uphold children’s rights.
Question: Sweetie was an amalgam of many real children. Did those children or their parents consent to this use of their images? Mitali: many NGOs face this issue. Terre des Hommes works with many young people who don’t have parents or guardians. Their images were used without their consent, and the Philippine government complained, having felt targeted by the campaign. The idea of “webcam sex tourism,” a phrase this organization coined, combines many complex ideas, and the effort was also a publicity campaign.
Question: Why generate computer-generated children in pornographic situations? Mitali: child pornography is illegal in the US and legal in Japan, and US laws in this area are often challenged by the ACLU. In the US we use the phrase “child pornography,” but in the EU “child abuse images” and “child exploitation images” are the more common terms. The US has moved from a rehabilitative model to one that sets out to incarcerate people for life. As older crimes like public indecency are now tried under trafficking laws, these new laws are changing the penalties for longer-standing issues.
Mary Gray: Many of these campaigns see the Internet as a “stranger-danger threat” when we know that most abuse comes from family and friends. Mitali: campaigns to address sexual exploitation tend to turn into censorship for all sexual information. What might it take to support young people to negotiate risks that they experience?
Question: You have people responding to an image that is false. How might this be considered a form of entrapment? What if people say, “I knew this wasn’t a child; it didn’t look very real”?
David Larochelle: You mentioned that this was a publicity stunt. What was the organization hoping to accomplish aside from catching perpetrators? Was it trying to scare people? Raise money? Mitali: Definitely to raise money; that’s always a goal for any NGO. I think it’s more than a publicity campaign, however. They wanted to “wake up the police who weren’t doing anything.” The police said, “of course we’re always doing investigations, you just don’t hear about them.” This NGO, like many organizations, is reshaping itself around this trafficking frame. Two years ago, they changed their tagline from “saving the children of the world” to “stop exploitation.” This is why I make the link to human trafficking and the anti-trafficking industry, where this is becoming their goal. Now the police are working closely with these organizations on Sweetie 2.0. The Dutch police, one of the largest employers in the Netherlands, were nationalized several years ago and hired computer scientists and psychologists to work on exploitation issues. The police psychologists responded, “if you want to believe that [Sweetie] is a real child to you, it will be real enough.” What does “real enough” mean for policies around “implied,” “artificial,” and “CGI” forms of pornography?
Question: What about the effects of international organized crime? There are groups who are making a lot of money doing this, and police departments are involved because they get kickbacks. The speaker mentioned that when people tried to do work to end human trafficking, they received threats. Mitali: I don’t know too much about organized crime around trafficking, but this was a major concern of the NGO. They didn’t want this design process to get out, and they did their work from an undisclosed location. They now say, “I don’t know why we were so paranoid.” The traditional police’s fear about “proactive policing” is that
Mitali notes that Anonymous has done a lot of anti-trafficking work themselves. Operation “PedoChat” claimed to have outed a large number of people chatting with children and seeking sex online. Mitali notes that “I’m uncomfortable when we have so many entities involved in many kinds of policing. It’s this classic fear of ubiquitous surveillance. What are our fears about young people, and what happens”
Question by me: Having shared more complex cases, what directions do you find most promising in the sex trafficking space? Mitali tells us about an organization called “End Child Prostitution and Trafficking” that is interested in doing research on sexy selfies. For an NGO to be doing that kind of research is radical, and maybe reflects inclusive design. To have organizations thinking about “child” and “sexuality” next to each other is a radical move.
David Larochelle: how does gender play into these debates? Mitali: with trafficking cases, it’s hard to get numbers and specific data, but some researchers in specific places have documented trafficking of boys and men, especially for sexual exploitation. When you use imagery that only shows women and children, you do a disservice by ignoring very real exploitation of men and boys. Furthermore, the number one form of exploitation is of adult men and women in meatpacking plants and farm work, but it’s easier and safer in the current US political context for organizations to focus on sex trafficking and women. When I talk about “carceral feminism,” we’re seeing “heavy policing” and life imprisonment as strong responses to these issues, with incentives like the “war on drugs.”
Question: Viscerally, these crimes of forcing someone to do something against their will feel pretty abhorrent. Where do you see law enforcement fitting in? Mitali: one way would be a child-centered approach rather than “finding the bad guys.” Instead, we might focus on supporting the people who are missing. When children are “rescued” by these campaigns, what happens to them? People who are not citizens of the country where they are rescued are often deported. A child-centered and rehabilitative approach would focus on finding exploited children and caring for them long term.
Question: How much research have you done into the conditions of the children who were doing webcam chats in the Philippines? A serious discussion of their digital rights has to be understood in the context of their access. For example, Sonia Livingstone argues that any discussion of digital rights for children must include analysis of access; it’s easy for people in the Global North to assume similar access for children in the Global South. Mitali: I’ve done some research in Cebu, which is where this NGO works. Internet cafes are common physical social spaces for people to play games and also cam. Terri Senft has done research with camgirls, and we need more work with children.
Question by me: I know you’ve published whitepapers and other reports together with Microsoft; how do you think about the role your work plays in these issues? Mitali: I turn the lens on people in positions of power, doing ethnography of the police and methods of policing. Other researchers have looked at children, and cultural spaces of children’s sexuality. I hope that this work can help people think about the people in positions of power, something that STS is designed to do.
Mary Gray: How do the police feel about this being the “drain” of their focus versus other kinds of policing? Mitali: it depends on the funding of policing. When you have child exploitation centers in the police, it’s not a burden. But when NGOs get involved, the police tend to feel like they have to clean up other organizations’ messes. They also can be concerned when other organizations press against what they see as their borders.
Mitali: As I write a report on “child safety,” I’m trying to find links to people who involve young people in design processes. Nathan refers to Roger Hart’s work on Children’s Participation. Mary Gray refers to Hasinoff’s book Sexting Panic.
Readers with further ideas and suggestions can reach Mitali on Twitter at @mitalithakor.
- The Quantified Self
- Newsfeed: Created by you?
- Holding Crowds Accountable To The Public
- EVE Online and World of Darkness
Today at the Berkman Center, our summer PhD Interns gave a series of short talks describing our research and asking for feedback from the Berkman community. This liveblog summarizes the talks and the Q&A (special thanks to Willow Brugh for collaborating on this post).
Mary Gray, senior researcher at Microsoft Research, opened up the conversation by sharing more about the PhD internship. “We need folks who can do bridge work, who can work between university and industry settings.” Each student’s project takes a less common tack; most take a social-critical approach. Our group is particularly focused on showing the value of methodologies that are less familiar in industry settings. It’s a twelve-week program, and it doesn’t always happen in the summer. “We’re always interested in people who want to take a more critical/qualitative approach. We have labs all over the world, and each lab accepts up to 100 PhD students to do this kind of work,” Mary says.
Microsoft Research is (sadly) unique in that everything a student does is open for public consumption, says Mary. PhD students are encouraged to do work that feeds academic conversations while also potentially connecting with product groups that could benefit from that insight.
Quantified Self: The Hidden Costs of Knowledge
What are the privacy ramifications of our voracious appetite for data, what are the challenges of interpreting it, and how might data be employed to widen inequality? Ifeoma Ajunwa is a 5th year PhD candidate in Sociology at Columbia University. Recurring themes in her research include inequality, data discrimination and emerging bioethics debates arising from the exploitation of Big Data.
“Almost everything we do generates data,” Ifeoma quotes Gary Wolf’s WIRED Magazine article on the quantified self. And yet this kind of data collection can be a form of surveillance; companies can also often crawl this data from the Internet and use it to feed algorithms that influence our lives. Against this backdrop, people are also collecting data about themselves through the Quantified Self movement– data that could also be captured by these companies and used for purposes beyond our consent.
How can our data be used against us? Kate Crawford noted in a recent Atlantic article that this data has been used in courtrooms. Ifeoma also expresses worries that this data could be used against people as companies use it to limit their own risk. The “quantified self” has a dual meaning. On one hand, it refers to the self knowledge that comes from that data. On the other, this idea could turn against people as institutions set policies based on that data that widen inequality.
In her summer research with Kate Crawford at MSR, Ifeoma is looking at the quantification of work. Unlike Taylorism, where the focus was on breaking down the job task itself, the focus now is on “the individual worker’s body” and “inducing the worker to master their own body for the benefit of the corporation.” In this “surveillance-innovation complex,” companies try to evade regulation by seeking protections for innovation. They’re looking specifically at workplace wellness programs that cover health, diet, and exercise. These programs track weight, spending habits, etc. Ifeoma is looking at what companies track and how the interpretation of this data can impact the workers it’s generated from.
She concludes by asking us, “How can we make technology work for us, rather than against us? Could we harness large and small data without it increasing divides and discrimination?”
News Feed: Created By You?
“How do people enact privacy?” is the question Stacy Blasiola usually asks in her research. When you’re posting something online while at a bar, are you thinking about who sees it? This focus misses out on the role that platforms play in this work, a role she’s examining at MSR.
Stacy Blasiola is a PhD candidate at the University of Illinois at Chicago and a National Science Foundation IGERT Fellow in Electronic Security and Privacy.
This summer, Stacy will be looking at the Facebook NewsFeed algorithm. She talks about the Facebook Tips page, where Facebook provides information on how to find out who your friends are and how the NewsFeed works. Stacy shows us several videos that they’ve posted under “NewsFeed: created by you”. These videos were promoted by Facebook to its users, and they received millions of video views.
Tim: “I made my News Feed about wellness, nutrition and living my best.” Create a News Feed that inspires you.
Posted by Facebook Tips on Tuesday, December 2, 2014
Stacy has been looking at the relationship between the videos and the comments… “Surrounding myself with… knowledge and expertise. I want to know what you know.” “I look forward to seeing my best self every day.”
According to Facebook, Tim is solely responsible for what he sees in his feed. Stacy has been looking at the discourse used by users and platforms to ask, “how do the platforms matter to the users?” When users commented on these videos, Facebook often posted official comments.
One user said: “This leads me to believe I have control over my own feed. I don’t. FB is constantly making things disappear and rearranging the timeline.”
The company’s response changes depending on the type of question asked. For example, “Why do I keep getting old posts? Well, people are posting a lot on it, so it resurfaces”… Facebook uses linguistic gymnastics to avoid saying “we’re doing this.”
Stacy is at the very beginning stages of this project, and hopes to carry out the following kinds of analysis:
- How do users discuss the news feed algorithm?
- How does Facebook position the news feed to these users? Especially, where do they place responsibility?
- How do users talk about the news feed to each other?
What Does It Mean to Hold Crowds Accountable To The Public?
Nathan Matias is a PhD Candidate at the MIT Center for Civic Media/MIT Media Lab, and a Berkman fellow, where he designs and researches civic technologies for cooperation and expression.
I was onstage at this point, but here’s a basic summary of the talk. After posing the question “How do we hold crowds accountable to the public?” I described common mechanisms that we imagine as forms of accountability: pressure campaigns, boycotts, elections, legislation, etc. I then described three cases where these mechanisms seemed unable to address forms of collective power we see online:
In the case of Peer Production, people sometimes petition Jimmy Wales, somehow believing that he has the power to change things. Other times, op-ed writers make public appeals to “Wikipedia” or “Wikipedians” to address some systematic problem. I described my work with Sophie Diehl on Passing On, a system that uses infographics to appeal to public disappointment and then channels that disappointment into productive change on Wikipedia (more in this article).
In the case of Social Networks, we sometimes criticize companies for things that are also partly attributable to who we accept as friends or what we personally choose. This debate is especially strong in discussions over information diversity. I shared an example from Facebook’s recent study on exposure to diverse information, outlining their attempt to differentiate between available media sources, friend recommendations, personal choices, and the NewsFeed algorithm. I also described my work with Sarah Szalavitz on FollowBias, a system for measuring and addressing these more social influences.
Finally, I described work on distributed decisionmaking, such as the decisions of digital laborers who do the work of content moderation online. I described my recent collaboration on a research project describing the process of reviewing, reporting, and responding to harassment online. I also described upcoming work to study the work of moderators on Reddit.
How Do Gaming Communities Make Sense of Their Personal Place and Role in Massive Worlds?
What is it like to be an EVE Online player? Aleena opens up by showing us videos of massive space battles in this massively multiplayer online game.
Aleena Chia is a Ph.D. Candidate in Communication and Culture at Indiana University currently interning at Microsoft Research, where she investigates the affective politics and moral economics of participatory culture, in the context of digital and live-action game worlds.
Aleena’s research on consumer culture tends to focus on gaming activities. How do gamers make sense of their role in these massive worlds? Her argument is that users make sense of their experience through the alignment of spectacle, alcohol, and experience at brand fests and the gameplay experience itself. They feel that they’re truly a part of something larger than themselves.
How do they make sense of their hours spent on this? They spend hours and hours each week building up empires. This experience is made sensible through reward and reputation systems, sometimes designed by the companies, and sometimes by the communities themselves. How do players make sense of the time they’ve invested into their identities as gamers in Eve but also beyond? They make sense of this through conversations about work-life balance, as well as the recognition by others that their work has cultural, economic, and social value.
At the heart of this are **compensatory drives**: using things to add up and even out, to get what’s coming to them. (Re)compense is connected to an idea of balance, a moral equilibrium. These “compensatory forces” give people a connection to the intangible world, a sense of fairness and justice, and a sense of aesthetic, economic, and social legitimacy.
EVE Online is a hypercapitalist world with no governments — warfare, murder, and theft are sanctioned if you can get away with it. But there is also a democratically elected player council, consultants to the game company, who talk to developers and the company. The savage world is managed through civilized processes.
Can player representatives effectively consult with the company? Players have very micro concerns, while developers often have macro-level concerns for all the players. Within these systems, there are some mechanisms of accountability — if they don’t do well, they won’t get elected. Players often complain to them on forums, email, and at meet-ups. But they also don’t have much power.
To understand this, Aleena will be looking at minutes from meetings between the council members and the company, as well as the council members and the player base. She’ll also be looking at town hall meeting logs, election campaign materials, and responses. She’ll be asking: how do they see their roles in relationship to each other? She’ll also look at how players learn to be council members over terms of office by examining meeting minutes. Finally, Aleena will be mapping feedback channels, mechanisms, directions, and ruptures, both formal and informal. Feedback doesn’t just run up the chain from players to developers through the consultants; it also runs down. Consultants have a job (either implicit or explicit) to advocate for the company and “spread goodwill to the masses.”
If the election of player councils is one example of a democratic process between audiences and brands (perhaps related to reality TV shows with audience feedback; now we have tribunals), is this market populism (neoliberalism at work, a replacement of authentic democratic engagement)? Might it instead be consumer co-creation (customer relations commoditized into a pleasurable, branded experience; not just about making the experience better, but about your experience as a consumer)? Lastly, designers often say that users don’t know what they want, which discounts popular will.
Finally Aleena is asking, “how are these democratic mechanisms changing the means and meanings of consumption?”
Questions, Answers, Comments
Ethan: How do crowds develop, or how are they simply different from users or patients? Wikipedia has a crowd, but how do you distinguish it from other groups?
- Nathan: You’ve likely thought of this in your dispute resolution research. We might think of individuals or institutions. “Crowd” is a placeholder for something where we don’t quite know where to apply the lever to change things: the cumulative effect of the social choices and friendships we have in a network. Or it might be more identifiable.
Rebecca: What is your model of genuine civic engagement which neoliberalism has supplanted?
- Aleena: My utopia is a participatory democracy. But my own focus is not official political systems: how can the media open up space for the public to participate? Engagement with the media via certain mechanisms creates real decision-making power over the system and the content.
Tarleton: How do people think about their role in relation to community? But the world is meant to be something; EVE is clear about this, as is Wikipedia, and Facebook is sort of getting there. These are not just governance problems: is the narrative claim of the institution masking or distorting the style of engagement?
- Stacy: My project stacked against Nathan’s shows two different aspects of the same problem. “The algorithm” is treated as a single thing to be tweaked to fix everything, but that is not something I know. Transparency is seen as severely lacking. How does Facebook present itself in order to shape that reality? “We” in publicity, “you” in user interactions; it depends on the audiences they’re speaking to.
- Nathan: I draw inspiration from Hochschild’s research on flight attendants. There was a clear corporate brand identity concern influencing how flight attendants were trained to respond to tough situations. Training wasn’t just about what to do, but how to be. Like Hochschild, I’m also looking at the process of learning to be a worker. There are job boards on Reddit where people apply and chat. Reddit has basic rules overall, but each /r also has special rules. I’m looking at how moderators look at their roles in their /r as well as at Reddit.com.
- Aleena: The work between corporation and users is classic customer feedback: filter it, see what makes sense, incorporate it. But they also want to persuade users that everyone is on the same page, no “us vs. them.” It’s not just about the bottom line; they want there to be engagement. EVE doesn’t just want you to be happy. They want you to strive and have troubles.
- Ifeoma: Is governance of wellness programs actually voluntary? People aren’t voting about the shape the wellness program will take, only that it will exist. It’s about shifting the responsibility onto the individual worker, with no real discussion of whether the work infrastructure could be shaped to achieve the same thing. The corporation abdicates its responsibility for a healthier worker, putting it on the worker. We’re worried about what that means, given the structural constraints inhibiting workers. What will the new workplace discrimination be? It’s perfectly okay for your employer to fire you if you’re a smoker outside the workplace. There are levels of coercion, such as up to 30% or 50% of the program cost covered for smoking cessation.
Mary Gray: Across these talks, there has been an implicit appeal to a social need or desire that works outside of market demands: wanting to keep players playing, keep the newsfeed functioning in a certain way, etc. Market demands mean the corporations do something external to what the players/users want. Why are we seeking corporate good? What sends us to the corporation to fix these things, seeing it as the path of recourse?
- Stacy: There are inherent expectations from users that Facebook be “truthful.” Christian Sandvig did work showing that users feel confused and lied to: “I thought that person just didn’t like me.” Users rely on FB to maintain social connections. When those assumptions aren’t met, there is anger, with comments of “why are you doing this to me?” Other people say “I have to be friends with someone because of business or whatever, but I don’t want to see their posts.” Gatekeeping is not new, but we don’t know how they’re doing this process.
- Aleena: Players look to Eve for social justice because the company thinks it makes good sense. There’s a difference between taking votes at face value and adapting to mass player will. They do have to come up with something new, even if it’s not the thing that was asked for.
- Nathan: Wikipedia and Reddit are sorts of counterexamples to Eve or Facebook in that participants and active contributors may feel that it is a public good. Wikimedia, as a nonprofit, is funded by donations, has elected board members, and can be thought of as accountable to its participants. But when people who are not contributors are affected by its power, they may take traditional routes (such as petitioning Jimmy Wales). Reddit is more complex. When Reddit started banning subreddits for specific behavior it saw, instead of following its traditional hands-off model, this question of interests started to crack. Advertising and Reddit Gold are perhaps competing income models: when a fundraising goal is met, they buy a new server. But Reddit the company is also starting to take more top-down responsibility for what its users do, which makes it look more like other corporations.
- Ifeoma: Relinquishing the rights of social good to corporations has to do with complex problems and simple solutions. Wellness programs try to address American lifestyles being unhealthy (sitting, eating, smoking, not working out), which is complex both in lifestyle and in infrastructure. Trying to fix this with something as simple as a wellness program won’t have the intended results, and it has unintended results (discrimination especially). Laws don’t cover obesity or smoking, which are stigmatized, and these programs encroach on the rights of the worker.
Nick Seaver: What is happening to audiences, citizens, workforces, etc. as different publics? Are you helping to define what each of those things means? My own work has been undermined because I didn’t define that.
- Aleena: What is the value of comparing it to things which have come in the past? Something IS different in connected space. Not just how democracy and society are changing, but how are the meanings of consumption changing? Video games even… you can do so many more things — you’re supposed to buy things and make friends and etc. “What kind of player are you?” Identities are tied to this. How does this jigsaw piece connect to the rest of it?
- Ifeoma: Historical context is really important, especially in defining what a worker is. A defining thing is the technology being available. Workplace surveillance isn’t new. What is new are the advances in technology, letting us surveil and track the worker in ways that weren’t previously available. This collapses the line between work and non-work. A woman was fired (and sued) for deleting an app on her phone that couldn’t be turned off and was tracking her (how fast she was driving, where she was over the weekend, etc.). These are unforeseen issues; we need to redefine what it means to be a worker.
- Nathan: Kevin Driscoll and I have this debate about BBSs. We talk about them with a set goal and solid definitions. People do the same for Twitter and Reddit, etc. And yet, in moderation work, there are common experiences, expectations, and tools. These moderators have to figure out how to work at the intersection with what a company with <100 employees is defining as the space of their work. Postigo explored this somewhat in his research on AOL community leaders: the more AOL did to control, track, and standardize community leaders’ work, the more those folks thought of themselves as a collective and as unpaid workers. So I’m still looking for the language and theories to describe this.
- Stacy: I’m interested in the idea of When Old Technologies Were New: how people interacted with tech when it was new. The more we do with research, the more we realize nothing is new. In that sense, my dissertation isn’t just about Facebook, but about algorithms at large. Society from a larger view: how do we understand the mediations happening?
What does it take to keep online communities going? With over 550,000 public subreddits, many of which are active, the communities on the site rely on ongoing effort by a large number of volunteer moderators. In my research, I’ve made the case that caring for the communities we’re part of is an important kind of digital citizenship. For that reason, I’m excited to learn more from redditors about how they see the work of moderation, why they do it, and what is/isn’t their job.
- About this Research Project
- Ethics: Who’s This For, What am I Recording, and What am I Sharing?
- Why Do Research With Redditors?
- How Am I Going About This Research?
This spring, I’ve been reading extensively about digital labor and citizenship online, including the story of over 30,000 AOL community leaders who facilitated online communities in the 90s. With Reddit pushing for profitability and promising new policies on online harassment, I thought that potential tensions arising this summer might offer an important lens into the work of moderators, at a time when listening to mods and recognizing their work would be especially important. “The summer is likely to include substantial discussion and introspection on the nature and boundaries of moderation work on Reddit,” I wrote in my proposal mid-May.
Although I expected something, I didn’t expect that Reddit would ban a set of subreddits and mods in their attempt to carry out their new policies, or that some redditors would vigorously oppose this move. (Update July 6: I also didn’t anticipate that reddit moderators would take their subs private to advocate for changes in how they are treated). These controversies have convinced me that this research could be especially valuable right now. Press coverage is likely to focus primarily on the controversy, while I can carry out a summer-long project, in conversation with a wider sample of redditors than just those associated with this controversy.
In this post (which I will be sharing with redditors when I ask permission to speak with them) I outline my research to understand how Reddit’s moderators see and define what they do. This blog post includes details of the research, the promises I make to redditors, and the wider reasons for this project.
About This Research Project
I’m a PhD student at the MIT Media Lab / Center for Civic Media and a fellow at Harvard’s Berkman Center for Internet and Society, where I research civic life online. As a PhD intern at Microsoft Research, I get to be supported this summer by amazing researchers including Tarleton Gillespie, Mary Gray, and Nancy Baym, who are advising this project. To learn more about my work you can read my MIT blog or check out my portfolio.
- Hanging out in moderator subreddits like needamod, modhelp, and others to learn more about how mods find opportunities, learn the ropes, and discuss their work
- Posting questions to some subreddits, after seeking permission from the mods, asking questions or getting feedback on my working understandings
- Collecting basic summary statistics across Reddit, from public information, to understand, on average, how many mods there are (like the above chart) and what kinds of rules different subreddits have.
- (potentially) interviewing reddit mods
- (potentially) trying my hand as a moderator
Ethics: Who’s This For, What am I Recording, and What am I Sharing?
My summer project is being done at Microsoft Research’s Social Media Collective, where I am a PhD intern. At MSR, I have the intellectual freedom to ask questions that are widely important to society and scholarship. I also expect to make my research widely accessible. Microsoft open-sourced my code when I was an intern in 2013, and Microsoft Research has an open access policy for its research.
Although I am a fellow at the DERP Institute and can, in theory, start a conversation with Reddit employees, I have not discussed this project with Reddit at all, have never received compensation from Reddit, nor am I working for them in any way. While it is possible that I may be asked in the future to share my results with the company, I will not share any of my notes or data with Reddit beyond the findings that I publish in research papers, public talks, blog posts, or open source materials.
This isn’t the first time I’ve done research about the work of moderation from outside a powerful company. Last month, my colleagues and I published a report on Reporting, Reviewing, and Responding to Harassment on Twitter, including a section on the work of moderating alleged harassment. In that study, we treated everyone in our study with respect, including alleged harassers. Our research team did not share data with the company, we were writing independently of Twitter, and we had full editorial control over our report, even from the commissioning organization WAM!. Likewise, in my 2013 summer research at Microsoft on local community blogging, we either summarized or anonymized/modified all quotes and photos before publishing our results.
- Anyone can opt out of this research at any time by contacting me at /user/natematias. If you opt out, I will avoid quoting or mentioning you in any way in the published results.
- By default, I will anonymize any information I collect before publishing
- If a user requests that I use their username to give them appropriate credit for their work, I’ll weigh the risk/benefits and try to do right by the user
- I will keep all my notes and data secured, with secure backups that I access through encrypted connections.
Why Do Research with Redditors?
Reddit is one of the few major public platforms on the English-language web that allows/expects its users to establish and maintain their own communities, without thousands of paid content moderators and algorithms behind the scenes deciding what to keep or delete. In contrast, the Huffington Post pre-moderates 450,000 comments per day, paying between $0.005 and $0.25 for every comment that comes in. Yet Reddit mods do so much more than just delete spam. They do a huge amount of important work to create new communities, recruit participants, post content, manage subreddit settings & style, recruit new moderators, set rules for their subreddit, and monitor/manage submissions and comments. Moderators also tend to play a large role in debating and establishing wider community norms like Reddiquette.
Last week, I used the Reddit API to collect data on the number of moderators who keep subreddits’ conversations going. A random sample of 100,615 subreddits (roughly 1/6 of all public subreddits) had 91,563 user accounts as moderators. While not all of these subreddits are active, each of them represents a moment of interest in trying on the role. Among the 46% of subreddits with more than one subscriber, 30% have two or more moderators.
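For readers curious about what these summary statistics involve, here is a minimal sketch in Python. The record format and field order are hypothetical stand-ins for data gathered via the Reddit API, not Reddit’s actual response format, and the sample values are invented for illustration:

```python
# Sketch: computing moderator summary statistics from already-collected
# subreddit records. Each record is a hypothetical tuple of
# (subreddit_name, subscriber_count, moderator_count).

def summarize(records):
    """Return the share of subreddits with >1 subscriber, and, among
    those, the share with two or more moderators."""
    multi_subscriber = [r for r in records if r[1] > 1]
    multi_mod = [r for r in multi_subscriber if r[2] >= 2]
    return {
        "sampled": len(records),
        "pct_multi_subscriber": 100 * len(multi_subscriber) / len(records),
        "pct_multi_mod_among_those": 100 * len(multi_mod) / len(multi_subscriber),
    }

# Invented sample data, standing in for API-collected records.
sample = [
    ("r_example_a", 1200, 4),  # active community, several mods
    ("r_example_b", 2, 1),     # small community, single mod
    ("r_example_c", 1, 1),     # creator-only subreddit
    ("r_example_d", 350, 2),
]
stats = summarize(sample)
```

The two percentages in `stats` correspond to the 46% and 30% figures reported above, computed here over a toy sample rather than the real 100,615-subreddit dataset.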
Communities of redditors and mods shaped some of my earliest impressions of the site six years ago, when a work colleague invited me to join Reddit London meetups, telling me stories about their weekend and after-work gatherings. It was clear that participation meant more to many redditors than just links and comments. Later on, when I spent two years facilitating @1book140, The Atlantic’s Twitter book club, with around 140,000 subscribers, I came to learn how challenging and rewarding it can be to support a large discussion group online.
How Am I Going About This Research?
Computer scientists, economists, and designers often want to ask if offering the right upvoting system or the right set of badges will filter content effectively or motivate people to contribute the greatest amount of appropriate effort to a web platform. This focus on productivity often interprets the activity of users in the language of company priorities rather than community ones. Stuart Geiger and I discussed this idea of productivity last fall at HCOMP Citizen-X, arguing that we need to understand users’ values beyond just the “productivity” of a group.
Although I often explore questions through design and data analysis, I’m taking a different approach this summer, to better understand how redditors see their own participation. My first semester at MIT taught me how important it can be to participate in and observe a community rather than just measure it. Rather than spend the whole summer data-mining the Reddit API, I’m participating in subreddits and speaking to redditors. In “The Logic and Aims of Qualitative Research,” a chapter in a larger collection on communications research methods, Christians and Carey say that when researchers ask questions about human life, “we are examining a creative process whereby people produce and maintain forms of life and society and systems of meaning and value.” They argue that qualitative research sets out to “better understand the meanings that people use to guide their activities” (358-9).
As a student in MIT’s Technologies for Creative Learning class, I was curious about how young people learning to code thought about “bugs” in the stories, art, and games they made with Scratch. In a corporate environment, where there’s a goal for everyone’s work, it’s possible to define software errors. But does the same language apply to a ten-year-old child who’s creating a story after school? Most scholarly discussion of “bugs” applied this corporate term to young people, defining strict goals for students and measuring “errors” when they diverged from pre-defined projects. When I visited schools, observed student projects, and talked to students, I saw that diverging from the teacher’s plan could be a highly creative act. Far from an error, a “glitch” could prompt new creative directions, and an “unexpected surprise” often opened learners to new understandings about code.
Code and artwork from one of my first projects on Scratch.
If I had relied entirely on the definitions and data coming from teachers or the Scratch platform, I might have been able to test statistical hypotheses about “bugs,” and I might even have developed ways to limit the number of errors per student. I would never have noticed how important these unexpected surprises were to young people’s creativity, and at worst, I might even have reduced students’ chances of experiencing them. By participating with students and spending time in their learning environment, I was able to find new language, like “glitch,” that might move conversations beyond “errors” or “bugs.”
For my Reddit study this summer, I want to hear directly from mods about how they see their work: questions that go well beyond what can be measured. Many thanks in advance to those who welcome me into your subreddits this summer and take time to talk with me.
Update July 6, 2015. I ran into a Reddit employee at a conference last week and sent them this link, so the company is now aware of this project. I am still not working directly with Reddit in any way.
A very similar version of this blog post originally appeared in Culture Digitally on June 5, 2015.
Words Matter

As I write this in June 2015, a United Nations committee in Bonn is occupied with the massive task of editing a document summarizing global climate change. The effort to reduce 90 pages into a short(er), sensible, and readable set of facts and positions is not just a matter of editing but a battle among thousands of stakeholders and political interests, dozens of languages, and competing ideas about what is real and therefore, what should or should not be done in response to this reality.
I think about this as I complete a visiting fellowship at Microsoft Research, where over a thousand researchers worldwide study complex world problems and focus on advancing the state of the art in computing. In such research environments, the distance between one’s work and the design of the future can feel quite small. Here, I feel like our everyday conversations and playful interactions on whiteboards have the potential to actually impact what counts as the cutting edge and what might get designed at some future point.
But in less overtly “future making” contexts, our everyday talk still matters, in that words construct meanings, which over time and usage become taken-for-granted ways of thinking about how the world works. These habits of thought, writ large, shape and delimit social action, organizations, and institutional structures.
In an era of web 2.0, networked sociality, constant connectivity, smart devices, and the internet of things (IoT), how does everyday talk shape our relationship to technology, or our relationships to each other? If the theory of social construction is really a thing, are we constructing the world we really want? Who gets to decide the shape of our future? More importantly, how does everyday talk construct, feed, or resist larger discourses?
Rhetoric as world-making
From a discourse-centered perspective, rhetoric is not a label for politically loaded or bombastic communication practices, but rather, a consideration of how persuasion works. Reaching back to the most classic notions of rhetoric from the ancient Greek philosopher Aristotle, persuasion involves a mix of logical, emotional, and ethical appeals, which have no necessary connection to anything that might be sensible, desirable, or good to anyone, much less a majority. Persuasion works whether or not we pay attention. Rhetoric can be a product of deliberation or effort, but it can also function without either.
When we represent the techno-human or socio-technical relation through words and images, these representations function rhetorically. World making is inherently discursive at some level. And if making is about changing, this process inevitably involves some effort to influence how people describe, define, respond to, or interact with/in actual contexts of lived experience.
I have three sisters, each involved as I am in world-making, if such a descriptive phrase can be applied to the everyday acts of inquiry that prompt change in socio-technical contexts. Cathy is an organic gardener who spends considerable time improving techniques for increasing her yield each year. Louise is a project manager who designs new employee orientation programs for a large IT company. Julie is a biochemist who studies fish in high elevation waterways.
Perhaps they would not describe themselves as researchers, designers, or even makers. They’re busy carrying out their job or avocation. But if I think about what they’re doing from the perspective of world-making, they are all three, plus more. They are researchers, analyzing current phenomena. They are designers, building and testing prototypes for altering future behaviors. They are activists, putting time and energy into making changes that will influence future practices.
Their work is alternately physical and cognitive, applied for distinct purposes, targeted to very different types of stakeholders. As they go about their everyday work and lives, they are engaged in larger conversations about what matters, what is real, or what should be changed.
Everyday talk is powerful not just because it has remarkable potential to persuade others to think and act differently, but also because it operates in such unremarkable ways. Most of us don’t recognize that we’re shaping social structures when we go about the business of everyday life. Sure, a single person’s actions can become globally notable, but most of the time, any small action such as a butterfly flapping its wings in Michigan is difficult to link to a tsunami halfway around the world. But whether or not direct causality can be identified, there is a tipping point where individual choices become generalized categories. Where a playful word choice becomes a standard term in the OED. Where habitual ways of talking become structured ways of thinking.
The power of discourse: Two examples
I offer two examples that illustrate the power of discourse to shape how we think about social media, our relationship to data, and our role in the larger political economies of internet-related activities. These cases are selected because they cut across different domains of digital technological design and development. I develop these cases in more depth here and here.
‘Sharing’ versus ‘surfing’
The case of ‘sharing’ illustrates how a term for describing our use of technology (using, surfing, or sharing) can influence the way we think about the relationship between humans and their data, or the rights and responsibilities of various stakeholders involved in these activities. In this case, regulatory and policy frameworks have shifted the burden of responsibility from governmental or corporate entities to individuals. This may not be directly caused by the rise in the use of the term ‘sharing’ as the primary description of what happens in social media contexts, but this term certainly reinforces a particular framework that defines what happens online. When this term is adopted on a broad scale and taken for granted, it functions invisibly, at deep structures of meaning. It can seem natural to believe that when we decide to share information, we should accept responsibility for our action of sharing it in the first place.
It is easy to accept the burden for protecting our own privacy when we accept the idea that we are ‘sharing’ rather than doing something else. The following comment seems sensible within this structure of meaning: “If you didn’t want your information to be public, you shouldn’t have shared it in the first place.” This explanation is naturalized, but is not the only way of seeing and describing this event. We could alternately say we place our personal information online like we might place our wallet on the table. When someone else steals it, we’d likely accuse the thief of wrongdoing rather than the innocent victim who trusted that their personal belongings would be safe.
A still different frame might characterize personal information as an extension of the body or even a body part, rather than an object or possession. Within this definition, disconnecting information from the person would be tantamount to cutting off an arm. As with the definition of the wallet above, accountability for the action would likely be placed on the shoulders of the ‘attacker’ rather than the individual who lost a finger or ear.
‘Data’ and quantification of human experience
With the rise of big data, we have entered (or some would say returned to) an era of quantification. Here, the trend is to describe and conceptualize all human activity as data—discrete units of information that can be collected and analyzed. Such discourse collapses and reduces human experience. Dreams are equalized with body weight; personality is something that can be categorized with a similar statistical clarity as diabetes.
The trouble with using data as the baseline unit of information is that it presents an imaginary of experience that is both impoverished and oversimplified. This conceptualization is coincidental, of course, in that it coincides with the focus on computation as the preferred mode of analysis, which is predicated on the ability to collect massive quantities of digital information from multiple sources, which can only be measured through certain tools.
“Data” is a word choice, not an inevitable nomenclature. This choice has consequences from the micro to the macro, from the cultural to the ontological. This is the case because we’ve transformed life into arbitrarily defined pieces, which replace the flow of lived experience with information bits. Computational analytics makes calculations based on these information bits. This matters, in that such datafication focuses attention on that which exists as data and ignores what is outside this configuration. Indeed, data has become a frame for that which is beyond argument because it always exists, no matter how it might be interpreted (a point well developed by many, including Daniel Rosenberg in his essay “Data before the fact”).
We can see a possible outcome of such framing in the emerging science and practice of “predictive policing.” This rapidly growing strategy in large metropolitan cities is a powerful example of how computation of tiny variables in huge datasets can link individuals to illegal behaviors. The example grows somewhat terrifying when we realize these algorithms are used to predict what is likely to occur, rather than to simply calculate what has occurred. Such predictions are based on data compiled from local and national databases, focusing attention on only those elements of human behavior that have been captured in these data sets (for more on this, see the work of Sarah Brayne).
We could alternately conceptualize human experience as a river that we can only step in once, because it continually changes as it flows through time-space. In such a Heraclitian characterization, we might then focus more attention on the larger shape and ecology of the river rather than trying to capture the specificities of the moment when we stepped into it.
Likewise, describing behavior in terms of the chemical processes in the brain, or in terms of the encompassing political situation within which it occurs will focus our attention on different aspects of an individual’s behavior or the larger situation to which or within which this behavior responds. Each alternative discourse provokes different ways of seeing and making sense of a situation.
When we stop to think about it, we know these symbolic interactions matter. Gareth Morgan’s classic work about metaphors of organization emphasizes how the frames we use will generate distinctive perspectives and, more importantly, distinctive structures for organizing social and workplace activities. We might reverse engineer these structures to find a clash of rival symbols, only some of which survive to define the moment and create future history. Rhetorical theorist Kenneth Burke would talk about these symbolic frames as myths. In a 1935 speech to the American Writers’ Congress he notes that:
“myth” is the social tool for welding the sense of interrelationship by which [we] can work together for common social ends. In this sense, a myth that works well is as real as food, tools, and shelter are.
These myths do not just function ideologically in the present tense. As they are embedded in our everyday ways of thinking, they can become naturalized principles upon which we base models, prototypes, designs, and interfaces.
Designing better discourses
How might we design discourse to try to intervene in the shape of our future worlds? Of course, we can address this question as critical and engaged citizens. We are all researchers and designers involved in the everyday processes of world-making. Each of us, in our own way, is produsing the ethics that will shape our future.
This is a critical question for interaction and platform designers, software developers, and data scientists. In our academic endeavors, the impact of our efforts may or may not seem consequential on any grand scale. The outcome of our actions may have nothing to do with what we thought or desired from the outset. Surely, the butterfly neither intends nor desires to cause a tsunami.
Still, it’s worth thinking about. What impact do we have on the larger world? And should we be paying closer attention to how we’re ‘world-making’ as we engage in the mundane, the banal, the playful? When we consider the long future impact of our knowledge producing practices, or the way that technological experimentation is actualized, the answer is an obvious yes. As Laura Watts notes in her work on future archeology:
futures are made and fixed in mundane social and material practice: in timetables, in corporate roadmaps, in designers’ drawings, in standards, in advertising, in conversations, in hope and despair, in imaginaries made flesh.
It is one step to notice these social construction processes. The challenge then shifts to one of considering how we might intervene in our own and others’ processes, anticipate future causality, turn a tide that is not yet apparent, and try to impact what we might become.
Acknowledgments and references
Notably, the position I articulate here is not new or unique, but another variation on a long running theme of critical scholarship, which is well represented by members of the Social Media Collective. I am also indebted to a long list of feminist and critical scholarship. This position statement is based on my recent interests and concerns about social media platform design, the role of self-learning algorithmic logics in digital culture infrastructures, and the ethical gaps emerging from rapid technological development. It derives from my previous work in digital identity, ethnographic inquiry of user interfaces and user perceptions, and recent work training participants to use auto-ethnographic and phenomenology techniques to build reflexive critiques of their lived experience in digital culture. There are, truly, too many sources and references to list here, but as a short list of what I directly mentioned:
Kenneth Burke. 1935. Revolutionary symbolism in America. Speech to the American Writers’ Congress, February 1935. Reprinted in The Legacy of Kenneth Burke. Herbert W. Simons and Trevor Melia (eds). Madison: University of Wisconsin Press, 1989. Retrieved 2 June 2015 from: http://parlormultimedia.com/burke/sites/default/files/Burke-Revolutionary.pdf
Annette N. Markham. Forthcoming. From using to sharing: A story of shifting fault lines in privacy and data protection narratives. In Digital Ethics (2nd ed). Bastiaan Vanacker, Don Heider (eds). Peter Lang Press, New York. Final draft available in PDF here.
Annette N. Markham. 2014. Undermining data: A critical examination of a core term in scientific inquiry. First Monday, 18(10).
Gareth Morgan. 1986. Images of Organization. Sage Publications, Thousand Oaks, CA.
Laura Watts. 2015. Future archeology: Re-animating innovation in the mobile telecoms industry. In Theories of the mobile internet: Materialities and imaginaries. Andrew Herman, Jan Hadlaw, Thom Swiss (eds). Routledge Press.