“Socially Mediated Publicness”: an open-access issue of JOBEM

I love being a scholar, but one thing that really depresses me about research is that so much of what scholars produce is rendered inaccessible to so many people who might find it valuable, inspiring, or thought-provoking. This is at the root of what drives my commitment to open access. When Zizi Papacharissi asked Nancy Baym and me if we’d be willing to guest edit the Journal of Broadcasting & Electronic Media (JOBEM), we agreed under one condition: the issue had to be open-access (OA). Much to our surprise and delight, Taylor and Francis agreed to “test” that strange and peculiar OA phenomenon by allowing us to make this issue OA.

Nancy and I decided to organize the special issue around “socially mediated publicness,” both because we find that topic to be of great interest and because we felt there was something fun about talking about publicness in truly public form. We weren’t sure what the response to our call would be, but we were overwhelmed with phenomenal submissions and had to reject many interesting articles.

But we are completely delighted to publish a collection of articles that we think are timely, interesting, insightful, and downright awesome. If you would like to get a sense of the arguments made in these articles, make sure to check out our introduction. The seven pieces in this guest-edited issue of JOBEM are:

We hope that you’ll find them fun to read and that you’ll share them with others who might enjoy them too!

The Ethics of Attention (Part 2): Bots for Civic Engagement

Cross-posted from the Department of Alchemy blog.

Last month, Ethan Zuckerman of the Center for Future Civic Media (MIT Media Lab) posted a great article on his blog called The Tweetbomb and the Ethics of Attention (constituting Part I to this story, so make sure you read it!), in which he calls into question the practice of flooding particular users with Twitter messages to support a particular cause. The emergence of tweetbombing as a practice on Twitter is very intriguing, particularly around the assumed norms of participation:

Ethan had written previously about “the fierce battle for attention” when covering journalistic integrity and Foxconn; the tweetbomb story, meanwhile, focuses on emergent practices around gaming attention in social media platforms (modern infrastructures for communication), away from the usual norms situated around attention in news-sharing ecosystems.

These practices relate to what Ethan calls “attention philanthropy”: if you can’t move content yourself, see if you can motivate an attention-privileged individual to do it for you.

The problem is that attention is an issue of scale: how do you get the attention of everyone? Social capital becomes a literal currency; we exchange the value embedded in networks in an attention economy. Traditional mass media technologies like radio and television rest on a number of assumptions: broadcast, primetime, the mass audience. But with the internet (as with cable and satellite radio), attention is splintered across a multitude of channels, streams, and feeds.

The issue with social media platforms versus mainstream media outlets is that, for the most part, many of the individuals who can bring attention to content are not protected by a media institution (for instance, Ethan discusses well-known BoingBoing blogger Xeni Jardin, who manages her own personal Twitter account). In the attention economy facilitated by social media, then, we potentially deal with vulnerable actors.

The Low Orbit Ion Cannon turns human consent into a social “botnet” for distributed denial-of-service attacks. What if you could use a similar automated program for political gain?

But what if you don’t have powerful people or institutions to help you garner attention? Or what if you can’t convince others to help you?

Become the Geppetto of the attention economy, and make some bots.

Tim Hwang’s Pacific Social project has shown that Twitter bots can influence Twitter audiences to an astounding degree. The project’s results show that bots not only interact successfully with human users but also help connect disparate groups of Twitter users.

This leads me to ask: Can you create bots for civic engagement?

How could a bot work in favor of civic engagement? Well, civic engagement has traditionally been measured according to two factors: 1) voting, and 2) social movements. But it’s increasingly evident, especially in today’s social-media-laden world, that information circulation also helps inform citizens about critical issues and educates them about how to make change within a democracy. We see platforms like Reddit able to spread information to audiences of millions (helping to generate success for campaigns like the anti-SOPA protests). While many complain about “slacktivism,” it’s undeniable that mass attention can generate results.

Bots have a useful power to move information across social networks by connecting human individuals to others who care about similar topics. What if you could use an automated process that optimizes online communities into stronger partisan networks by connecting those with similar affiliations who do not yet know each other? Or, perhaps, use bots to educate vast networks about particular issues? KONY 2012, for instance, utilized student social networks on Twitter as seed groups to help mobilize the spread of information about the campaign.
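To make the connective mechanism concrete, here is a minimal sketch of what such a bot’s core logic might look like. This is purely illustrative: the function name, data shapes, and threshold are my own assumptions, not drawn from Pacific Social or any real bot framework. Given each user’s topic affiliations (e.g., the hashtags they use) and the existing follow graph, it surfaces pairs of like-minded users who aren’t yet connected — the introductions a bot could then make.

```python
from itertools import combinations

def suggest_introductions(affiliations, follows, min_shared=2):
    """Suggest user pairs who share topics but aren't yet connected.

    affiliations: dict mapping user -> set of topics (e.g., hashtags)
    follows: set of (follower, followee) edges already present
    Returns (user_a, user_b, shared_topics) tuples, largest overlap first.
    """
    suggestions = []
    for a, b in combinations(sorted(affiliations), 2):
        shared = affiliations[a] & affiliations[b]
        connected = (a, b) in follows or (b, a) in follows
        if len(shared) >= min_shared and not connected:
            suggestions.append((a, b, shared))
    # Strongest topical overlap first: the most promising introductions
    suggestions.sort(key=lambda t: len(t[2]), reverse=True)
    return suggestions
```

A real deployment would, of course, pull affiliations from a platform API and act on the suggestions (a mention, a follow), but the ranking-by-shared-interest step is the heart of the “connect those who do not yet know each other” idea.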

But there’s also potential for the manipulation of information. While manipulating the masses directly is possible but complex, having an army of coordinated bots to do your bidding is much easier, especially when a peripheral effect of bot participation is that human users perceive important information to be spreading.

This morning, Andrés Monroy Hernández of Microsoft Research linked me to a timely project by Iván Santiesteban called AdiosBots.

AdiosBots tracks automated Twitter bots set up by the ruling Institutional Revolutionary Party in Mexico. According to Iván’s English project page, one of the party’s contenders in the upcoming elections on July 1st has been using bot-controlled fake Twitter accounts to “publish thousands of messages praising Enrique Peña Nieto and ridiculing his opponents. They also participate in popular hashtags and try to drown them out by publishing large amounts of spam.”

In other words, they are “used to affect interaction between actual users and to deceive.” And in total, Iván has found close to 5,000 of these bots.

In this instance, there is no need for attention philanthropy: the bots act as an automated social movement mimicking positive political affiliation while denouncing the opposition’s supporters. But it’s clear that vulnerability plays a huge role in attacks on political individuals and the spread of false information. There’s also the ethical question about what users do not know: is it a problem that individuals assume bots to be human and merely helpful rather than programmed to exploit and optimize human behavior?

Bots for civic engagement also call into question the ethics around social media norms. Should people assume interaction with automatons will occur? Or is this a question of media literacy, where users should be educated enough about the ecosystems they use to point out misinformation, or even find discrepancies between “organic” information and automated information (even when it’s used with beneficial motives)? What if the bots are so convincing that users can’t tell the difference?

Bots for civic engagement was an idea that almost led me to apply for an annual Knight Foundation grant. If you’re interested in building this idea into a tangible project, please email me.

Alex Leavitt is a PhD student at the Annenberg School for Communication at the University of Southern California. Read more about his research at http://alexleavitt.com or find him on Twitter at http://twitter.com/alexleavitt.

Teens Text More than Adults, But They’re Still Just Teens

danah and I have a new piece in the Daily Beast. Summary: the more things change, the more they stay the same.

In the last decade, we’ve studied how technology affects how teens socialize, how they present themselves, and how they think about issues like gender and privacy. While it’s true that teens incorporate social media into many facets of their lives, and that they face new pressures their parents didn’t—from cyber-bullying to fearmongering over “online predators”—the core elements of high-school life are fundamentally the same today as they were two decades ago: friends, relationships, grades, family, and the future.

Read the full piece here.

A lot of the research that we do involving teenagers seems obvious to teenagers themselves. “Duh.” “Why would anyone study that?” “Who cares?”

Unfortunately, teenagers aren’t the ones writing news stories about how Facebook is making us lonely, Facebook is full of creepers, or teens are pressured to reveal intimate details on Facebook (note: those last two studies were sponsored by a company that creates parental blocking and monitoring software). They aren’t the ones passing anti-bullying legislation, appearing on television to tell parents that teens study less and are more narcissistic than a generation ago, or implementing 3-strikes laws in public schools.

Our public-facing work aims to explain teenage practice in clear language that isn’t sensationalistic or fear-mongering. Obviously, not all scholarship lends itself to this type of writing. But given that social media is often discussed in utopian or dystopian terms in the press, research can provide a rational, sensible perspective that’s badly needed. Like, duh.

Reflections on Fear in a Networked Society

I’ve been trying to work through some ideas on how fear operates in a networked society. At Webstock in New Zealand, I gave a talk called “Culture of Fear + Attention Economy = ?!?!” Building on this, I gave a talk at SXSW called “The Power of Fear in Networked Publics.” While my thinking in this arena is still relatively nascent, I wanted to make available what I’ve thought through so far in the hopes that you have feedback and critique.


Internet Blackout: SOPA, Reddit, and Networked (Political) Publics

This post has been cross-posted from Henry Jenkins’ blog.

If you don’t have time to read this article in full, the easiest way to skim information about this topic is to visit http://americancensorship.org/.

In the past year, we’ve dealt with various novel political moments around the world that have been enabled or augmented with networked technology, from Anonymous’ global “hacktivist” incidents to the numerous protests in the Middle East, topped off of course with the vibrant grassroots protests of the Occupy movement. Over the last few months, we’ve also seen another interesting case study taking place in American politics: rampant opposition to the Stop Online Piracy Act, dubbed as “the most important bill in Congress you may have never heard of” by Chris Hayes of MSNBC.com.


Watch Chris Hayes’ interview for a good introduction to the debate around SOPA.

SOPA, a bill currently making its way through the House of Representatives (along with its sibling PIPA, the Protect IP Act, currently in the Senate), has faced weeks of protest from Internet companies and users alike. Why? Well, on Google Plus, Sergey Brin — cofounder of Google — likened the potential effects of SOPA to the Internet censorship practiced in China, Iran, Libya, and Tunisia. Basically, to protect against international copyright infringement, SOPA allows the US to combat websites (such as file lockers or foreign link aggregators) that illegally distribute or even link to American-made media by blocking access to them. Theoretically, the bill has dangerous implications for websites that rely on user-generated content, from YouTube to 4chan. Many have already written about the worries that SOPA and PIPA cause, such as Alex Howard’s excellent, in-depth piece over at O’Reilly Radar. For more information on the bills, visit OpenCongress’s webpages, where you can see summaries of the legislation, which companies support and oppose them, and round-ups of mainstream and blog coverage: SOPA + PIPA. The bills are one more step in a long line of anti-piracy legislation, such as 2010’s Combating Online Infringement and Counterfeits Act (COICA).

Within the first few weeks since SOPA was introduced, http://fightforthefuture.org/ introduced the hyperbolic http://freebieber.org/ to illustrate the fears ordinary Internet users should have in relation to the legislation. In essence, SOPA would radically undermine many of the fan practices that Henry and others have analyzed on this blog. Fight for the Future also released the following video (which was my first media exposure to SOPA):

PROTECT IP / SOPA Breaks The Internet from Fight for the Future on Vimeo.

However, for the most part, criticism — or even basic coverage — of SOPA remained an online phenomenon. While a few online articles have appeared on CNN and a couple of other networks, mainstream television coverage of the bills remains fairly nonexistent, reports MediaMatters, likely because the television networks largely support the bill. The Colbert Report featured a pair of short segments on SOPA in early December.

The Internet, though, largely worked around that problem.

In his book Two Bits: The Cultural Significance of Free Software, UCLA anthropologist Chris Kelty describes free software programmer-activists as a recursive public. Drawing on Michael Warner’s concept of “publics and counterpublics” (itself building on Habermas’s “public sphere”), Kelty portrays these programmers as a group that is addressed by copyright and code, and that works to make, maintain, and modify its technological networks and code as well as the discourse with which it engages as a public. This “circularity is essential to the phenomenon.”

Especially over the past two months, we’ve seen an exceptional effort on the part of online companies to engage users with the political process to oppose SOPA. For instance, on 16 November 2011, Tumblr blacked out every image, video, and word on each user’s dashboard, linking at the top of the page to http://www.tumblr.com/protect-the-net, where users could call their local representative.

The effort set off thousands of shared posts and hundreds of hours of calls.

While other companies attempted similar experiments (like Scribd on 21 December), Internet leaders joined together to spread word and inform Congress (such as with this letter from Facebook, Google, and Twitter on 15 November, and later this letter by many others on 14 December), and even political opponents of SOPA reached out on social media, as when Senator Ron Wyden asked people to sign their names so he could read the list during a filibuster. Other experts eventually spoke up too.

But perhaps the most intriguing political effort occurred within one specific online community: Reddit.com.

Reddit, founded in 2005, is a social news and discussion website where users submit and vote on content. According to Alexa.com, Reddit is currently the 53rd most-visited site in the United States. Befitting its increasing popularity, Reddit’s slogan is “the front page of the internet” — pertinent, because when a link hits the front page of Reddit, it can generate hundreds of thousands of page views. Though members at times highlight the site’s immaturity and incivility, its vibrant community — combined with the hypervisibility of the front page — has particularly thrived over the past couple of years, especially in terms of political participation and charity. Co-founder Alexis Ohanian gave a TED talk about Reddit’s dedication to strange things online and how that translates into a sort of political participation:


Humorously, every activist-related post on the official Reddit blog is tagged with “do it for splashy.”

In terms of more prominent political activism, Reddit’s community — particularly its /r/politics subreddit and the emergent subreddit /r/SOPA — has unified around opposing SOPA, in line with the free-speech, utopian personality that pervades the site. For instance, a couple of posts on /r/politics and /r/technology that reached the front page [1, 2] helped bring rapid visibility to Senator Wyden’s filibuster initiative.

A more effective protest occurred in the form of a website boycott. GoDaddy, the domain registrar, was discovered to be a supporter of SOPA. After some discussion on Reddit, one /r/politics thread reached the front page: GoDaddy supports SOPA, I’m transferring 51 domains & suggesting a move your domain day. Visibility of SOPA-related content was aided by a new subreddit, /r/sopa, linked from a site-wide sidebar on the Reddit homepage. Less than 24 hours after the boycott started (even though, by the numbers, it was deemed hardly successful), and with two more /r/politics threads that reached the front page [1, 2], GoDaddy reversed its stance and dropped support for SOPA.

SOPA debate continued to be fueled by various posts, including one by cofounder Alexis Ohanian: If SOPA existed, Steve & I never could’ve started reddit. Please help us win. At the end of December, /r/politics joined together to place pressure on SOPA-supporting Representative Paul Ryan; eventually, he reversed his position and denounced the bill.

Most notably, Alexis Ohanian recently announced on the Reddit blog that the entire site would voluntarily shut down on Wednesday 18 January 2012 for twelve hours, from 8am to 8pm EST. Replacing the front page will be “a simple message about how the PIPA/SOPA legislation would shut down sites like reddit, link to resources to learn more, and suggest ways to take action.” This blacking out of Reddit coincides with a series of cybersecurity experts’ testimonies in Congress, at which Ohanian will be speaking.

In reaction to SOPA (and PIPA, to which the opposition is now growing, since the SOPA vote has now been shelved), a vigorous public emerged across the web and united around discourse about the bills, particularly on Reddit.com. But to return to Kelty: is this a recursive public? Do the political users of Reddit have enough power and agency to maintain and modify their public?

I believe this question gets at a deeper question of ontology: what does political participation mean in a 1) networked, and 2) editable age? For instance, some users are able to contribute their skills to the discourse — e.g., “My friend and I wrote an application to boycott SOPA. Scan product barcodes and see if they’re made by a SOPA supporter. Enjoy.” — but in certain cases, participation in technological systems becomes participation in a recursive public because that participation helps modify the system. In the case of Reddit, participation can become political when content reaches extreme visibility. And this is particularly important when we reconsider that the mass media has barely covered SOPA as a topic: due to this conflict, participation on a networked platform like Reddit becomes an inherently political action.

And out of these seemingly innocuous actions emerge more political moves. In reaction to the blackout, other websites have agreed to join the effort, such as BoingBoing.net. Perhaps the decision with the most impact came on Monday, when Jimmy Wales announced that Wikipedia — which receives up to 25 million visitors per day at the English-language portal — would also shut down, but this time for a full 24 hours, after a lengthy discussion on Wales’ personal Wikipedia page. Wales followed the announcement on Twitter by saying, “I hope Wikipedia will melt phone systems in Washington on Wednesday.”

In a recent New York Times article, Reddit’s political actions were noted. “‘It’s encouraging that we got this far against the odds, but it’s far from over,’ said Erik Martin, the general manager of Reddit.com, a social news site that has generated some of the loudest criticism of the bills. ‘We’re all still pretty scared that this might pass in one form or another. It’s not a battle between Hollywood and tech; it’s people who get the Internet and those who don’t.’” Of course, Reddit isn’t the only platform that is part of this important recursive public, just as Twitter wasn’t the saving grace of the Arab Spring or the Iranian Revolution. The efforts of hundreds of activists around the country have contributed immensely to the anti-SOPA effort. But keep in mind that Reddit has reached a pinnacle of political participation in the last few months, and I have a feeling that — like YouTube in the 2008 presidential elections — Reddit may be the site to watch in 2012.

Alex Leavitt is a PhD student at USC Annenberg, where he studies digital culture and networked technology. Recently, his work has focused on creative participation in immense online networks, examining global participatory phenomena like Hatsune Miku and Minecraft. You can reach him on Twitter @alexleavitt or via email at aleavitt@usc.edu; to read more about his research, visit alexleavitt.com.

Debating Privacy in a Networked World for the WSJ

Earlier this week, the Wall Street Journal posted excerpts from a debate between me, Stewart Baker, Jeff Jarvis, and Chris Soghoian on privacy. In preparation for the piece, they had us respond to a series of questions. Jeff posted the full text of his responses here. Now it’s my turn. Here are the questions that I was asked and my responses.

Part 1:

Question: How much should people care about privacy? (400 words)

People should – and do – care deeply about privacy. But privacy is not simply the control of information. Rather, privacy is the ability to assert control over a social situation. This requires that people have agency in their environment and that they are able to understand any given social situation so as to adjust how they present themselves and determine what information they share. Privacy violations occur when people have their agency undermined or lack relevant information in a social setting that’s needed to act or adjust accordingly. Privacy is not protected by complex privacy settings that create what Alessandro Acquisti calls “the illusion of control.” Rather, it’s protected when people are able to fully understand the social environment in which they are operating and have the protections necessary to maintain agency.

Social media has prompted a radical shift. We’ve moved from a world that is “private-by-default, public-through-effort” to one that is “public-by-default, private-with-effort.” Most of our conversations in a face-to-face setting are too mundane for anyone to bother recording and publicizing. They stay relatively private simply because there’s no need or desire to make them public. Online, social technologies encourage broad sharing and thus, participating on sites like Facebook or Twitter means sharing to large audiences. When people interact casually online, they share the mundane. They aren’t publicizing; they’re socializing. While socializing, people have no interest in going through the efforts required by digital technologies to make their pithy conversations more private. When things truly matter, they leverage complex social and technical strategies to maintain privacy.

The strategies that people use to assert privacy in social media are diverse and complex, but the most notable approach involves limiting access to meaning while making content publicly accessible. I’m in awe of the countless teens I’ve met who use song lyrics, pronouns, and community references to encode meaning into publicly accessible content. If you don’t know who the Lions are or don’t know what happened Friday night or don’t know why a reference to Rihanna’s latest hit might be funny, you can’t interpret the meaning of the message. This is privacy in action.

The reason that we must care about privacy, especially in a democracy, is that it’s about human agency. To systematically undermine people’s privacy – or allow others to do so – is to deprive people of freedom and liberty.

Part 2:

Question: What is the harm in not being able to control our social contexts? Do we suffer because we have to develop codes to communicate on social networks? Or are we forced offline because of our inability to develop codes? (200 words)

Social situations are not one-size-fits-all. How a man acts with his toddler son is different from how he interacts with his business partner, not because he’s trying to hide something but because what’s appropriate in each situation differs. Rolling on the floor might provoke a giggle from his toddler, but it would be strange behavior in a business meeting. When contexts collide, people must choose what’s appropriate. Often, they present themselves in a way that’s as inoffensive to as many people as possible (and particularly those with high social status), which often makes for a bored and irritable toddler.

Social media is one big context collapse, but it’s not fun to behave as though being online is a perpetual job interview. Thus, many people lower their guards and try to signal what context they want to be in, hoping others will follow suit. When that’s not enough, they encode their messages to be only relevant to a narrower audience. This is neither good, nor bad; it’s simply how people are learning to manage their lives in a networked world where they cannot assume strict boundaries between distinct contexts. Lacking spatial separation, people construct context through language and interaction.

Part 3:

Question: Jeff and Stewart seem to be arguing that privacy advocates have too much power and that they should be reined in for the good of society. What do you think of that view? Is the status quo protecting privacy enough? So we need more laws? What kind of laws? Or different social norms? In particular, I would like to hear what you think should be done to prevent turning the Internet into one long job interview, as you described. If you had one or two examples of types of usages that you think should be limited, that would be perfect. (300 words)

When it comes to creating a society in which both privacy and public life can flourish, there are no easy answers. Laws can protect, but they can also hinder. Technologies can empower, but they can also expose. I respect my esteemed colleagues’ views, but I am also concerned about what it means to have a conversation among experts. Decisions about privacy – and public life – in a networked age are being made by people who have immense social, political, and/or economic power, often at the expense of those who are less privileged. We must engender a public conversation about these issues rather than leaving them in the hands of experts.

There are significant pros and cons to all social, legal, economic, and technological decisions. Balancing individual desires with the goals of the collective is daunting. Mediated life forces us to face serious compromises and hard choices. Privacy is a value that’s dear to many people, precisely because openness is a privilege. Systems must respect privacy, but there’s no easy mechanism to inscribe this value into code or law. Thus, we must publicly grapple with these issues and put pressure on decision-makers and systems-builders to remember that their choices have consequences.

We must also switch the conversation from one about data collection to one about data usage. This involves drawing on the language of abuse, violence, and victimization to think about what happens when people’s willingness to share is twisted to do them harm. Just as we have models for differentiating sex between consenting partners and rape, so too must we construct models that separate usage that’s empowering from usage that strips people of their freedoms and opportunities. For example, refusing health insurance based on search queries may make economic sense, but the social costs are far too great. Focusing on usage requires understanding who is doing what to whom and for what purposes. Limiting data collection may be structurally easier, but it doesn’t address the tensions between privacy and public-ness with which people are struggling.

Part 4:

Question: Jeff makes the point that we’re overemphasizing privacy at the expense of all the public benefits delivered by new online services. What do you think of that view? Do you think privacy is being sufficiently protected?

I think that positioning privacy and public-ness in opposition is a false dichotomy. People want privacy *and* they want to be able to participate in public. This is why I think it’s important to emphasize that privacy is not about controlling information, but about having agency and the ability to control a social situation. People want to share and they gain a lot from sharing. But that’s different than saying that people want to be exposed by others. Agency matters.

From my perspective, protecting privacy is about making certain that people have the agency they need to make informed decisions about how they engage in public. I do not think that we’ve done enough here. That said, I am opposed to approaches that protect people by disempowering them or by taking away their agency. I want to see approaches that force powerful entities to be transparent about their data practices. And I want to see approaches that put restrictions on how data can be used to harm people. For example, people should have the ability to share their medical experiences without being afraid of losing their health insurance. The answer is not to silence consumers from sharing their experiences, but rather to limit what insurers can do with information that they can access.

Question: Jeff says that young people are “likely the worst-served sector of society online”? What do you think of that? Do youth-targeted privacy safeguards prevent them from taking advantage of the benefits of the online world? Do the young have special privacy issues, and do they deserve special protections?

I _completely_ agree with Jeff on this point. In our efforts to protect youth, we often exclude them from public life. Nowhere is this more visible than with respect to the Children’s Online Privacy Protection Act (COPPA). This well-intended law was meant to empower parents. Yet, in practice, it has prompted companies to ban any child under the age of 13 from joining general-purpose communication services and participating on social media platforms. In other words, COPPA has inadvertently locked children out of being legitimate users of Facebook, Gmail, Skype, and similar services. Interestingly, many parents help their children circumvent age restrictions. Is this a win? I don’t think so.

I don’t believe that privacy protections focused on children make any sense. Yes, children are a vulnerable population, but they’re not the only vulnerable population. Can you imagine excluding senile adults from participating on Facebook because they don’t know when they’re being manipulated? We need to develop structures that support all people while also making sure that protection does not equal exclusion.

Thanks to Julia Angwin for keeping us on task!

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents struggle to determine which social media sites are appropriate for their children, the government tries to help by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), and John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? In order to do so, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey of 1,007 U.S. parents with children aged 10-14 living with them, conducted July 5-14, 2011, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack resources to pay for the service, those who lack credit cards, and those who refuse to provide extra data about their children in order to grant permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementations have made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

The Unintended Consequences of Cyberbullying Rhetoric

We all know that teen bullying – both online and offline – has devastating consequences. Jamey Rodemeyer’s suicide is a tragedy. He was tormented for being gay. He knew he was being bullied and he regularly talked about the fact that he was being bullied. Online, he even wrote: “I always say how bullied I am, but no one listens. What do I have to do so people will listen to me?” The fact that he could admit that he was being tormented coupled with the fact that he asked for help and folks didn’t help him should be a big wake-up call. We have a problem. And that problem is that most of us adults don’t have the foggiest clue how to help youth address bullying.

It doesn’t take a tragedy to know that we need to find a way to combat bullying. Countless regulators and educators are desperate to do something – anything – to put an end to the victimization. But in their desperation to find a solution, they often turn a blind eye to both research and the voices of youth.

The canonical research definition of bullying was written by Olweus and it has three components:

  • Bullying is aggressive behavior that involves unwanted, negative actions.
  • Bullying involves a pattern of behavior repeated over time.
  • Bullying involves an imbalance of power or strength.

What Rodemeyer faced was clearly bullying, but a lot of the reciprocal relational aggression that teens experience online is not actually bullying. Still, in the public eye, these concepts are blurred, and so when parents and teachers and regulators talk about wanting to stop bullying, they talk about wanting to stop all forms of relational aggression too. The problem is that many teens do not – and, for good reasons, cannot – identify a lot of what they experience as bullying. Thus, all of the newfangled programs to stop bullying are often missing the mark entirely. In a new paper that Alice Marwick and I co-authored – called “The Drama! Teen Conflict, Gossip, and Bullying in Networked Publics” – we analyzed the language of youth and realized that their use of the language of “drama” serves many purposes, not the least of which is to distance themselves from the perpetrator / victim rhetoric of bullying in order to save face and maintain agency.

For most teenagers, the language of bullying does not resonate. When teachers come in and give anti-bullying messages, it has little effect on most teens. Why? Because most teens are not willing to recognize themselves as a victim or as an aggressor. To do so would require them to recognize themselves as disempowered or abusive. They aren’t willing to go there. And when they are, they need support immediately. Yet, few teens have the support structures necessary to make their lives better. Rodemeyer is a case in point. Few schools have the resources to provide youth with the necessary psychological counseling to work through these issues. But if we want to help youth who are bullied, we need infrastructure in place to help young people when they are willing to recognize themselves as victimized.

To complicate matters more, although school after school is scrambling to implement anti-bullying programs, no one is assessing the effectiveness of these programs. This is not to say that we don’t need education – we do. But we need the interventions to be tested. And my educated hunch is that we need to be focusing more on positive frames that use the language of youth rather than focusing on the negative.

I want to change the frame of our conversation because we need to change the frame if we’re going to help youth. I’ve spent the last seven years talking to youth about bullying and drama and it nearly killed me when I realized that all of the effort that adults are putting into anti-bullying campaigns is falling on deaf ears and doing little to actually address what youth are experiencing. Even hugely moving narratives like “It Gets Better” aren’t enough when a teen can make a video for other teens and then kill himself because he’s unable to make it better in his own community.

In an effort to ground the bullying conversation, Alice Marwick and I just released a draft of our new paper: “The Drama! Teen Conflict, Gossip, and Bullying in Networked Publics.” We also co-authored a New York Times Op-Ed in the hopes of reaching a wider audience: “Why Cyberbullying Rhetoric Misses the Mark.” Please read these and send us feedback or criticism. We are in this to help the youth that we spend so much time with and we’re both deeply worried that adult rhetoric is going in the wrong direction and failing to realize why it’s counterproductive.

Image from Flickr by Brandon Christopher Warren

Six Provocations for Big Data

The era of “Big Data” has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and many others are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing information from Twitter, Google, Verizon, 23andMe, Facebook, Wikipedia, and every space where large groups of people leave digital traces and deposit data. Significant questions emerge. Will large-scale analysis of DNA help cure diseases? Or will it usher in a new wave of medical inequality? Will data analytics help make people’s access to information more efficient and effective? Or will it be used to track protesters in the streets of major cities? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Some or all of the above?

Kate Crawford and I decided to sit down and interrogate some of the assumptions and biases embedded in the rhetoric surrounding “Big Data.” The resulting piece – “Six Provocations for Big Data” – offers a multi-disciplinary social analysis of the phenomenon with the goal of sparking a conversation. The paper will be presented as a keynote address at the Oxford Internet Institute’s 10th Anniversary “A Decade in Internet Time” Symposium.

Feedback is more than welcome!

Socially-Mediated Publicness: A Call for Papers

Please distribute widely!


Special Theme Issue of the Journal of Broadcasting & Electronic Media

“Socially-Mediated Publicness”

Guest Editors:

– Nancy Baym (University of Kansas)

– danah boyd (Microsoft Research)

Editor: Zizi Papacharissi

Social media call into question conventional understandings of what it means to “be public,” what it means to be “in a public,” and even the meaning of “public” itself. New types of publics are emerging because of the technological affordances of social media and individuals may be more visible than ever before, whether they seek this or not. This special issue will explore these issues.

We seek scholarship from an array of theoretical and methodological perspectives that critically examines how public life is reconfigured because of or in relation to social media. We welcome articles from diverse fields, including media studies, communication, anthropology, sociology, political theory, critical theory, etc.

Possible topics include, but are not limited to:

  • Processes and practices of building and living in online publics
  • How new technologies of publicness affect celebrities, artists, musicians, and other creators
  • How mediated publics challenge social, political, and economic assumptions
  • The meaning of concepts such as “audience” and “listening” in mediated public spaces
  • How counterpublics and intimate publics are reshaped by technology
  • The relationships between being public and being part of a public
  • Degrees, boundaries, and scales of technologically-mediated publicness
  • How new types of publicness reconfigure identity and race, class, gender, sexuality, religion, and/or nationality

In order to be more public, this special issue of JOBEM will be published as an open-access issue. All articles will be available online at the point of publication. The anticipated publication date for this issue is September 2012.

Manuscripts should conform to the guidelines of the Journal of Broadcasting & Electronic Media (www.beaweb.org/jobem).

By December 12, 2011, you should send a title, abstract, and list of 5 potential reviewers to jobem.publicness@gmail.com to help us streamline the peer review process.

Articles should be submitted no later than January 6, 2012 at: http://mc.manuscriptcentral.com/hbem (select “Special Issue: Socially Mediated Publicness” as a manuscript type).