In Defense of Friction

1903 telephone operator (John McNab on Flickr)

There is no doubt that technology has made my life much easier. I rarely share the romantic view that things were better when human beings used to do the boring tasks that machines now do. For example, I do not think there is much to gain by bringing back the old telephone operators. However, there are reasons to believe social computing systems should not automate social interactions.

In his paper on online trust, Coye Cheshire points out that automated trust systems can undermine trust itself by incentivizing cooperation through fear of punishment rather than through genuine trust among people. Cheshire argues that:

strong forms of online security and assurance can supplant, rather than enhance, trust.

Leading to what he calls the trust paradox:

assurance structures designed to make interpersonal trust possible in uncertain environments undermine the need for trust in the first place

My collaborators and I found something similar when trying to automate credit-giving in a creative online community. We found that automatic attribution given by a computer system does not replace the manual credit given by another human being. Attribution, it turns out, is a useful piece of information provided by a system, while credit given by a person is a signal of appreciation, one that is expected and that cannot be automated.

Slippery when icy - problems with frictionless spaces (ntr23 on Flickr)

Similarly, others have noted how Facebook’s birthday reminders have “ruined birthdays” by “commoditizing” social interactions and people’s social skills. Furthermore, some have argued that “Facebook is ruining sharing” by making it frictionless.

In many scenarios, automation is quite useful, but with social interactions, removing friction can have a harmful effect on the social bonds established through friction itself. In other cases, as Shauna points out, “social networking sites are good for relationships so tenuous they couldn’t really bear any friction at all.”

I am not sure if sharing has indeed been ruined by Facebook, but perhaps this opens opportunities for new online services that allow people to have “friction-full” interactions.

What kind of friction would you add to existing online social systems?

Social Science PhD Internships at Microsoft Research New England (Spring & Summer 2012)

Microsoft Research New England (MSRNE) is looking for PhD interns to join the social media collective for Spring and Summer 2012. For these positions, we are looking primarily for social science PhD students (including communications, sociology, anthropology, media studies, information studies, etc.). The Social Media Collective is a collection of scholars at MSRNE who focus on socio-technical questions, primarily from a social science perspective. We are not an applied program; rather, we work on critical research questions that are important to the future of social science scholarship.

MSRNE internships are 12-week paid internships in Cambridge, Massachusetts. PhD interns at MSRNE are expected to devise and execute a research project during their internships. The expected outcome of an internship at MSRNE is a publishable scholarly paper for an academic journal or conference of the intern’s choosing. The goal of the internship is to help the intern advance their own career; interns are strongly encouraged to work towards a publication outcome that will help them on the academic job market. Interns are also expected to collaborate with full-time researchers and visitors, give short presentations, and contribute to the life of the community. While this is not an applied program, MSRNE encourages interdisciplinary collaboration with computer scientists, economists, and mathematicians. There are also opportunities to engage with product groups at Microsoft, although this is not a requirement.

Topics that are currently of interest to the social media collective include: privacy & publicity, internet public policy research, online safety (from sexting to bullying to gang activities), technology and human trafficking, transparency & surveillance, conspicuous consumption & brand culture, piracy, news & information flow, and locative media. That said, we are open to other interesting topics, particularly those that may have significant societal impact. While most of the researchers in the collective are ethnographers, we welcome social scientists of all methodological persuasions.

Applicants should have advanced to candidacy in their PhD program or be close to advancing to candidacy. (Unfortunately, there are no opportunities for Master’s students at this time.) While this internship opportunity is not strictly limited to social scientists, preference will be given to social scientists and humanists making socio-technical inquiries. (Note: While other branches of Microsoft Research focus primarily on traditional computer science research, this group does no development-driven research and is not looking for people who are focused solely on building systems at this time. We welcome social scientists with technical skills and strongly encourage social scientists to collaborate with computer scientists at MSRNE.) Preference will be given to intern candidates who work to make public and/or policy interventions with their research. Interns will benefit most from this opportunity if there are natural opportunities for collaboration with other researchers or visitors currently working at MSRNE.

Applicants from universities outside of the United States are welcome to apply.


The Social Media Collective is organized by Senior Researcher danah boyd and includes Postdoctoral Researchers Mike Ananny, Alice Marwick, and Andrés Monroy-Hernández. Spring faculty visitors will include T.L. Taylor (IT University of Copenhagen) and Eszter Hargittai (Northwestern University). Summer visitors are TBD.

Previous interns in the collective have included Amelia Abreu (UWashington information), Scott Golder (Cornell sociology), Germaine Halegoua (U. Wisconsin, communications), Jessica Lingel (Rutgers library & info science), Laura Noren (NYU sociology), Omar Wasow (Harvard African-American studies), and Sarita Yardi (GeorgiaTech HCI). Previous and current faculty MSR visitors to the collective include: Alessandro Acquisti, Beth Coleman, Bernie Hogan, Christian Sandvig, Helen Nissenbaum, James Grimmelmann, Judith Donath, Jeff Hancock, Kate Crawford, Karrie Karahalios, Lisa Nakamura, Mary Gray, Nalini Kotamraju, Nancy Baym, Nicole Ellison, and Tarleton Gillespie.

If you are curious to know more about MSRNE, I suspect that many of these people would be happy to tell you about their experiences here. Previous interns are especially knowledgeable about how this process works.


To apply for a PhD internship with the social media collective:

1. Fill out the online application form. Make sure to indicate that you prefer Microsoft Research New England and “social media” or “social computing.” You will need to list two recommenders through this form. Make sure your recommenders respond to the request for letters.

2. Send an email to msrnejob -at- microsoft-dot-com with the subject “SMC PhD Intern Application: ” that includes the following five things:
a. A brief description of your dissertation project.
b. An academic article you have written (published or unpublished) that shows your writing skills.
c. A copy of your CV.
d. A pointer to your website or other online presence (if available).
e. A short description of 1-3 projects that you might imagine doing as an intern at MSRNE.

We will begin considering internship applications on January 10 and consider applications until all social media internship positions are filled.


“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“The summer I spent at Microsoft Research was one of the highlights of my time in grad school. It helped me expand my research in new directions and connect with world-class scholars. As someone with a technical bent, this internship was an amazing opportunity to meet and learn from really smart humanities and social science researchers. Finally, Microsoft Research as an organization has the best of both worlds: the academic freedom and intellectual stimulation of a university with the perks of industry.”
— Andrés Monroy-Hernández, Media, Arts and Sciences, MIT

Debating Privacy in a Networked World for the WSJ

Earlier this week, the Wall Street Journal posted excerpts from a debate between me, Stewart Baker, Jeff Jarvis, and Chris Soghoian on privacy. In preparation for the piece, they had us respond to a series of questions. Jeff posted the full text of his responses here. Now it’s my turn. Here are the questions that I was asked and my responses.

Part 1:

Question: How much should people care about privacy? (400 words)

People should – and do – care deeply about privacy. But privacy is not simply the control of information. Rather, privacy is the ability to assert control over a social situation. This requires that people have agency in their environment and that they are able to understand any given social situation so as to adjust how they present themselves and determine what information they share. Privacy violations occur when people have their agency undermined or lack relevant information in a social setting that’s needed to act or adjust accordingly. Privacy is not protected by complex privacy settings that create what Alessandro Acquisti calls “the illusion of control.” Rather, it’s protected when people are able to fully understand the social environment in which they are operating and have the protections necessary to maintain agency.

Social media has prompted a radical shift. We’ve moved from a world that is “private-by-default, public-through-effort” to one that is “public-by-default, private-with-effort.” Most of our conversations in a face-to-face setting are too mundane for anyone to bother recording and publicizing. They stay relatively private simply because there’s no need or desire to make them public. Online, social technologies encourage broad sharing and thus, participating on sites like Facebook or Twitter means sharing to large audiences. When people interact casually online, they share the mundane. They aren’t publicizing; they’re socializing. While socializing, people have no interest in going through the efforts required by digital technologies to make their pithy conversations more private. When things truly matter, they leverage complex social and technical strategies to maintain privacy.

The strategies that people use to assert privacy in social media are diverse and complex, but the most notable approach involves limiting access to meaning while making content publicly accessible. I’m in awe of the countless teens I’ve met who use song lyrics, pronouns, and community references to encode meaning into publicly accessible content. If you don’t know who the Lions are or don’t know what happened Friday night or don’t know why a reference to Rihanna’s latest hit might be funny, you can’t interpret the meaning of the message. This is privacy in action.

The reason that we must care about privacy, especially in a democracy, is that it’s about human agency. To systematically undermine people’s privacy – or allow others to do so – is to deprive people of freedom and liberty.

Part 2:

Question: What is the harm in not being able to control our social contexts? Do we suffer because we have to develop codes to communicate on social networks? Or are we forced offline because of our inability to develop codes? (200 words)

Social situations are not one-size-fits-all. How a man acts with his toddler son is different from how he interacts with his business partner, not because he’s trying to hide something but because what’s appropriate in each situation differs. Rolling on the floor might provoke a giggle from his toddler, but it would be strange behavior in a business meeting. When contexts collide, people must choose what’s appropriate. Often, they present themselves in a way that’s as inoffensive to as many people as possible (and particularly those with high social status), which often makes for a bored and irritable toddler.

Social media is one big context collapse, but it’s not fun to behave as though being online is a perpetual job interview. Thus, many people lower their guards and try to signal what context they want to be in, hoping others will follow suit. When that’s not enough, they encode their messages to be only relevant to a narrower audience. This is neither good, nor bad; it’s simply how people are learning to manage their lives in a networked world where they cannot assume strict boundaries between distinct contexts. Lacking spatial separation, people construct context through language and interaction.

Part 3:

Question: Jeff and Stewart seem to be arguing that privacy advocates have too much power and that they should be reined in for the good of society. What do you think of that view? Is the status quo protecting privacy enough? So we need more laws? What kind of laws? Or different social norms? In particular, I would like to hear what you think should be done to prevent turning the Internet into one long job interview, as you described. If you had one or two examples of types of usages that you think should be limited, that would be perfect. (300 words)

When it comes to creating a society in which both privacy and public life can flourish, there are no easy answers. Laws can protect, but they can also hinder. Technologies can empower, but they can also expose. I respect my esteemed colleagues’ views, but I am also concerned about what it means to have a conversation among experts. Decisions about privacy – and public life – in a networked age are being made by people who have immense social, political, and/or economic power, often at the expense of those who are less privileged. We must engender a public conversation about these issues rather than leaving them in the hands of experts.

There are significant pros and cons to all social, legal, economic, and technological decisions. Balancing individual desires with the goals of the collective is daunting. Mediated life forces us to face serious compromises and hard choices. Privacy is a value that’s dear to many people, precisely because openness is a privilege. Systems must respect privacy, but there’s no easy mechanism to inscribe this value into code or law. Thus, we must publicly grapple with these issues and put pressure on decision-makers and systems-builders to remember that their choices have consequences.

We must also shift the conversation from one about data collection to one about data usage. This involves drawing on the language of abuse, violence, and victimization to think about what happens when people’s willingness to share is twisted to do them harm. Just as we have models for differentiating sex between consenting partners and rape, so too must we construct models that separate usage that’s empowering from usage that strips people of their freedoms and opportunities. For example, refusing health insurance based on search queries may make economic sense, but the social costs are far too great. Focusing on usage requires understanding who is doing what to whom and for what purposes. Limiting data collection may be structurally easier, but it doesn’t address the tensions between privacy and public-ness with which people are struggling.

Part 4:

Question: Jeff makes the point that we’re overemphasizing privacy at the expense of all the public benefits delivered by new online services. What do you think of that view? Do you think privacy is being sufficiently protected?

I think that positioning privacy and public-ness in opposition is a false dichotomy. People want privacy *and* they want to be able to participate in public. This is why I think it’s important to emphasize that privacy is not about controlling information, but about having agency and the ability to control a social situation. People want to share and they gain a lot from sharing. But that’s different than saying that people want to be exposed by others. Agency matters.

From my perspective, protecting privacy is about making certain that people have the agency they need to make informed decisions about how they engage in public. I do not think that we’ve done enough here. That said, I am opposed to approaches that protect people by disempowering them or by taking away their agency. I want to see approaches that force powerful entities to be transparent about their data practices. And I want to see approaches that put restrictions on how data can be used to harm people. For example, people should have the ability to share their medical experiences without being afraid of losing their health insurance. The answer is not to silence consumers from sharing their experiences, but rather to limit what insurers can do with information that they can access.

Question: Jeff says that young people are “likely the worst-served sector of society online”? What do you think of that? Do youth-targeted privacy safeguards prevent them from taking advantage of the benefits of the online world? Do the young have special privacy issues, and do they deserve special protections?

I _completely_ agree with Jeff on this point. In our efforts to protect youth, we often exclude them from public life. Nowhere is this more visible than with respect to the Children’s Online Privacy Protection Act (COPPA). This well-intended law was meant to empower parents. Yet, in practice, it has prompted companies to ban any child under the age of 13 from joining general-purpose communication services and participating on social media platforms. In other words, COPPA has inadvertently locked children out of being legitimate users of Facebook, Gmail, Skype, and similar services. Interestingly, many parents help their children circumvent age restrictions. Is this a win? I don’t think so.

I don’t believe that privacy protections focused on children make any sense. Yes, children are a vulnerable population, but they’re not the only vulnerable population. Can you imagine excluding senile adults from participating on Facebook because they don’t know when they’re being manipulated? We need to develop structures that support all people while also making sure that protection does not equal exclusion.

Thanks to Julia Angwin for keeping us on task!

Accepting Inefficiencies and Different Scales of Change in Networked Environments

I didn’t know him personally, but I was saddened to read about Ilya Zhitomirskiy’s recent suicide.  I have no personal insight into his situation, the sources of his stress, or what brought him to take his life.  It’s tragic, full stop.

As I was reading Gawker’s story on his death, I was struck by its implicit message that some of Zhitomirskiy’s stress may have derived, in part, from his desire to “change the world.”  As Gawker says, “Did the pressure of running a struggling, much-hyped start-up—not just any start-up, but a Facebook killer—contribute to Zhitomirskiy’s death?”  He and his Diaspora co-founders set themselves the monumental task of competing with Facebook – crafting a brand new social network that challenged one of the internet’s most powerful laws — a difficult, but noble goal.

It’s the scale of this goal that stands out to me, that I want to think through here.

I often explicitly hear from tech entrepreneurs—or even just those who use platforms like Kiva and Kickstarter—that they have an explicit desire to “change the world” through their work.  Why does this seem, to me at least, to be such a dominant scale for change embedded in the projects of internet-based entrepreneurs?  Is it something about the network itself, the people drawn to the network, the rhetoric around online social movements?  Especially at a time when #Occupy protesters are calling for wide-spread societal change, it might seem disingenuous to question the very idea of world-changing work.  This is not my intention.  The world does constantly need to be rethought and reinvented.

This potential is enticing and I have deep sympathy with the promise of networked social action led by articulate, impassioned, talented individuals like Zhitomirskiy.  Who wouldn’t want to have their vision of how they think the world should be adopted by millions of people?  Who wouldn’t want to live their life motivated by a passion to make social change at a scale the internet makes possible?

My concern is about people’s need to be seen and judged publicly in order to do this kind of work, and the absence of understanding what kind of pressures such motivations, predicaments, and cultivated visibility might create.  Most people are not accustomed to making themselves seen and vulnerable in pursuit of large-scale change, embodying the responsibility for achieving that change, and judging and being judged on their ability to lead large numbers of people organized into fuzzy and dynamic networks.

Translating the ideal of large-scale networked social change into a personal goal can be exciting and motivating, but it can also be an ever-present source of self-initiated, socially constructed pressure.

We are at a tricky moment in history where networks and near-constant connectedness make visible fundamental mismatches between the scale of our experience, the scale of our imagined impact, and the scale of our actual agency.  We are victims of complex, distributed systems that are beyond our individual capacities to appreciate or control (the international banking system, environmental change).  And some of us are privileged enough to have time to ponder these systems and share our visions of how we think things should be with those we would likely never have met even 5 years ago.  But all of us are also stuck figuring out how to be the middle space between systems beyond our control and agency that’s seemingly within our grasp.  We’re being forced to figure out, simultaneously, our personal stories and our network stories – without much appreciation for how much emotional work this entails and what kind of support we need to imagine and realize potentials.

It’s often hard to know what to do with such imagined potential.  And it’s even harder to see the shape of the technical, social, economic, cultural, and political systems we inhabit — to understand them well enough to know what’s possible, what we might change at any given moment, and where it’s worth investing our bodies and souls.  It’s a new kind of skill to know not just how to use networked media to create social change, but to understand, find peace with, work within, and subtly change the potentials and limits of your network, and to know the merits of working at different scales of change.

Essentially, I think we don’t understand yet the very idea of networked scales of change, how to live in relation to them — and when to shift among scales in ways that take care with our psyches.

I’m not criticizing digital, networked mediated relationships, saying that we should retreat from online networks or reifying physical, face-to-face interactions.  Much excellent work exists on how complex these shifts and distinctions are.  Nor am I talking about the potential cognitive limits on social relationships, or the need to craft new kinds of social science for making sense of big data.  This isn’t a problem of information overload or filter failure, nor is it only a critique of the idea of perfect memory.  This is about negotiating our individual emotional relationships to the lived realities and imagined potentials of big data, fast rhythms, network efficiency, and algorithmic automaticity.

I’m envisioning something more akin to a spiritual understanding of networked scales, knowing how and when to: navigate ourselves among scales, appreciate the value of different impacts at different moments, craft emotional relationships to networked potential that go beyond today’s instincts to confuse measurement with interpretation, efficiency with success, network position with personal satisfaction.

Optimizing your relationship to a search engine, building your list of Twitter followers and keeping them engaged, garnering more YouTube views, managing your Facebook threads and friend lists in a timely fashion, and imagining a future income from all this – these are exciting moves that let us experience and affect change at new and different scales.  But, un-examined, they also look like sources of self-initiated, socially constructed stress that lead to less healthy lives than we might otherwise find in our networks.

At the risk of seeming like I’m simply a curmudgeon complaining about the narcotizing dysfunction of networked media, I think that we need to have new conversations about what these scales and speeds mean to us as humans trying to live within and change a world that has gotten very large, very fast.

I used to volunteer regularly with a peer counseling hotline.  For someone who’s been embedded in cultures of technology and designing for years, it always struck me as an incredibly inefficient service.  We’d wait by the phones for people to call.  If no volunteer was available when someone called, they’d get a message asking them to wait or try again later.  You never knew if a call would take 5 minutes or an hour.  Sometimes we’d get prank phone calls that would tie up staff and prevent people with genuine calls from getting through.  Sometimes the volunteers would joke that if only we could use the downtime to make outgoing peer counseling calls (“Hi, someone said you might need to talk about something?”), our time would be used more efficiently.

I was deeply frustrated by some of the hotline’s inefficiencies (especially those that prevented people from getting help) but, over time, I grew to appreciate other aspects of the hotline’s sporadic pacing.  Specifically, it often takes time for people to open up.  Some details of people’s stories seemed trivial and needlessly time-consuming, but people seemed to interpret my patience with them as evidence that I was willing to wade through the mundane as we searched together for the meaningful.  The calls took time, they had to happen one-on-one, and my tolerance of inefficiency built a particular kind of trust.  Improvements to the system can always be made, but the counseling experience only seemed to “work” at a specific scale and rhythm that was inefficient but somehow human.

Essentially, internet life lets us see and imagine new scales of experience but, unlike previous mass media, it leaves open a question of agency – how can individuality persist and thrive in increasingly social and connected environments?  How can we and should we imagine, realize and be okay with our individual locations within networks?  What are the limits on this agency?  When are these limits inefficiencies whose scale should be embraced, and when are they constraints whose powers should be resisted?

I don’t know the answers to these questions, but my hope is that we can make room for many different scales of existence.  I hope that we can know that it’s okay to be silent, to fail, to wait, to listen, to slow down, to be alone, to enjoy success that isn’t rendered in networks or visible millions — to be patient with your mind and heart as we figure out our networks together.

3.75 Million Lawbreaking Parents

The Department of Justice would like the authority to put millions of American parents in prison. Don’t believe me? Read on.

A House Judiciary Committee hearing today considered the federal computer crime statute, the Computer Fraud and Abuse Act, known to its friends as the CFAA. Among other things, the Act punishes anyone who “exceeds authorized access, and thereby obtains . . . information.” The penalty for a first-time offense is a fine and up to a year in prison.

This provision has been used to prosecute people whose only misuse of a computer was violating a website’s terms of service. Most famously, Lori Drew helped her daughter create a fake MySpace profile under the name “Josh Evans” to flirt with and then disparage a 13-year-old neighbor, Megan Meier. After “Josh” told Megan, “Have a shitty rest of your life. The world would be a better place without you,” Megan killed herself. When Drew was prosecuted, it wasn’t for homicide, but for exceeding authorized access to MySpace’s servers. Drew herself deserves no sympathy, but the theory that her crime was complete when she created the fake profile would make a criminal of anyone who fails to comply with every last term in the fine print of a website’s terms of service.

Orin Kerr, the leading academic authority on the CFAA, has pointed out the absurdity of this reading of the CFAA. At the hearing today, he explained that it would make him a criminal because he lives in Arlington, Virginia but lists “Washington, D.C.” on his Facebook profile. In his written testimony, he pointed to simple statutory fixes that would draw a more sensible line between routine computer use and real computer crime.

But Richard Downing, the Deputy Chief of the Computer Crimes and Intellectual Property Section of the Department of Justice, was having none of it. In his testimony, he explained:

We believe that Congress intended to criminalize such conduct, and we believe that deterring it continues to be important. Because of this, we are highly concerned about the effects of restricting the definition of “exceeds authorized access” in the CFAA to disallow prosecutions based upon a violation of terms of service or similar contractual agreement with an employer or provider.

Here’s an example of the severe criminal computer misuse that Downing would like the CFAA to prohibit: parents helping their children get on Facebook. Not just getting on Facebook with fake profiles to harass classmates, like in Drew. No, getting on Facebook at all.

Facebook doesn’t let users sign up unless they’re 13 or older. Facebook does this in order to comply with a 1998 law, the federal Children’s Online Privacy Protection Act. It puts stringent limits on the personal information websites can collect from children under 13. Some websites comply by going through the hard work of getting and verifying parental consent. But many others, including Facebook, comply by prohibiting children under 13 from using their site at all.

The prohibition is honored mainly in the breach. According to Consumer Reports, 7.5 million children under 13 use Facebook. One academic study found that 46% of 12-year-olds used Facebook, and other research is consistent with these findings. Pre-teens are on Facebook, notwithstanding its policies.

A recent paper by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey examined the motivations of parents whose underage children were on Facebook. Some misunderstood the minimum age, or thought it was only a recommendation, but a substantial fraction understood that it was a genuine requirement. Over three quarters of the parents they surveyed agreed there were circumstances under which they’d let their child violate a site’s age restrictions, particularly for school-related reasons or to communicate with family members. And of the roughly half of surveyed parents whose children had Facebook accounts, more than half had helped create the account.

Facebook’s terms of service leave no room for doubt:

4. … Here are some commitments you make to us relating to registering and maintaining the security of your account: …

1. You will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission.

5. You will not use Facebook if you are under 13.

Parents who create accounts for their children violate section 4.1 by “creat[ing] an account for anyone other than yourself.” On the Department of Justice’s theory of “exceeds authorized access,” that means the parents have directly violated the CFAA. But even parents who merely help their underage children create accounts (for example by explaining how to enter a false birthdate) are in trouble. Their children “exceed[] authorized access” by violating section 4.5. That makes the parents guilty of aiding and abetting a violation of the CFAA. The punishment is the same.

boyd and her coauthors focus mainly on the failure of COPPA’s regulatory strategy, and on the moral lesson parents teach their children when they show that lying online is acceptable. But there is another, even more frightening point implied by their findings. A mother in Ohio who helps her son sign up for Facebook to keep in touch with his cousin in Arizona is exposing herself to criminal liability, if the CFAA means what the Department of Justice claims it does. Worse still, the Department of Justice wants the law to make a criminal of that mother.

This ought to be a scandal. A high official in the Department of Justice is endorsing a law that could be used to put millions of parents in prison. Do something to tick off a federal prosecutor and even if you’ve otherwise lived a blameless life, they can lock you up for helping your kid use Facebook. While the Department has started to walk back from this extreme position, its mushy-mouthed disavowals would be a lot more persuasive if it hadn’t already brought a prosecution based on just such a theory. This one should be easy. Prosecutors don’t need this kind of abusive power over parents, and they should say, openly and clearly, that they don’t have it.

Microsoft Research, Social Media Postdoc Opening

The Social Media Collective at Microsoft Research New England (MSRNE) is looking for a social media postdoctoral researcher for next year. This position is an ideal opportunity for a scholar whose work touches on social media, internet studies, technology policy, and/or science and technology studies.

Application deadline: December 12, 2011.

Microsoft Research provides a vibrant multidisciplinary research environment with an open publications policy and with close links to top academic institutions around the world. Postdoc researcher positions provide an opportunity to develop your research career and to interact with some of the top minds in the research community, with the potential to have your research realized in products and services that will be used world-wide. Postdoc researchers are invited to define their own research agenda and demonstrate their ability to drive forward an effective program of research. Successful candidates will have a well-established research track record as demonstrated by journal publications and conference papers, as well as participation on program committees, editorial boards, and advisory panels.

Postdoc researchers receive a competitive salary and benefits package, and are eligible for relocation expenses. Postdoc researchers are hired for a one- or two-year fixed term appointment following the academic calendar, starting in July 2012. Applicants must have completed the requirements for a PhD, including submission of their dissertation, prior to joining Microsoft Research.

While each of the six Microsoft Research labs has openings in a variety of different disciplines, the Social Media Collective at Microsoft Research New England (located in Cambridge, MA) is especially interested in identifying social science candidates. Qualifications include a strong academic record in anthropology, communications, information science, jurisprudence, media studies, sociology, or related fields. The ideal candidate will be working on issues surrounding social media, internet studies, technology policy, and/or science and technology studies.

The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, PhD interns, and research assistants. Current projects include:

  • How do youth make sense of networked publics? (danah boyd and Alice Marwick)
  • How do people with minimal access to the Internet use mobile media to negotiate cultural marginalization and social immobility? (Mary L. Gray)
  • How can networked information systems support a public right to hear, not only an individual right to speak? (Mike Ananny)
  • How do digital modes of self-presentation function as displays of conspicuous consumption? (Alice Marwick)
  • What is technology’s role in human trafficking? (danah boyd)
  • How do we listen to each other in networked environments, and what are the implications for intimacy, privacy and social change? (Kate Crawford)
  • How do people living in regions controlled by organized crime engage in collective action using social media? (Andrés Monroy-Hernández)
  • How does social media use affect relationships between artists and audiences in the creative industries? (Nancy Baym)

To apply for a postdoc position at MSRNE:

  1. Submit an online application, marking your interest as “Social Computing,” at:
  2. After you submit your application, send us an email (msrnejob [at] alerting us so that we can immediately request letters of reference on your behalf. Indicate that you are applying for the social science postdoc opening. Include the following in your email: 1) Two journal articles, book chapters, or an equivalent writing sample; 2) An abstract of your dissertation (one page maximum length); 3) A description of how your research agenda relates to social media (one page maximum length)

For more information, see:

To learn more about the Social Media Collective, check out our blog:

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine what social media sites are appropriate for their children, the government tries to help parents by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? In order to do so, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey, conducted July 5–14, 2011, of 1,007 U.S. parents with children between the ages of 10 and 14 living with them, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often at the cost of excluding children whose parents lack the resources to pay for the service, lack credit cards, or refuse to hand over extra data about their children in order to grant permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementations have made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe