What does the Facebook experiment teach us?

I’m intrigued by the reaction that has unfolded around the Facebook “emotion contagion” study. (If you aren’t familiar with this, read this primer.) As others have pointed out, the practice of A/B testing content is quite common. And Facebook has a long history of experimenting with how it can influence people’s attitudes and practices, even in the realm of research. An earlier study showed that Facebook’s decisions could shape voters’ practices. But why is it that *this* study has sparked a firestorm?

In asking people about this, I’ve been given two dominant reasons:

  1. People’s emotional well-being is sacred.
  2. Research is different than marketing practices.

I don’t find either of these responses satisfying.

The Consequences of Facebook’s Experiment

Facebook’s research team is not truly independent of product. They have a license to do research and publish it, provided that it contributes to the positive development of the company. If Facebook had known that this research would spark such a negative PR backlash, they never would’ve allowed it to go forward or be published. I can only imagine the ugliness of the fight inside the company now, but I’m confident that PR is demanding silence from researchers.

I do believe that the research was intended to be helpful to Facebook. So what was the intended positive contribution of this study? I get the sense from Adam Kramer’s comments that the goal was to determine whether content sentiment could affect people’s emotional response after being on Facebook. In other words, given that Facebook wants to keep people on Facebook, if people come away from Facebook feeling sadder, presumably they won’t want to come back. Thus, it’s in Facebook’s best interest to leave people feeling happier, and this study suggests that the sentiment of the content they see influences this. One applied takeaway for product, then, is to downplay negative content. Presumably this is better for users and better for Facebook.

We can debate all day long as to whether or not this is what the study actually shows, but let’s work with it for a second. Let’s say that, pre-study, Facebook showed 1 negative post for every 3 positive ones and now, because of this study, Facebook shows 1 negative post for every 10 positive ones. If that’s the case, was the one-week treatment worth the outcome in terms of longer-term content exposure? Who gets to make that decision?
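For readers who like the arithmetic spelled out, here is a minimal back-of-the-envelope sketch of that trade-off. Every number in it is hypothetical: the 1:3 and 1:10 ratios come from the made-up scenario above, and the posts-per-week figure and the treatment effect are my own assumptions, not figures from the study or from Facebook.

```python
# Back-of-the-envelope sketch of the trade-off described above.
# All numbers are hypothetical; none come from the study or from Facebook.

POSTS_PER_WEEK = 100  # assumed number of feed posts a user sees per week

def negative_per_week(neg, pos):
    """Negative posts seen per week, given a negative:positive mix of neg:pos."""
    return POSTS_PER_WEEK * neg / (neg + pos)

before = negative_per_week(1, 3)    # hypothetical pre-study mix: 1 negative per 3 positive
after = negative_per_week(1, 10)    # hypothetical post-study mix: 1 negative per 10 positive

# Assume the one-week treatment doubled exposure to negative posts for that week.
one_week_extra = 2 * before - before

# Reduction in negative posts over the following year if the new mix sticks.
yearly_reduction = (before - after) * 52

print(f"Pre-study:  {before:.0f} negative posts per week")
print(f"Post-study: {after:.0f} negative posts per week")
print(f"Extra negative posts during the one-week treatment: {one_week_extra:.0f}")
print(f"Fewer negative posts over the following year: {yearly_reduction:.0f}")
```

The point isn’t the invented numbers themselves; it’s that this is exactly the kind of cost-benefit arithmetic someone inside Facebook would have to do, and the question of who gets to do it remains.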

Folks keep talking about all of the potential harm that the study could have caused – the possibility of suicides, the mental health consequences. But what about the potential harm of negative content on Facebook more generally? Even if we believe that there were subtle negative costs to those who received the treatment, the ongoing costs of negative content on Facebook in every week other than that one-week experiment must be greater. How, then, do we account for the positive benefits to users if Facebook increased positive treatments en masse as a result of this study? Of course, the problem is that Facebook is a black box. We don’t know what they did with this study. The only thing we know is what is published in PNAS and that ain’t much.

Of course, if Facebook did make the content that users see more positive, should we simply be happy? What would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles? If Alice is happier when she is oblivious to Bob’s pain because Facebook chooses to keep that from her, are we willing to sacrifice Bob’s need for support and validation? This is the hard ethical choice at the crux of any decision about what content to show. And the reality is that Facebook is making these choices every day without oversight, transparency, or informed consent.

Algorithmic Manipulation of Attention and Emotions

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involves curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation or curatorial system prioritizes some content over other content. But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car-crash news stories. What do you think garners more traffic – a detailed analysis of what’s happening in Syria or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc. So an entire industry has emerged to produce crappy clickbait content under the banner of “news.”
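To make the pattern concrete, here is a deliberately minimal sketch of the click-driven curation described above. The post titles and click counts are invented, and real feed-ranking systems are far more elaborate; the point is only that ranking purely by engagement surfaces the salacious over the substantive.

```python
# Minimal sketch of popularity-based curation: show people whatever gets clicked most.
# Titles and click counts are invented for illustration.

posts = [
    {"title": "A detailed analysis of what's happening in Syria", "clicks": 1_200},
    {"title": "29 pictures of the cutest members of the animal kingdom", "clicks": 48_000},
    {"title": "Celebrity sex tape scandal rocks Hollywood", "clicks": 35_000},
]

# Rank purely by engagement, with no regard for substance or psychological toll.
feed = sorted(posts, key=lambda post: post["clicks"], reverse=True)

for post in feed:
    print(f"{post['clicks']:>6}  {post['title']}")
```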

Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media – and its clickbait brethren – to manipulate people’s emotions through the headlines they produce and the content they cover. And we generally accept that algorithmic curators are perfectly within their rights to prioritize that heavily clicked content over others, regardless of the psychological toll on individuals or society. What makes their practice any different? (Other than the fact that the media wouldn’t hold itself accountable for its own manipulative practices…)

Somehow, shrugging our shoulders and saying that we promoted content because it was popular is acceptable, because those actors never state outright that their intention is to manipulate your emotions so that you keep viewing their reporting and advertisements. And it’s also acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn whether they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable; admitting the intention and being transparent about the process is not.

But Research is Different!!

As this debate has unfolded, whenever people point out that these business practices are commonplace, folks respond by highlighting that research or science is different. What unfolds is a highbrow notion about the purity of research and its exclusive claim to ethical standards.

Do I think that we need to have a serious conversation about informed consent? Absolutely. Do I think that we need to have a serious conversation about the ethical decisions companies make with user data? Absolutely. But I do not believe that this conversation should apply only to that which is categorized as “research.” Nor do I believe that academe necessarily provides the gold standard.

Academe has many problems of its own that need to be accounted for. Researchers are incentivized to figure out how to get through IRBs rather than to think critically and collectively about the ethics of their research protocols. IRBs are incentivized to protect the university rather than to truly work out an ethical framework for these issues. Journals relish corporate datasets even when replicability is impossible. And for that matter, even in a post-paper era, journals have ridiculous word-count limits that demotivate researchers from spelling out all of the gory details of their methods. But there are also broader structural issues. Academe is so stupidly competitive, and peer review is so much of a game, that researchers have little incentive to share their studies-in-progress with their peers for true feedback and critique. And the status games of academe reward those who get access to private coffers of data while prompting those who don’t to chastise those who do. Meanwhile, there’s generally no incentive for companies to play nice with researchers unless it helps their prestige, hiring opportunities, or product.

IRBs are an abysmal mechanism for actually accounting for ethics in research. By and large, they’re structured to make certain that the university will not be liable. Ethics aren’t a checklist. Nor are they universal. Navigating ethics involves working through the benefits and costs of a research act and making a conscientious decision about how to move forward. Reasonable people differ on what they think is ethical. And disciplines have different standards for how to navigate ethics. But we’ve trained an entire generation of scholars that ethics equals “that which gets past the IRB,” which is a travesty. We need researchers to systematically think about how their practices alter the world in ways that benefit and harm people. We need ethics to not just be tacked on, but to be an integral part of how *everyone* thinks about what they study, build, and do.

There’s a lot of research that has serious consequences for the people who are part of the study. I think about the work that some of my colleagues do with child victims of sexual abuse. Getting children to talk about these awful experiences can be psychologically taxing. Yet better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences. But what warms my heart is how hard my colleagues work to help those children by providing counseling immediately following the interview (and, in some cases, follow-up counseling). They think long and hard about each question they ask, and how they go about asking it. And yet most IRBs wouldn’t let them do this work because no university wants to touch anything that involves kids and sexual abuse. Doing research involves trade-offs, and finding an ethical path forward requires effort and risk.

It’s far too easy to say “informed consent” and then not take responsibility for the costs of the research process, just as it’s far too easy to point to an IRB as proof of ethical thought. For any study that involves manipulation – common in economics, psychology, and other social science disciplines – people are only so informed about what they’re getting themselves into. You may think that you know what you’re consenting to, but do you? And then there are studies like discrimination audit studies in which we purposefully don’t inform people that they’re part of a study. So what are the right trade-offs? When is it OK to eschew consent altogether? What does it mean to truly be informed? When is being informed not enough? These aren’t easy questions and there aren’t easy answers.

I’m not necessarily saying that Facebook made the right trade-offs with this study, but I think that the scholarly claim that research is only acceptable with an IRB plus informed consent is disingenuous. Of course, a huge part of what’s at stake has to do with the fact that what counts as a contract legally is not the same as consent. Most people haven’t meaningfully consented to all of Facebook’s terms of service. They’ve agreed to a contract because they feel as though they have no other choice. And this really upsets people.

A Different Theory

The more I read people’s reactions to this study, the more that I’ve started to think that the outrage has nothing to do with the study at all. There is a growing amount of negative sentiment towards Facebook and other companies that collect and use data about people. In short, there’s anger at the practice of big data. This paper provided ammunition for people’s anger because it’s so hard to talk about harm in the abstract.

For better or worse, people imagine that Facebook is offered by a benevolent dictator, that the site is there to enable people to better connect with others. In some senses, this is true. But Facebook is also a company. And a public company for that matter. It has to find ways to become more profitable with each passing quarter. This means that it designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that operates, but they don’t really know, or even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think that they’re being manipulated. So when they find out what soylent green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I get the anger. I personally loathe Facebook and I have for a long time, even as I appreciate and study its importance in people’s lives. But on a personal level, I hate the fact that Facebook thinks it’s better than me at deciding which of my friends’ posts I should see. I hate that I have no meaningful mechanism of control on the site. And I am painfully aware of how my sporadic use of the site has confused their algorithms so much that what I see in my newsfeed is complete garbage. And I resent the fact that because I barely use the site, the only way that I could actually get a message out to friends is to pay to have it posted. My minimal use has made me an algorithmic pariah and if I weren’t technologically savvy enough to know better, I would feel as though I’ve been shunned by my friends rather than simply deemed unworthy by an algorithm. I also refuse to play the game to make myself look good before the altar of the algorithm. And every time I’m forced to deal with Facebook, I can’t help but resent its manipulations.

There’s also a lot that I dislike about the company and its practices. At the same time, I’m glad that they’ve started working with researchers and publishing their findings. I think that we need more transparency in the algorithmic work done by these kinds of systems, and their willingness to publish has been one of the few ways that we’ve gleaned insight into what’s going on. Of course, I also suspect that the angry reaction to this study will prompt them to clamp down on allowing researchers to be remotely public. My gut says that they will naively respond to this situation as though the practice of research is what makes them vulnerable, rather than their practices as a company as a whole. Beyond what this means for researchers, I’m concerned about what increased silence will mean for a public that has no clue what’s being done with its data and that will assume the absence of new reports of terrible misdeeds means Facebook has stopped manipulating data.

Information companies aren’t the same as pharmaceuticals. They don’t need to do clinical trials before they put a product on the market. They can psychologically manipulate their users all they want without being remotely public about exactly what they’re doing. And as the public, we can only guess what the black box is doing.

There’s a lot that needs to be reformed here. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether it’s couched as research or not. But it’s not as simple as saying that the lack of a corporate IRB or the lack of gold-standard “informed consent” means that a practice is unethical. Almost all of the manipulation these companies engage in occurs without either one of these. And it goes unchecked because it isn’t published or made public.

Ethical oversight isn’t easy and I don’t have a quick and dirty solution to how it should be implemented. But I do have a few ideas. For starters, I’d like to see any company that manipulates user data create an ethics board. Not an IRB that approves research studies, but an ethics board that has visibility into all proprietary algorithms that could affect users. For public companies, this could be done through the ethics committee of the Board of Directors. But rather than simply consisting of board members, I think that it should consist of scholars and users. I also think that there needs to be a mechanism for whistleblowing regarding ethics from within companies because I’ve found that many employees of companies like Facebook are quite concerned by certain algorithmic decisions, but feel as though there’s no path to responsibly report concerns without going fully public. This wouldn’t solve all of the problems, nor am I convinced that most companies would do so voluntarily, but it is certainly something to consider. More than anything, I want to see users have the ability to meaningfully influence what’s being done with their data and I’d love to see a way for their voices to be represented in these processes.

I’m glad that this study has prompted an intense debate among scholars and the public, but I fear that it’s turned into a simplistic attack on Facebook over this particular study rather than a nuanced debate over how we create meaningful ethical oversight in research and practice. The lines between research and practice are always blurred and information companies like Facebook make this increasingly salient. No one benefits by drawing lines in the sand. We need to address the problem more holistically. And, in the meantime, we need to hold companies accountable for how they manipulate people across the board, regardless of whether or not it’s couched as research. If we focus too much on this study, we’ll lose track of the broader issues at stake.

Why Snapchat is Valuable: It’s All About Attention

Most people who encounter a link to this post will never read beyond this paragraph. Heck, most people who encountered a link to this post didn’t click on the link to begin with. They simply saw the headline, took note that someone over 30 thinks that maybe Snapchat is important, and moved onto the next item in their Facebook/Twitter/RSS/you-name-it stream of media. And even if they did read it, I’ll never know it because they won’t comment or retweet or favorite this in any way.

We’ve all gotten used to wading in streams of social media content. Open up Instagram or Secret on your phone and you’ll flick through the posts in your stream, looking for a piece of content that’ll catch your eye. Maybe you don’t even bother looking at the raw stream on Twitter. You don’t have to, because countless curatorial services like Digg are available to tell you what was most important in your network. Facebook doesn’t even bother letting you see your raw stream; their algorithms determine what you get access to in the first place (unless, of course, someone pays to make sure their friends see their content).

Snapchat offers a different proposition. Everyone gets hung up on how the disappearance of images may (or may not) afford a new kind of privacy. Adults fret about how teens might be using this affordance to share inappropriate (read: sexy) pictures, projecting their own bad habits onto youth. But this isn’t what makes Snapchat utterly intriguing. What makes Snapchat matter has to do with how it treats attention.

When someone sends you an image/video via Snapchat, they choose how long you get to view the image/video. The underlying message is simple: You’ve got 7 seconds. PAY ATTENTION. And when people do choose to open a Snap, they actually stop what they’re doing and look.

In a digital world where everyone’s flicking through headshots, images, and text without processing any of it, Snapchat asks you to stand still and pay attention to the gift that someone in your network just gave you. As a result, I watch teens choose not to open a Snap the moment they get it because they want to wait for the moment when they can appreciate whatever is behind that closed door. And when they do, I watch them tune out everything else and just concentrate on what’s in front of them. Rather than serving as yet-another distraction, Snapchat invites focus.

Furthermore, in an ecosystem where people “favorite” or “like” content that is inherently unlikeable just to acknowledge that they’ve consumed it, Snapchat simply notifies the creator when the receiver opens it up. This is such a subtle but beautiful way of embedding recognition into the system. Sometimes, a direct response is necessary. Sometimes, we need nothing more than a simple nod, a way of signaling acknowledgement. And that’s precisely why the small little “opened” note will bring a smile to someone’s face even if the recipient never said a word.

Snapchat is a reminder that constraints have a social purpose, that there is beauty in simplicity, and that the ephemeral is valuable. There aren’t many services out there that fundamentally question the default logic of social media and, for that, I think that we all need to pay attention to and acknowledge Snapchat’s moves in this ecosystem.

(This post was originally published on LinkedIn. More comments can be found there.)

Keeping Teens ‘Private’ on Facebook Won’t Protect Them

(Originally written for TIME Magazine)

We’re afraid of and afraid for teenagers. And nothing brings out this dualism more than discussions of how and when teens should be allowed to participate in public life.

Last week, Facebook made changes to teens’ content-sharing options. They introduced the opportunity for those ages 13 to 17 to share their updates and images with everyone, not just with their friends. Until this change, teens could not post their content publicly even though adults could. When minors choose to make their content public, they are given a notice and a reminder to make it very clear that this material will be shared publicly. “Public” is never the default for teens; they must choose to make their content public, and they must affirm that this is what they intended at the point at which they publish.

Representatives of parenting organizations have responded to this change negatively, arguing that this puts children more at risk. And even though the Pew Internet & American Life Project has found that teens are quite attentive to their privacy, and many other popular sites allow teens to post publicly (e.g. Twitter, YouTube, Tumblr), privacy advocates are arguing that Facebook’s decision to give teens choices suggests that the company is undermining teens’ privacy.

But why should youth not be allowed to participate in public life? Do paternalistic, age-specific technology barriers really protect or benefit teens?

One of the most crucial aspects of coming of age is learning how to navigate public life. The teenage years are precisely when people transition from being a child to being an adult. There is no magic serum that teens can drink on their 18th birthday to immediately mature and understand the world around them. Instead, adolescents must be exposed to — and allowed to participate in — public life while surrounded by adults who can help them navigate complex situations with grace. They must learn to be a part of society, and to do so, they must be allowed to participate.

Most teens no longer see Facebook as a private place. They befriend anyone they’ve ever met, from summer-camp pals to coaches at universities they wish to attend. Yet because Facebook doesn’t allow youth to contribute to public discourse through the site, there’s an assumption that the site is more private than it is. Facebook’s decision to allow teens to participate in public isn’t about suddenly exposing youth; it’s about giving them an option to treat the site as being as public as it often is in practice.

Rather than trying to protect teens from all fears and risks that we can imagine, let’s instead imagine ways of integrating them constructively into public life. The key to doing so is not to create technologies that reinforce limitations but to provide teens and parents with the mechanisms and information needed to make healthy decisions. Some young people may be ready to start navigating broad audiences at 13; others are not ready until they are much older. But it should not be up to technology companies to determine when teens are old enough to have their voices heard publicly. Parents should be allowed to work with their children to help them navigate public spaces as they see fit. And all of us should be working hard to inform our younger citizens about the responsibilities and challenges of being a part of public life. I commend Facebook for giving teens the option and working hard to inform them of the significance of their choices.


eyes on the street or creepy surveillance?

This summer, with NSA scandal after NSA scandal, the public has (thankfully) started to wake up to issues of privacy, surveillance, and monitoring. We are living in a data world, and there are serious questions to ask and contend with. But part of what makes this data world messy is that it’s not as easy as saying that all monitoring is always bad. Over the last week, I’ve been asked by a bunch of folks to comment on the report that a California school district hired an online monitoring firm to watch its students. This is a great example of just how complicated these situations can be.

The media coverage focuses on how the posts that they are monitoring are public, suggesting that this excuses their actions because “no privacy is violated.” We should all know by now that this is a terrible justification. Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences nor does it mean that the onlooker understands what they see. (Alice Marwick and I discuss youth privacy dynamics in detail in “Social Privacy in Networked Publics”.) But I want to caution against jumping to the opposite conclusion because these cases aren’t as simple as they might seem.

Consider Tess’ story. In 2007, she and her friend killed her mother. The media reported it as “girl with MySpace kills mother,” so I decided to investigate the case. For a year and a half, she had documented on a public MySpace page her struggles with her mother’s alcoholism and abuse, her attempts to run away, and her efforts to seek help. When I reached out to her friends after she was arrested, I learned that they had reported their concerns to the school but no one did anything. Later, I learned that the school didn’t investigate because MySpace was blocked on campus, so they couldn’t see what she had posted. And although the school had notified social services out of concern, there wasn’t enough evidence to move forward. What became clear in this incident – and many others that I tracked – is that there are plenty of youth crying out for help online on a daily basis. Youth who could really benefit from the fact that their material is visible and someone is paying attention.

Many youth cry out for help through social media. Publicly, often very publicly. Sometimes for an intended audience. Sometimes as a call to the wind for anyone who might be paying attention. I’ve read far too many suicide notes and abuse stories to believe that privacy is the only viable frame here. One of the most heartbreaking came from a girl who was commercially sexually exploited by her middle-class father. She had gone to her school, which had helped her go to the police; the police refused to help. She published every detail on Twitter about exactly what he had done to her and all of the people who had failed to help her. The next day she died by suicide. In my research, I’ve run across too many troubled youth to count. I’ve spent many a long night trying to help teens I encounter connect with services that can help them.

So here’s the question that underlies any discussion of monitoring: how do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies that we have to protect them rather than focusing on punishing them?  We shouldn’t ignore youth who are using social media to voice their pain in the hopes that someone who cares might stumble across their pleas.

Urban theorist Jane Jacobs used to argue that the safest societies are those where there are “eyes on the street.” What she meant by this was that healthy communities looked out for each other, were attentive to when others were hurting, and were generally present when things went haywire. How do we create eyes on the digital street? How do we do so in a way that’s not creepy?  When is proactive monitoring valuable for making a difference in teens’ lives?  How do we make sure that these same tools aren’t abused for more malicious purposes?

What matters is who is doing the looking and for what purposes. When the looking is done by police, the frame is punitive. But when the looking is done by caring, concerned, compassionate people – even authority figures like social workers – the outcome can be quite different. However well-intentioned, law enforcement’s role is to uphold the law, and people perceive its presence as oppressive even when it’s trying to help. And, sadly, when law enforcement is involved, it’s all too likely that someone will find something wrong. And then we end up with the kind of surveillance that punishes.

If there’s infrastructure put into place for people to look out for youth who are in deep trouble, I’m all for it. But the intention behind the looking matters most. When you’re looking for kids who are in trouble in order to help them, you look for cries for help that are public. If you’re looking to punish, you’ll misinterpret content, publicly punish what was intended to be private, and otherwise abuse youth in a new way.

Unfortunately, what worries me is that systems put into place to help often get used to punish. There’s a slippery slope, even when the designers and implementers never intended for the systems to be used that way. But once they’re there….

So here’s my question to you. How can we leverage technology to provide an additional safety net for youth who are struggling without causing undue harm? We need to create a society where people are willing to check in on each other without abusing the power of visibility. We need more eyes on the street in the Jacobs-ian sense, not in the surveillance-state sense. Finding this balance won’t be easy, but I think it behooves us not to jump to extremes. So what’s the path forward?

(I discuss this issue in more detail in my upcoming book “It’s Complicated: The Social Lives of Networked Teens.”  You can pre-order the book now!)

Challenges for Health in a Networked Society

In February, I had the great fortune to visit the Robert Wood Johnson Foundation as part of their “What’s Next Health” series. I gave a talk raising a series of critical questions for those working on health issues. The folks at RWJF have posted my talk, along with an infographic of some of the challenges I see coming down the pipeline.

They also asked me to write a brief blog post introducing some of my ideas, based on one of the questions that I asked in the lecture. I’ve reposted it here, but if this interests you, you should really go check out the talk over at RWJF’s page.

….

RWJF’s What’s Next Health: Who Do We Trust?

We live in a society that is more networked than our grandparents could ever have imagined. More people have information at their fingertips than ever before. It’s easy to see all of this potential and celebrate the awe-some power of the internet. But as we think about the intersection of technology and society, there are so many open questions and challenging conundrums without clear answers. One of the most pressing issues has to do with trust, particularly as people turn to the internet and social media as a source of health information. We are watching shifts in how people acquire information. But who do they trust? And is trust shifting?

Consider the recent American presidential election, which was snarkily referred to as “post-factual.” The presidential candidates spoke past one another, refusing to be pinned down. News agencies went into overdrive to fact-check each statement made by each candidate, but the process became so absurd that folks mostly just gave up trying to get clarity. Instead, they focused on more fleeting issues, like whether or not they trusted the candidates.

In a world where information is flowing fast and furious, many experience aspects of this dynamic all the time. People turn to their friends for information because they do not trust what’s available online. I’ve interviewed teenagers who, thanks to conversations with their peers and abstinence-only education, genuinely believe that if they didn’t get pregnant the last time they had sex, they won’t get pregnant this time. There’s so much reproductive health information available online, but youth turn to their friends for advice because they trust those “facts” more.

The internet introduces challenges of credibility, but it also highlights the consequences of living in a world of information overload, where the issue isn’t whether or not a fact is out there and available, but how much effort a person must go through to make sense of so much information. Why should someone trust a source on the internet if they don’t have the tools to assess the content’s credibility? It’s often easier to turn to friends or ask acquaintances on Facebook for suggestions. People use the “lazy web” because asking friends is more likely to yield a quick, sensible answer than trying to sort out what’s available through Google.

As we look to the future, organizations that focus on the big issues — like the Robert Wood Johnson Foundation — need to think about what it means to create informed people in a digital era. How do we spread accurate information through networks? How do we get people to trust abstract entities that have no personal role in their lives?

Questions around the internet and trust are important: what people know and believe will drive what they do, and this will shape their health.

The beauty of this moment, with so many open questions and challenges, is that we are in a position to help shape the future by delicately navigating these complex issues. Thus, we must be asking ourselves: How can we collectively account for different stakeholders and empower people to make the world a better place?

thoughts on Pew’s latest report: notable findings on race and privacy

Yesterday, the Pew Internet and American Life Project (in collaboration with Berkman) unveiled a brilliant report about “Teens, Social Media, and Privacy.” As a researcher who’s been in the trenches on these topics for a long time now, none of their findings surprised me, but it still gives me absolute delight when our data is so beautifully in synch. I want to quickly discuss two important issues that this report raises.

Race is a factor in explaining differences in teen social media use.

Pew provides important measures on shifts in social media, including the continued saturation of Facebook, the decline of MySpace, and the rise of other social media sites (e.g., Twitter, Instagram). When they drill down on race, they find notable differences in adoption. For example, they highlight data that is the source of “black Twitter” narratives: 39% of African-American teens use Twitter compared to 23% of white teens.

Most of the report is dedicated to the increase in teen sharing, but once again, we start to see some race differences. For example, 95% of white social media-using teens share their “real name” on at least one service while 77% of African-American teens do. And while 39% of African-American teens on social media say that they post fake information, only 21% of white teens say they do this.

Teens’ practices on social media also differ by race. For example, on Facebook, 48% of African-American teens befriend celebrities, athletes, or musicians while only 25% of white teen users do.

While media and policy discussions of teens tend to narrate them as a homogeneous group, there are serious and significant differences in practices and attitudes among teens. Race is not the only factor, but it is a factor. And Pew’s data on the differences across race highlight this.

Of course, race isn’t actually what’s driving what we see as race differences. The world in which teens live is segregated and shaped by race. Teens are more likely to interact with people of the same race and their norms, practices, and values are shaped by the people around them. So what we’re actually seeing is a manifestation of network effects. And the differences in the Pew report point to black youth’s increased interest in being a part of public life, their heightened distrust of those who hold power over them, and their notable appreciation for pop culture. These differences are by no means new, but what we’re seeing is that social media is reflecting back at us cultural differences shaped by race that are pervasive across America.

Teens are sharing a lot of content, but they’re also quite savvy.

Pew’s report shows an increase in teens’ willingness to share all sorts of demographic, contact, and location data. This is precisely the data that makes privacy advocates anxious. At the same time, their data show that teens are well aware of privacy settings and have changed the defaults, even if they don’t choose to manage the accessibility of each piece of content they share. They’re also deleting friends (74%), deleting previous posts (59%), blocking people (58%), deleting comments (53%), detagging themselves (45%), and providing fake info (26%).

My favorite finding of Pew’s is that 58% of teens cloak their messages through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%). This is a practice that I’ve seen rise significantly since I first started doing work on teens’ engagement with social media. It’s the source of what Alice Marwick and I describe as “social steganography” in our paper on teen privacy practices.

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them – parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth, who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s showing that teens have, on average, 300 friends on Facebook but only 79 on Twitter.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those they know are surveilling them. This practice is still only emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Anyhow, I have much more to say about Pew’s awesome report, but I wanted to provide a few thoughts and invite y’all to read it. If there is data that you’re curious about or would love me to analyze more explicitly, leave a comment or drop me a note. I’m happy to dive in more deeply on their findings.

Addressing Human Trafficking: Guidelines for Technological Interventions

Two years ago, when I started working on issues related to human trafficking and technology, I was frustrated by how few people recognized the potential of technology to help address the commercial sexual exploitation of children. With the help of a few colleagues at Microsoft Research, I crafted a framework document to think through the intersection of technology and trafficking. After talking with Mark Latonero at USC (who has been writing brilliant reports on technology and human trafficking), I teamed up with folks at MSR Connections and Microsoft’s Digital Crimes Unit to help fund research in this space. Over the last year, I’ve been delighted to watch a rich scholarly community emerge that takes seriously the importance of data for understanding and intervening in human trafficking issues that involve technology.

Meanwhile, to my delight, technologists have started to recognize that they can develop innovative systems to help address human trafficking. NGOs have started working with computer scientists, companies have started working with law enforcement, and the White House has started bringing together technologists, domain experts, and policy makers to imagine how technology can be used to combat human trafficking. The potential of these initiatives tickles me pink.

Watching this unfold, one thing I struggle with is that there’s often a disconnect between what researchers are learning and what the public thinks is happening vis-a-vis the commercial sexual exploitation of children (CSEC). On too many occasions, I’ve watched well-intentioned technologists approach the space with a naiveté that comes from only knowing about human trafficking through media portrayals. While the portraits that receive widespread attention are important for motivating people to act, understanding the nuance and pitfalls of the space is critical for building interventions that will actually make a difference.

To bridge the gap between technologists and researchers, I worked with a group of phenomenal researchers to produce a simple 4-page fact sheet intended to provide a very basic primer on issues in human trafficking and CSEC that technologists need to know before they build interventions:

How to Responsibly Create Technological Interventions to Address the Domestic Sex Trafficking of Minors

Some of the issues we address include:

  1. Youth often do not self-identify as victims.
  2. “Survival sex” is one aspect of CSEC.
  3. Previous sexual abuse, homelessness, family violence, and foster care may influence youth’s risk of exploitation.
  4. Arresting victims undermines efforts to combat CSEC.
  5. Technologies should help disrupt criminal networks.
  6. Post-identification support should be in place before identification interventions are implemented.
  7. Evaluation, assessment, and accountability are critical for any intervention.
  8. Efforts need to be evidence-based.
  9. The cleanliness of data matters.
  10. Civil liberties are important considerations.

This high-level overview is intended to shed light on some of the most salient misconceptions and provide some key insights that might be useful for those who want to make a difference. By no means does it cover everything that experts know, but it provides some key touchstones that may be useful. It is limited to the issues that are most important for technologists, but those who are working with technologists may also find it to be valuable.

As researchers dedicated to addressing human trafficking and the commercial sexual exploitation of children, we want to make sure that the passion that innovative technologists are bringing to the table is directed in the most helpful ways possible. We hope that what we know can be of use to those who are also looking to end exploitation.

(Flickr image by Martin Gommel)

Is Facebook Destroying the American College Experience?

Sitting with a group of graduating high school seniors last summer, I watched the conversation turn to college roommates. Although headed off to different schools, they had a similar experience of learning their roommate assignment and immediately turning to Facebook to investigate that person. Some had already begun developing deep, mediated friendships while others had already asked for roommate transfers. Beyond roommates, all had used Facebook to find other newly minted freshmen, building relationships long before they set foot on campus.

At first blush, this seems like a win for students. Going off to college can be a scary proposition, full of uncertainty, particularly about social matters. Why not get a head start on building friendships from the safety of your parents’ house?

What most students (and parents) fail to realize is that the success of the American college system has less to do with the quality of the formal education than with the social engineering project that is quietly enacted behind the scenes each year. Roommate assignments are structured to connect incoming students with students of different backgrounds. Dorms are organized to mix the cultural diversity that exists on campus. Early campus activities are designed to help people encounter others whose approach to the world is different from theirs. This process has a lot of value because it means that students develop an appreciation for difference and build meaningful relationships that will play a significant role for years to come. The friendships and connections that form on campuses shape future job opportunities and help create communities that change the future. We hear about famous college roommates as exemplars. Heck, Facebook itself was created by a group of Harvard roommates. But the more basic story is how people learn to appreciate difference, often by suffering through the challenges of entering college together.

When pre-frosh turn to Facebook before arriving on campus, they do so to find other people who share their interests, values, and background. As such, they begin a self-segregation process that results in increased “homophily” on campuses. Homophily is a sociological concept capturing the notion that birds of a feather stick together. In other words, teens inadvertently undermine the collegiate social engineering project of creating diverse connections through common experiences. Furthermore, because Facebook enables them to keep in touch with friends from high school, college freshmen spend extensive time maintaining old ties rather than building new ones. They lose out on one of the most glorious benefits of the American collegiate system: the ability to diversify their networks.

Facebook is not itself the problem. The issue stems from how youth use Facebook and from the desire that many youth have to focus on building connections to people who think like they do. Building friendships with people who have different political, cultural, or religious beliefs is hard. Getting to know people whose life stories seem foreign is hard. And yet such relationship-building across lines of difference can also be tremendously transformative.

To complicate matters more, parents and high school teachers have beaten into today’s teens’ heads that internet strangers are dangerous. As such, even when teens turn to Facebook or other services to find future college friends, they are skittish about people who make them uncomfortable because they’ve been socialized into being wary of anyone they talk with. The fear-mongering around strangers plays a subtle but powerful role in discouraging teens from doing the disorienting work of getting to know someone truly unfamiliar.

It’s high time we recognize that college isn’t just about formalized learning and skills training, but also a socialization process with significant implications for the future. The social networks that youth build in college have long-lasting implications for youth’s future prospects. One of the reasons that the American college experience is so valuable is because it often produces diverse networks that enable future opportunities. This is also precisely what makes elite colleges elite; the networks that are built through these institutions end up shaping many aspects of power. When less privileged youth get to know children of powerful families, new pathways of opportunity and tolerance are created. But when youth use Facebook to maintain existing insular networks, the potential for increased structural inequity is great.

Photo by Daniel Borman

This post was originally written for LinkedIn. Visit there for additional comments.

Networked Norms: How Tech Startups and Teen Practices Challenge Organizational Boundaries

At the ASTD TechKnowledge conference, I was asked to reflect on networked learning and how tomorrow’s workers will challenge today’s organizations. I did some reflecting on this topic and decided to draw on two strands of my research over the last decade – startup culture and youth culture – to talk about how those outside of traditional organizational culture are calling into question the norms of bounded corporate enterprises. The piece is more of a provocation than a recipe for going forward, but you might enjoy the crib of my talk nonetheless:

“Networked Norms: How Tech Startups and Teen Practices Challenge Organizational Boundaries”

(Image courtesy of victuallers2)

 

MSR Social Media Collective 2013 Summer Internships

** APPLICATION DEADLINE: JANUARY 30, 2013 ** 

Microsoft Research New England (MSRNE) is looking for PhD interns to join the social media collective for Summer 2013. For these positions, we are looking primarily for social science PhD students (including communication, sociology, anthropology, media studies, information studies, etc.). The Social Media Collective is a collection of scholars at MSRNE who focus on socio-technical questions, primarily from a social science perspective. We are not an applied program; rather, we work on critical research questions that are important to the future of social science scholarship.

MSRNE internships are 12-week paid internships in Cambridge, Massachusetts. PhD interns are expected to be on-site for the duration of their internship.

PhD interns at MSRNE are expected to devise and execute a research project during their internships. The expected outcome of an internship at MSRNE is a publishable scholarly paper for an academic journal or conference of the intern’s choosing. The goal of the internship is to help the intern advance their own career; interns are strongly encouraged to work towards a publication outcome that will help them on the academic job market. Interns are also expected to collaborate with full-time researchers and visitors, give short presentations, and contribute to the life of the community. While this is not an applied program, MSRNE encourages interdisciplinary collaboration with computer scientists, economists, and mathematicians. There are also opportunities to engage with product groups at Microsoft, although this is not a requirement.

We are looking for applicants to focus their proposals on one of the following eight areas:

  1. Big data, the politics of algorithms, and/or computational culture
  2. Entertainment and news industries and audiences
  3. Digital inequalities
  4. Mobile media and social movement/civic engagement
  5. Affective, immaterial, and other theoretical frameworks related to digital labor
  6. Urban informatics and critical geography
  7. Personal relationships and digital media
  8. Critical accounts of crisis informatics and disasters

Applicants should have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time.) While this internship opportunity is not strictly limited to social scientists, preference will be given to social scientists and humanists making socio-technical inquiries. (Note: While other branches of Microsoft Research focus primarily on traditional computer science research, this group does no development-driven research and is not looking for people who are focused solely on building systems. We welcome social scientists with technical skills and strongly encourage social scientists to collaborate with computer scientists at MSRNE.) Preference will be given to intern candidates who work to make public and/or policy interventions with their research. Interns will benefit most from this opportunity if there are natural opportunities for collaboration with other researchers or visitors currently working at MSRNE.

Applicants from universities outside of the United States are welcome to apply.

PEOPLE AT MSRNE SOCIAL MEDIA COLLECTIVE

The Social Media Collective comprises researchers, postdocs, and visitors.

Previous interns in the collective have included Amelia Abreu (UWashington, information), Jed Brubaker (UC-Irvine, informatics), Scott Golder (Cornell, sociology), Germaine Halegoua (U. Wisconsin, communications), Airi Lampinen (HIIT, information), Jessica Lingel (Rutgers, library & info science), Alice Marwick (NYU, media culture communication), Laura Noren (NYU, sociology), Jaroslav Svelch (Charles University, media studies), Shawn Walker (UWashington, information), Omar Wasow (Harvard, African-American studies), and Sarita Yardi (GeorgiaTech, HCI).

If you are curious to know more about MSRNE, I suspect that many former interns would be happy to tell you about their experiences here. Previous interns are especially knowledgeable about how this process works.

For more information about the Social Media Collective, visit our blog: https://socialmediacollective.org/

APPLICATION PROCESS

To apply for a PhD internship with the social media collective:

1. Fill out the online application form (https://research.microsoft.com/apps/tools/jobs/intern.aspx). Make sure to indicate that you prefer Microsoft Research New England and “social media” or “social computing.” You will need to list two recommenders through this form. Make sure your recommenders respond to the request for letters so that their letters are also submitted by the deadline.

2. Send an email to msrnejob -at- microsoft-dot-com with the subject “SMC PhD Intern Application: ” that includes the following five things:

  1. A brief description of your dissertation project.
  2. An academic article you have written (published or unpublished) that shows your writing skills.
  3. A copy of your CV.
  4. A pointer to your website or other online presence (if available).
  5. A short description of 1-2 projects that you propose to do while an intern at MSRNE, independently and/or in collaboration with current SMC researchers. This project must be distinct from the research for your dissertation.

We will begin considering internship applications on January 30 and will not consider late applications.

PREVIOUS INTERN TESTIMONIALS

“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.”
— Laura Noren, Sociology, New York University

“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.”
— Jessica Lingel, Library and Information Science, Rutgers University

“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.”
— Germaine Halegoua, Communication Arts, University of Wisconsin-Madison

“The summer I spent at Microsoft Research was one of the highlights of my time in grad school. It helped me expand my research in new directions and connect with world-class scholars. As someone with a technical bent, this internship was an amazing opportunity to meet and learn from really smart humanities and social science researchers. Finally, Microsoft Research as an organization has the best of both worlds: the academic freedom and intellectual stimulation of a university with the perks of industry.”
— Andrés Monroy-Hernández, Media, Arts and Sciences, MIT