What the GPS Device on Antoine Jones’ Jeep Cherokee Means for Internet Privacy

Yesterday the Supreme Court ruled on United States v. Jones [PDF of court opinion], a case in which the FBI/DC police placed a GPS tracking device on the Jeep Cherokee of Antoine Jones, a club owner in DC who was suspected of dealing cocaine. The cops tracked Mr. Jones for 28 days, and, based on that evidence (as well as a CCTV camera pointing at the club door, a pen register (*) and a wiretap on Jones’s cellphone), charged him with conspiracy and possession with intent. Jones appealed, saying that the GPS data should be inadmissible since it was collected without a warrant.

The Supreme Court upheld the ruling of the DC Court of Appeals in a unanimous 9-0 decision, holding that (a) this was a search, and (b) a car is a person’s property, or “effects”, and thus affixing a GPS device to the undercarriage of the car violates the Fourth Amendment. From the ruling:

It is important to be clear about what occurred in this case: The Government physically occupied private property for the purpose of obtaining information. We have no doubt that such a physical intrusion would have been considered a “search” within the meaning of the Fourth Amendment when it was adopted.

What’s interesting here is that there was a 5-4 split on why the Justices ruled as they did. Justice Sotomayor, writing a concurring opinion, wrote, “When the Government physically invades personal property to gather information, a search occurs. The reaffirmation of that principle suffices to decide this case.” Since the government had invaded property, the Justices did not need to evaluate any of the other principles that this case brings up.

And there are many principles that this case brings up. Sotomayor talks about many of them: what about electronic surveillance if no property was trespassed upon? What about the chilling effects of potential long-term electronic surveillance? What about the fact that GPS monitoring gives far more specific information, and is far easier and cheaper, than traditional visual surveillance? What about the fact that this data can be stored and mined later? She writes:

I would take these attributes of GPS monitoring into account when considering the existence of a reasonable societal expectation of privacy in the sum of one’s public movements. I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on. I do not regard as dispositive the fact that the Government might obtain the fruits of GPS monitoring through lawful conventional surveillance techniques… I would also consider the appropriateness of entrusting to the Executive, in the absence of any oversight from a coordinate branch, a tool so amenable to misuse, especially in light of the Fourth Amendment’s goal to curb arbitrary exercises of police power and to prevent “a too permeating police surveillance.”

But most awesomely, Sotomayor then goes on to critique the third party doctrine. This says that if you disclose information to a third party (whether that’s your sister, Google, or Ma Bell), you have no reasonable expectation of privacy governing that information, and the government has a right to access it. As Sotomayor writes, “This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks” like checking email, signing up for Facebook, or buying a pair of shoes online.

In a concurring opinion, four other Justices agreed with the majority ruling, but not with the use of the property doctrine to decide it. Instead, Alito, Ginsburg, Breyer and Kagan seem suspicious of electronic surveillance overall. In Alito’s concurring opinion, he mentions GPS, road CCTV cameras, electronic toll collectors, and, most interestingly, cell phone location data as potential invasions of privacy. He laments that Congress and state governments have done little or nothing to regulate the use of this data by law enforcement.

I think the SCOTUS is itching for a fight on digital privacy. I’m looking forward to seeing what happens with similar cases in the future.

* Don’t get me started on pen registers. They track what numbers you call, and have the technical capability to track where your cellphone is and even your text messages. Yet the standard for ordering one is much lower than, say, wiretapping; the potential surveillee just has to be part of an ‘ongoing criminal investigation.’ Even more worryingly, Chris Soghoian has documented that law enforcement makes tens of thousands of requests to phone companies for cell phone location information. Requests to internet companies for location information are not even subject to the pen register standard; all they need is a subpoena.

How Parents Normalized Teen Password Sharing

In 2005, I started asking teenagers about their password habits. My original set of questions focused on teens’ attitudes about giving their password to their parents, but I quickly became enamored with teens’ stories of sharing passwords with friends and significant others. So I was ecstatic when Pew Internet & American Life Project decided to survey teens about their password sharing habits. Pew found that one third of online 12-17 year olds share their password with a friend or significant other and that almost half of those 14-17 do. I love when data gets reinforced.

Last week, Matt Richtel at the New York Times did a fantastic job of covering one aspect of why teens share passwords: as a show of affection. Indeed, I have lots of fun data that supports Richtel’s narrative — and complicates it. Consider Meixing’s explanation for why she shares her password with her boyfriend:

Meixing, 17, TN: It made me feel safer just because someone was there to help me out and stuff. It made me feel more connected and less lonely. Because I feel like Facebook sometimes it kind of like a lonely sport, I feel, because you’re kind of sitting there and you’re looking at people by yourself. But if someone else knows your password and stuff it just feels better.

For Meixing, sharing her password with her boyfriend is a way of being connected. But it’s precisely these kinds of narratives that have prompted all sorts of horror among adults over the last week since that NYTimes article came out. I can’t count the number of people who have gasped “How could they!?!” at me. For this reason, I feel the need to pick up on an issue that the NYTimes left out.

The idea of teens sharing passwords didn’t come out of thin air. In fact, it was normalized by adults. And not just any adult. This practice is the product of parental online safety norms. In most households, it’s quite common for young children to give their parents their passwords. With elementary and middle school youth, this is often a practical matter: children lose their passwords pretty quickly. Furthermore, most parents reasonably believe that young children should be supervised online. As tweens turn into teens, the narrative shifts. Some parents continue to require passwords be forked over, using explanations like “because I’m your mother.” But many parents use the language of “trust” to explain why teens should share their passwords with them.

There are different ways that parents address the password issue, but they almost always build on the narrative of trust. (Tangent: My favorite strategy is when parents ask children to put passwords into a piggy bank that must be broken for the paper with the password to be retrieved. Such parents often explain that they don’t want to access their teens’ accounts, but they want to have the ability to do so “in case of emergency.” A piggy bank allows a social contract to take a physical form.)

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

There’s another thread here that’s important. Think back to the days in which you had a locker. If you were anything like me and my friends, you gave out your locker combination to your friends and significant others. There were varied reasons for doing so. You wanted your friends to pick up a book for you when you left early because you were sick. You were involved in a club or team where locker decorating was common. You were hoping that your significant other would leave something special for you. Or – to be completely and inappropriately honest – you left alcohol in your locker and your friends stopped by for a swig. (One of my close friends was expelled for that one.) We shared our locker combinations because they served all sorts of social purposes, from the practical to the risqué.

How are Facebook passwords significantly different from locker combos? Truth be told, for most teenagers, they’re not. Teens share their passwords so that their friends can check their messages for them when they can’t get access to a computer. They share their passwords so their friends can post the cute photos. And they share their passwords because it’s a way of signaling an intimate relationship. Just like with locker combos.

Can password sharing be abused? Of course. I’ve heard countless stories of friends “punking” one another by leveraging password access. And I’ve witnessed all sorts of teen relationship violence where mandatory password sharing is a form of surveillance and abuse. But, for most teens, password sharing is as risky as locker combo sharing. This is why, even though 1/3 of all teens share their passwords, we only hear of scattered horror stories.

I know that this practice strikes adults as seriously peculiar, but it irks me when adults get all judgmental on this teen practice, as though it’s “proof” that teens can’t properly judge how trustworthy a relationship is. First, it’s through these kinds of situations where they learn. Second, adults are dreadful at judging their own relationships (see: divorce rate) so I don’t have a lot of patience for the high and mighty approach. Third, I’m much happier with teens sharing passwords as a form of intimacy than sharing many other things.

There’s no reason to be aghast at teen password sharing. Richtel’s story is dead-on. It’s pretty darn pervasive. But it also makes complete sense given how notions of trust have been constructed for many teens.

(Image Credit: Darwin Bell)

Debating Privacy in a Networked World for the WSJ

Earlier this week, the Wall Street Journal posted excerpts from a debate between me, Stewart Baker, Jeff Jarvis, and Chris Soghoian on privacy. In preparation for the piece, they had us respond to a series of questions. Jeff posted the full text of his responses here. Now it’s my turn. Here are the questions that I was asked and my responses.

Part 1:

Question: How much should people care about privacy? (400 words)

People should – and do – care deeply about privacy. But privacy is not simply the control of information. Rather, privacy is the ability to assert control over a social situation. This requires that people have agency in their environment and that they are able to understand any given social situation so as to adjust how they present themselves and determine what information they share. Privacy violations occur when people have their agency undermined or lack relevant information in a social setting that’s needed to act or adjust accordingly. Privacy is not protected by complex privacy settings that create what Alessandro Acquisti calls “the illusion of control.” Rather, it’s protected when people are able to fully understand the social environment in which they are operating and have the protections necessary to maintain agency.

Social media has prompted a radical shift. We’ve moved from a world that is “private-by-default, public-through-effort” to one that is “public-by-default, private-with-effort.” Most of our conversations in a face-to-face setting are too mundane for anyone to bother recording and publicizing. They stay relatively private simply because there’s no need or desire to make them public. Online, social technologies encourage broad sharing and thus, participating on sites like Facebook or Twitter means sharing to large audiences. When people interact casually online, they share the mundane. They aren’t publicizing; they’re socializing. While socializing, people have no interest in going through the efforts required by digital technologies to make their pithy conversations more private. When things truly matter, they leverage complex social and technical strategies to maintain privacy.

The strategies that people use to assert privacy in social media are diverse and complex, but the most notable approach involves limiting access to meaning while making content publicly accessible. I’m in awe of the countless teens I’ve met who use song lyrics, pronouns, and community references to encode meaning into publicly accessible content. If you don’t know who the Lions are or don’t know what happened Friday night or don’t know why a reference to Rihanna’s latest hit might be funny, you can’t interpret the meaning of the message. This is privacy in action.

The reason that we must care about privacy, especially in a democracy, is that it’s about human agency. To systematically undermine people’s privacy – or allow others to do so – is to deprive people of freedom and liberty.

Part 2:

Question: What is the harm in not being able to control our social contexts? Do we suffer because we have to develop codes to communicate on social networks? Or are we forced offline because of our inability to develop codes? (200 words)

Social situations are not one-size-fits-all. How a man acts with his toddler son is different from how he interacts with his business partner, not because he’s trying to hide something but because what’s appropriate in each situation differs. Rolling on the floor might provoke a giggle from his toddler, but it would be strange behavior in a business meeting. When contexts collide, people must choose what’s appropriate. Often, they present themselves in a way that’s as inoffensive to as many people as possible (and particularly those with high social status), which often makes for a bored and irritable toddler.

Social media is one big context collapse, but it’s not fun to behave as though being online is a perpetual job interview. Thus, many people lower their guards and try to signal what context they want to be in, hoping others will follow suit. When that’s not enough, they encode their messages to be only relevant to a narrower audience. This is neither good, nor bad; it’s simply how people are learning to manage their lives in a networked world where they cannot assume strict boundaries between distinct contexts. Lacking spatial separation, people construct context through language and interaction.

Part 3:

Question: Jeff and Stewart seem to be arguing that privacy advocates have too much power and that they should be reined in for the good of society. What do you think of that view? Is the status quo protecting privacy enough? So we need more laws? What kind of laws? Or different social norms? In particular, I would like to hear what you think should be done to prevent turning the Internet into one long job interview, as you described. If you had one or two examples of types of usages that you think should be limited, that would be perfect. (300 words)

When it comes to creating a society in which both privacy and public life can flourish, there are no easy answers. Laws can protect, but they can also hinder. Technologies can empower, but they can also expose. I respect my esteemed colleagues’ views, but I am also concerned about what it means to have a conversation among experts. Decisions about privacy – and public life – in a networked age are being made by people who have immense social, political, and/or economic power, often at the expense of those who are less privileged. We must engender a public conversation about these issues rather than leaving them in the hands of experts.

There are significant pros and cons to all social, legal, economic, and technological decisions. Balancing individual desires with the goals of the collective is daunting. Mediated life forces us to face serious compromises and hard choices. Privacy is a value that’s dear to many people, precisely because openness is a privilege. Systems must respect privacy, but there’s no easy mechanism to inscribe this value into code or law. Thus, we must publicly grapple with these issues and put pressure on decision-makers and systems-builders to remember that their choices have consequences.

We must also switch the conversation from being about one of data collection to being one about data usage. This involves drawing on the language of abuse, violence, and victimization to think about what happens when people’s willingness to share is twisted to do them harm. Just as we have models for differentiating sex between consenting partners and rape, so too must we construct models that separate usage that’s empowering and that which strips people of their freedoms and opportunities. For example, refusing health insurance based on search queries may make economic sense, but the social costs are far too great. Focusing on usage requires understanding who is doing what to whom and for what purposes. Limiting data collection may be structurally easier, but it doesn’t address the tensions between privacy and public-ness with which people are struggling.

Part 4:

Question: Jeff makes the point that we’re overemphasizing privacy at the expense of all the public benefits delivered by new online services. What do you think of that view? Do you think privacy is being sufficiently protected?

I think that positioning privacy and public-ness in opposition is a false dichotomy. People want privacy *and* they want to be able to participate in public. This is why I think it’s important to emphasize that privacy is not about controlling information, but about having agency and the ability to control a social situation. People want to share and they gain a lot from sharing. But that’s different than saying that people want to be exposed by others. Agency matters.

From my perspective, protecting privacy is about making certain that people have the agency they need to make informed decisions about how they engage in public. I do not think that we’ve done enough here. That said, I am opposed to approaches that protect people by disempowering them or by taking away their agency. I want to see approaches that force powerful entities to be transparent about their data practices. And I want to see approaches that put restrictions on how data can be used to harm people. For example, people should have the ability to share their medical experiences without being afraid of losing their health insurance. The answer is not to silence consumers from sharing their experiences, but rather to limit what insurers can do with information that they can access.

Question: Jeff says that young people are “likely the worst-served sector of society online”? What do you think of that? Do youth-targeted privacy safeguards prevent them from taking advantage of the benefits of the online world? Do the young have special privacy issues, and do they deserve special protections?

I _completely_ agree with Jeff on this point. In our efforts to protect youth, we often exclude them from public life. Nowhere is this more visible than with respect to the Children’s Online Privacy Protection Act (COPPA). This well-intended law was meant to empower parents. Yet, in practice, it has prompted companies to ban any child under the age of 13 from joining general-purpose communication services and participating on social media platforms. In other words, COPPA has inadvertently locked children out of being legitimate users of Facebook, Gmail, Skype, and similar services. Interestingly, many parents help their children circumvent age restrictions. Is this a win? I don’t think so.

I don’t believe that privacy protections focused on children make any sense. Yes, children are a vulnerable population, but they’re not the only vulnerable population. Can you imagine excluding senile adults from participating on Facebook because they don’t know when they’re being manipulated? We need to develop structures that support all people while also making sure that protection does not equal exclusion.

Thanks to Julia Angwin for keeping us on task!

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine what social media sites are appropriate for their children, the government tries to help parents by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? In order to do so, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national sample of 1,007 U.S. parents who have children living with them between the ages of 10-14 conducted July 5-14, 2011, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Over half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack resources to pay for the service, those who lack credit cards, and those who refuse to provide extra data about their children in order to offer permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementations have made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

“Networked Privacy” (danah’s PDF talk)

Our contemporary ideas about privacy are often shaped by legal discourse that emphasizes the notion of “individual harm.” Furthermore, when we think about privacy in online contexts, the American neoliberal frame and the techno-libertarian frame once again force us to really think about the individual. In my talk at Personal Democracy Forum this year, I decided to address some of the issues of “networked privacy” precisely because I think that we need to start thinking about how privacy fits into a social context. Even with respect to the individual frame, what others say/do about us affects our privacy. And yet, more importantly, all of the issues of privacy end up having a broader set of social implications.

Anyhow, I’m very much at the beginning of thinking through these ideas, but in the meantime, I took a first pass at PDF. A crib of the talk that I gave at the conference is available here:

“Networked Privacy”

Photo Credit: Collin Key

How Teens Understand Privacy

In the fall, danah boyd and Alice Marwick went into the field to understand teens’ privacy attitudes and practices. We’ve blogged some of our thinking since then but we’re currently working on turning our thinking into a full-length article. We are lucky enough to be able to workshop our ideas at an upcoming scholarly meeting (PLSC), but we also wanted to share our work-in-progress with the public since we both know that there are all sorts of folks out there who have a lot of knowledge about this domain but with whom we don’t have the privilege of regularly interacting.

“Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies”
by danah boyd and Alice Marwick

Please understand that this is an unfinished work-in-progress article, complete with all sorts of bugs that we will need to address before we submit it for publication. But… we would certainly love feedback, critiques, and suggestions for how to improve it. Given the highly interdisciplinary nature of this kind of research, it’s also quite likely that we’re missing out on all sorts of prior work that was done in this space so we’d love to also hear about any articles that we should’ve read by now. Or any thoughts you might have that might advance/complicate our thinking.