Race, Policing, and Computing: Project Green Light

Discussed in this post:

Race, Policing, and Detroit’s Project Green Light
http://esc.umich.edu/project-green-light/

Yesterday the Wayne County Prosecutor publicly apologized to the first American known to be wrongfully arrested via a facial recognition algorithm: a Black man arrested earlier this year by the Detroit Police.

The statement cited the unreliability of the software, especially as applied to people of color. With this context in mind, some university and high school instructors teaching about technology may be interested in engaging with the Black Lives Matter protests by teaching about computing, race, policing, and surveillance.

ABOVE: A “Project Green Light” green light.

I’m delighted that thanks to the generosity of Tawana Petty and others, The ESC Center will now publicly share a module on this topic developed for my online course on “Data Science Ethics.” You are free to make use of it in your own teaching, or you might just find the materials interesting (or shocking).

Some aspects of this case study may be surprising to you if you have not been following the news about it. Detroit is presently at the forefront of automated algorithmic surveillance in the US. There are lessons here for almost anyone.

As one example of the system's reach: as soon as you cross the city limit into majority-Black Detroit, property owners can pay a fee for real-time police monitoring of INDOOR cameras, which have allegedly been used in the past for automated facial recognition (that is, using computers to routinely identify people, track their movements, and/or check them for outstanding warrants, rather than to investigate a particular crime). According to allegations in news reports from earlier this month, the use of facial recognition against people who are not suspects in any crime may be continuing.

The lesson consists of a case study of Detroit’s Project Green Light, the new city-wide police surveillance system that involves automated facial recognition, real-time police monitoring, very-high-resolution imagery, cameras indoors on private property, a paid “priority” response system, a public/private partnership, and other distinctive features.

ABOVE: Interview Still of Tawana Petty, Director, Data Justice Program, DCTP

The system has allegedly been deployed to target peaceful Black Lives Matter protesters. Here is the lesson: http://esc.umich.edu/project-green-light/

The lesson includes videos, readings (including yesterday's apology), and suggested discussion questions and assessments. With some tuning, I bet this lesson is suitable for courses in:

  • Information Science
  • Computer Science
  • Science & Technology Studies (STS)
  • Information Technology
  • Sociology
  • Criminology
  • Media Studies
  • Public Policy
  • Law
  • Urban Planning
  • Ethnic Studies
  • Applied Ethics

If you know of a mailing list or forum that would reach instructors of these courses who would be interested, please feel free to forward this or the lesson URL at the top of this post: http://esc.umich.edu/project-green-light/

ABOVE: Glowing Project Green Light sign.

This lesson is offered as part of the “Emergency ESC” initiative from the Center for Ethics, Society, and Computing (ESC). If you are an instructor and you are willing to share similar material, ESC would be happy to act as a clearinghouse for these lessons and to promote them, whether or not you are affiliated with our center.  Please e-mail esc-center@umich.edu to suggest or contribute. They will share selections on the “Emergency ESC” page in the future.

thoughts on Pew’s latest report: notable findings on race and privacy

Yesterday, the Pew Internet and American Life Project (in collaboration with Berkman) unveiled a brilliant report about “Teens, Social Media, and Privacy.” As a researcher who’s been in the trenches on these topics for a long time now, none of their findings surprised me, but it still gives me absolute delight when our data is so beautifully in sync. I want to quickly discuss two important issues that this report raises.

Race is a factor in explaining differences in teen social media use.

Pew provides important measures on shifts in social media, including the continued saturation of Facebook, the decline of MySpace, and the rise of other social media sites (e.g., Twitter, Instagram). When they drill down on race, they find notable differences in adoption. For example, they highlight data that is the source of “black Twitter” narratives: 39% of African-American teens use Twitter compared to 23% of white teens.

Most of the report is dedicated to the increase in teen sharing, but once again, we start to see some race differences. For example, 95% of white social media-using teens share their “real name” on at least one service while 77% of African-American teens do. And while 39% of African-American teens on social media say that they post fake information, only 21% of white teens say they do this.

Teens’ practices on social media also differ by race. For example, on Facebook, 48% of African-American teens befriend celebrities, athletes, or musicians while only 25% of white teen users do.

While media and policy discussions of teens tend to narrate them as a homogeneous group, there are serious and significant differences in practices and attitudes among teens. Race is not the only factor, but it is a factor. And Pew’s data on the differences across race highlight this.

Of course, race isn’t actually what’s driving what we see as race differences. The world in which teens live is segregated and shaped by race. Teens are more likely to interact with people of the same race and their norms, practices, and values are shaped by the people around them. So what we’re actually seeing is a manifestation of network effects. And the differences in the Pew report point to black youth’s increased interest in being a part of public life, their heightened distrust of those who hold power over them, and their notable appreciation for pop culture. These differences are by no means new, but what we’re seeing is that social media is reflecting back at us cultural differences shaped by race that are pervasive across America.

Teens are sharing a lot of content, but they’re also quite savvy.

Pew’s report shows an increase in teens’ willingness to share all sorts of demographic, contact, and location data. This is precisely the data that makes privacy advocates anxious. At the same time, their data show that teens are well aware of privacy settings and have changed the defaults, even if they don’t choose to manage the accessibility of each piece of content they share. They’re also deleting friends (74%), deleting previous posts (59%), blocking people (58%), deleting comments (53%), detagging themselves (45%), and providing fake info (26%).

My favorite finding of Pew’s is that 58% of teens cloak their messages through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%). This is a practice I’ve seen rise significantly since I first started doing work on teens’ engagement with social media. It’s the source of what Alice Marwick and I describe as “social steganography” in our paper on teen privacy practices.

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them – parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s showing that teens on Facebook have, on average, 300 friends while on Twitter they have 79.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power from those they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Anyhow, I have much more to say about Pew’s awesome report, but I wanted to provide a few thoughts and invite y’all to read it. If there is data that you’re curious about or would love me to analyze more explicitly, leave a comment or drop me a note. I’m happy to dive in more deeply on their findings.