You’re a 16-year-old Muslim kid in America. Say your name is Mohammad Abdullah. Your schoolmates are convinced that you’re a terrorist. They keep typing Google queries like “is Mohammad Abdullah a terrorist?” and “Mohammad Abdullah al Qaeda.” Google’s search engine learns. All of a sudden, auto-complete starts suggesting terms like “Al Qaeda” as the next term in relation to your name. You know that colleges are looking up your name and you’re afraid of the impression they might get from that auto-complete. You are already getting hostile comments in your hometown, a decidedly anti-Muslim environment. You know that you have nothing to do with Al Qaeda, but Google gives the impression that you do. And people are drawing that conclusion. You write to Google but nothing comes of it. What do you do?
This is guilt through algorithmic association. And while this example is not a real case, I keep hearing about real cases. Cases where people are algorithmically associated with practices, organizations, and concepts that paint them in a problematic light even though there’s nothing on the web that associates them with that term. Cases where people are accused of affiliations produced by Google’s auto-complete. Reputation hits that stem from what people _search_, not what they _write_.
It’s one thing to be slandered by another person on a website, on a blog, in comments. It’s another to have your reputation slandered by computer algorithms. The algorithmic associations do reveal the attitudes and practices of people, but those people are invisible; all that’s visible is the product of the algorithm, without any context of how or why the search engine conveyed that information. What becomes visible is the data point of the algorithmic association. But what gets interpreted is the “fact” implied by said data point, and that gives an impression of guilt. The damage comes from creating the algorithmic association. It gets magnified by conveying it.
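The dynamic described above can be made concrete with a toy model. The sketch below is purely illustrative and assumes a drastically simplified autocomplete that suggests whichever full queries were most frequently typed after a given prefix; the class, its methods, and the example queries are all hypothetical and bear no relation to Google’s actual system. It shows how a burst of hostile searches is enough to make a damaging phrase the top suggestion for a name:

```python
from collections import Counter, defaultdict

class ToyAutocomplete:
    """Hypothetical frequency-based autocomplete: for each prefix,
    suggest the full queries most often seen starting with it."""

    def __init__(self):
        # prefix -> Counter of full queries that began with that prefix
        self.completions = defaultdict(Counter)

    def log_query(self, query):
        words = query.lower().split()
        # index every leading word sequence of the query as a prefix
        for i in range(1, len(words)):
            prefix = " ".join(words[:i])
            self.completions[prefix][query.lower()] += 1

    def suggest(self, prefix, k=3):
        # most frequently logged completions for this prefix
        return [q for q, _ in self.completions[prefix.lower()].most_common(k)]

model = ToyAutocomplete()
# A cluster of hostile searches about one (fictional) name...
for _ in range(50):
    model.log_query("mohammad abdullah al qaeda")
# ...easily outweighs the benign ones.
for _ in range(5):
    model.log_query("mohammad abdullah soccer team")

print(model.suggest("mohammad abdullah"))
```

The model has no notion of truth; it only mirrors what people type, which is exactly why searcher prejudice surfaces as an apparent “fact” about the person searched for.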
- What are the consequences of guilt through algorithmic association?
- What are the correction mechanisms?
- Who is accountable?
- What can or should be done?
Note: The image used here is Photoshopped. I did not use real examples so as to protect the reputations of people who told me their story.
Update: Guilt through algorithmic association is not constrained to Google. This is an issue for any and all systems that learn from people and convey collective “intelligence” back to users. All of the examples that I was given from people involved Google because Google is the dominant search engine. I’m not blaming Google. Rather, I think that this is a serious issue for all of us in the tech industry to consider. And the questions that I’m asking are genuine questions, not rhetorical ones.
8 thoughts on “Guilt Through Algorithmic Association”
Algorithms such as Google’s search algo or advertising algo (based on your search and click patterns) simply reinforce attitudes and beliefs. A weak attitude on a subject such as a name would have the tendency to be reinforced through these algos. Moreover, it ends up being a vicious circle as we feed these algos and they end up feeding our beliefs.
Interesting choice for a name, given it’s as generic as John Smith.
I’m inclined to think that if someone was going to cast judgement on you based on autocomplete, they were probably already prejudiced against you and just needed something to feed their confirmation bias.
This reminds me of the case of a professor accused of academic misconduct. The second entry in Google Suggest when searching for her name was “Jane Smith academic misconduct.” This happened *before* the charges were made public. I guess word had spread through the grapevine and enough people had searched for it. So, to add another twist: rumors can spread this way without ever being “public.”
danah, this is very disturbing indeed. Practically, it means that people can maliciously harm a person very easily. The only thing they have to do is run a few searches, preferably in more than one search engine. Then, as Andres wrote, they can spread the news… and that’s it. It sounds like the beginning of a Harlan Coben thriller. Sadly, it is no fiction. Thanks for a very interesting post :)
If, for some superficial reason, a group of people started believing that a person was practicing witchcraft, in the old days it would have been hard to disprove, if possible at all. It seems that with smart search technology it is once again possible to defame someone that easily. But in reality, such an assumption could easily be contested.
I am not advocating for Google, but from what I can see of Google’s policy for auto-completion, it avoids hate terms: “…a narrow set of removal policies for pornography, violence, hate speech, and …” So in the case of “Abdullah,” when Google’s auto-complete suggested “al Qaeda,” the question remains: should the term “al Qaeda” be removed or not? In reality, if a group of people starts believing something, whether the belief is justified or not, many other people will follow that crowd. And of course, with current technology there is little room for an algorithm to be smarter than humans.
So Google is doing what other people are trying to do, even though it tries to address diversity. To address the questions in this discussion: I think there is no long-term consequence for this particular search when it comes from a small IP range. But if it is searched from the whole world, Google will both reinforce and be reinforced by the crowd. For a smaller-scale search, no correction will be necessary; Google will soon discard the term to maintain variety. I think it is only the myth (or the group of people who keep believing something without justifying their belief) that can be held accountable!
Definitely interesting stuff. It brings to mind David Beer’s fascinating piece on “the power of the algorithm”, which is all about how different algorithms – which are by now so complex that no one really understands entirely how they work – impact on our everyday lives in subtle and not so subtle ways.
(That’s Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6).)