Awakenings of the Filtered

I was delighted to give the Robert M. Pockrass Memorial Lecture at Penn State University this year, titled “Awakenings of the Filtered: Algorithmic Personalization in Social Media and Beyond.” I used the opportunity to give a broad overview of recent work about social media filtering algorithms and personalization. Here it is:

I tried to argue that media of all kinds have been transformed to include automatic selection and ranking as a basic part of their operation, that this transformation matters, and that it carries serious dangers that are currently not well understood.

Some highlights: I worry that algorithmic filtering as it is currently implemented suppresses the dissemination of important news, distorts our interactions with friends and family, disproportionately deprives some people of opportunity, and that Internet platforms intentionally obscure the motives and processes by which algorithms effect these consequences.

I say that users and platforms co-produce relevance in social media. I note that the ascendant way to reason about communication and information is actuarial, a framing I call “actuarial media.” I discuss “corrupt personalization,” previously a topic on this blog. I propose that we are seeing a new kind of “algorithmic determinism,” in which reasoning about cause and effect is abandoned when thinking about the automated curation of content.

I also mention the anti-News Feed (or anti-filtering) backlash, discuss whether or not Penn State dorms have bathrooms, and talk about how computers recognize cat faces.

The audience at Penn State was great, and the excellent question-and-answer session is not captured here. Thanks so much to PSU for having me, and for allowing me to post this recording. A particularly big thank you to Prof. Matthew McAllister and the Pockrass committee, and to Jenna Grzeslo for the very kind introduction.

I welcome your thoughts!


One thought on “Awakenings of the Filtered”

  1. One important aspect must be underlined: all of this is particularly dangerous when the Internet platforms in question refuse to provide any user control over the personalization (such as turning it off). That makes it impossible to interact with the world as it is, and it casts a dark shadow of possible “people as product” abuse once a critical mass of information about individuals has been accumulated.
    I’ll mention my gripe with one of the most open platforms: Twitter. Recently they went on a “personalization” spree of “suggested” tweets and a non-linear timeline. A fine experiment, but the most dangerous part is the limit on user control: there is no “Do not show me any of this anymore,” only a sneaky “Show me less of this…”
