
Why I Am Suing the Government

July 1, 2016

(or: I write scripts, bots, and scrapers that collect online data)

I never thought that I would sue the government. The papers went in on Wednesday, but the whole situation still seems unreal. I’m a professor at the University of Michigan and a social scientist who studies the Internet, and I ran afoul of what some have called the most hated law on the Internet.

Others call it the law that killed Aaron Swartz. It’s more formally known as the Computer Fraud and Abuse Act (CFAA), the dangerously vague federal anti-hacking law. The CFAA is so broad, you might have broken it. The CFAA has been used to indict a MySpace user for adding false information to her profile, to convict a non-programmer of “hacking,” to convict an IT administrator of deleting files he was authorized to access, and to send a dozen FBI agents to the house of a computer security researcher with their guns drawn.

Most famously, prosecutors used the CFAA to threaten Reddit co-founder and Internet activist Aaron Swartz with 50 years in jail for an act of civil disobedience — his bulk download of copyrighted scholarly articles. Facing trial, Swartz hanged himself at age 26.

The CFAA is alarming. Like many researchers in computing and social science, I write scripts, bots, and scrapers that collect online data as a normal part of my work, and I routinely teach my students how to do it in my classes. Now that all sorts of activities have moved online — from maps to news to grocery shopping — studying people now means studying people online, and thus gathering online data. It’s essential.
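For readers who have never written one, here is a minimal, hypothetical sketch of the kind of script at issue: a few lines of Python (assuming the common requests and beautifulsoup4 libraries, with a placeholder URL and selectors) that fetch a public page and save some of what is on it.

```python
# A minimal, hypothetical example of the kind of scraper at issue:
# fetch a public Web page and save a few fields from it to a spreadsheet.
# Assumes the third-party libraries `requests` and `beautifulsoup4`;
# the URL and CSS selectors below are placeholders, not from any real study.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/public-listings"  # placeholder public page

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for item in soup.select(".listing"):          # placeholder selector
    title = item.select_one("h2")
    price = item.select_one(".price")
    rows.append({
        "title": title.get_text(strip=True) if title else "",
        "price": price.get_text(strip=True) if price else "",
    })

with open("listings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "price"])
    writer.writeheader()
    writer.writerows(rows)
```

Nothing in a script like this defeats a password or circumvents a technical barrier; it automates the reading and note-taking a person could do by hand in a browser.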

Image: Les raboteurs de parquet by Gustave Caillebotte (cropped). Source: Wikipedia

Yet federal charges were brought against someone who was downloading publicly available Web pages.

People might think of the CFAA as a law about hacking with side effects that are a problem for computer security researchers. But the law affects anyone who does social research, or who needs access to public information. 

I work at a public institution. My research is funded by taxes and is meant for the greater good. My results are released publicly. Lately, my research has been investigating fraud and discrimination online, evils that I am trying to stop. But the CFAA made my research designs too risky. A chief problem is that any clause in a Web site’s terms of service can become enforceable under the CFAA.

I found that crazy. Have you ever read a terms of service agreement? Verizon’s terms of service prohibited anyone using a Verizon service from saying bad things about Verizon. As it says in the legal complaint, some terms of service prohibit you from writing things down (as in, with a pen) if you saw them on a particular — completely public — Web page.

These terms of service aren’t laws, they’re statements written by Web site owners describing what they’d like to happen if they ran the universe. But the current interpretation of the CFAA says that we must judge what is authorized on the Web by reading a site’s terms of service to see what has been prohibited. If you violate the terms of service, the current CFAA mindset is: you’re hacking.

That means anything a Web site owner writes in the terms of service effectively becomes the law, and these terms can change at any time.

Did you know that terms of service can expressly prohibit the use of a Web site by researchers? Sites effectively prohibit research by simply outlawing any saving or republication of their contents, even if they are public Web pages. Dice.com forbids “research or information gathering,” while LinkedIn says you can’t “copy profiles and information of others through any means” including “manual” means. You also can’t “[c]ollect, use, copy, or transfer any information obtained from LinkedIn,” or “use the information, content or data of others.” (This raises the question: How would the intended audience possibly use LinkedIn and follow these rules? Memorization?)

As a researcher, I was appalled by the implications once they sank in. The complaint I filed this week has to do with my research on anti-discrimination laws, but it is not an overstatement to say this: The CFAA, as things stand, potentially blocks all online research. Any researcher who uses information from Web sites could be at risk under the provision we are challenging in our lawsuit. That’s why others have called this case “key to the future of social science.”

If you are a researcher and you think other researchers would be interested in this case, please share it. We need to get the word out that the present situation is untenable.

The ACLU is providing my legal representation, and in spirit I feel that they have taken this case on behalf of all researchers and journalists. If you care about this issue and you’d like to help, I urge you to contribute.

 

Want more? Here is an Op-Ed that I co-authored with my co-plaintiff Prof. Karrie Karahalios:

Most of what you do online is illegal. Let’s end the absurdity.
https://www.theguardian.com/commentisfree/2016/jun/30/cfaa-online-law-illegal-discrimination

Here is the legal complaint:

Sandvig v. Lynch
https://www.aclu.org/legal-document/sandvig-v-lynch-complaint

Here is a press release about the lawsuit:

ACLU Challenges Law Preventing Studies on “Big Data” Discrimination
https://www.aclu.org/news/aclu-challenges-law-preventing-studies-big-data-discrimination

Here is some of the news coverage:

Researchers Sue the Government Over Computer Hacking Law
https://www.wired.com/2016/06/researchers-sue-government-computer-hacking-law/

New ACLU lawsuit takes on the internet’s most hated hacking law
http://www.theverge.com/2016/6/29/12058346/aclu-cfaa-lawsuit-algorithm-research-first-amendment

Do Housing and Jobs Sites Have Racist Algorithms? Academics Sue to Find Out
http://arstechnica.com/tech-policy/2016/06/do-housing-jobs-sites-have-racist-algorithms-academics-sue-to-find-out/

When Should Hacking Be Legal?
http://www.theatlantic.com/technology/archive/2016/07/when-should-hacking-be-legal/489785/

Please note that I have filed suit as a private citizen and not as an employee of the University.

[Updated on 7/2 with additional links.]

 

Awakenings of the Filtered

June 20, 2016

I was delighted to give the Robert M. Pockrass Memorial Lecture at Penn State University this year, titled “Awakenings of the Filtered: Algorithmic Personalization in Social Media and Beyond.” I used the opportunity to give a broad overview of recent work about social media filtering algorithms and personalization. Here it is:

I tried to argue that media of all kinds have been transformed to include automatic selection and ranking as a basic part of their operation, that this transformation is significant, and that it carries significant dangers that are currently not well-understood.

Some highlights: I worry that algorithmic filtering as it is currently implemented suppresses the dissemination of important news, distorts our interactions with friends and family, disproportionately deprives some people of opportunity, and that Internet platforms intentionally obscure the motives and processes by which algorithms effect these consequences.

I say that users and platforms co-produce relevance in social media. I note that the ascendant way to reason about communication and information is actuarial, which I call “actuarial media.”  I discuss “corrupt personalization,” previously a topic on this blog. I propose that we are seeing a new kind of “algorithmic determinism” where cause and effect are abandoned in reasoning about the automated curation of content.

I also mention the anti-News Feed (or anti-filtering) backlash, discuss whether or not Penn State dorms have bathrooms, and talk about how computers recognize cat faces.

Penn State was a great audience, and the excellent question and answer session is not captured here.  Thanks so much to PSU for having me, and for allowing me to post this recording. A particularly big thank you to Prof. Matthew McAllister and the Pockrass committee, and to Jenna Grzeslo for the very kind introduction.

I welcome your thoughts!

 

The OKCupid data release fiasco: It’s time to rethink ethics education

May 18, 2016

In mid-2016, we confront another ethical crisis related to personal data, social media, the public internet, and social research. This time, it’s a release of some 70,000 OKCupid users’ data, including some very intimate details about individuals. Responses from several communities of practice highlight the complications of using outdated modes of thinking about ethics and human subjects when considering new opportunities for research through publicly accessible or otherwise easily obtained data sets (e.g., Michael Zimmer produced a thoughtful response in Wired and Kate Crawford pointed us to her recent work with Jacob Metcalf on this topic). There are so many things to talk about in this case, but here, I’d like to weigh in on conversations about how we might respond to this issue as university educators.

The OKCupid case is just the most recent of a long list of moments that reveal how doing something because it is legal is no guarantee that it is ethical. To invoke Kate Crawford’s apt Tweet from March 3, 2016:

This is a key point of confusion, apparently. Michael Zimmer, reviewing multiple cases of ethical problems that have emerged when researchers release large datasets, emphasizes the flaw in this response, noting:

This logic of “but the data is already public” is an all-too-familiar refrain used to gloss over thorny ethical concerns (in Wired).

In the most recent case, the researcher in question, Emil Kirkegaard, uses this defense in response to questions asking if he anonymized the data: “No. Data is already public.” I’d therefore like to add a line to Crawford’s simple advice:

Data comes from people. Displaying it for the world to see can cause harm.

A few days after this data was released, it was removed from the Open Science Framework following a DMCA claim by OKCupid. Further legal action could follow. All of this is a good step toward protecting the personal data of users, but in the meantime, many people have already downloaded the dataset and are now sharing it in other forms. As Scott Weingart, digital humanities specialist at Carnegie Mellon, warns:

As a long-time university educator, a faculty member at the same university where Kirkegaard is pursuing his master’s degree, and a researcher of digital ethics, I find this OKCupid affair frustrating: How is it possible that we continue to reproduce this logic, despite the multiple times “it’s publicly accessible, therefore I can do whatever I want with it” has proved harmful? We must attribute some responsibility to existing education systems. Of course, the problem doesn’t start there, and an “education system” can be a formal institution or simply the way we learn as everyday knowledge is passed around in various forms. So there are plenty of arenas where we learn (or fail to learn) to make good choices in situations fraught with ethical complexity. Let me offer a few trajectories of thought:

What data means to regulators

The myth of “data is already public, therefore ethically fine to use for whatever” persists because traditional as well as contemporary legal and regulatory statements still make a strong distinction between public and private. This is no longer a viable distinction, if it ever was. When we define actions or information as being either in the private or the public realm, this sets up a false binary that is not true in practice or perception. Information is not a stable object that emerges in and remains located in a particular realm or sphere. Data becomes informative or is noticed only when it becomes salient for some reason. On OKCupid or elsewhere, people publish their picture, religious affiliation, or sexual preference in a dating profile as part of a performance of their identity for someone else to see. This placement of information is intended to be part of an expected pattern of interaction — someone is supposed to see and respond to this information, which might then spark conversation or a relationship. This information is not chopped up into discrete units in either a public or private realm. Rather, it is performative and relational. When we only access regulatory language, the more nuanced subtleties of context are rendered invisible.

What data means to people who produce it

Whether information or data is experienced or felt as something public or private is quite different from the information itself. Violation of privacy can be an outcome at any point. This is not related to the data itself, but to the ways in which the data is used. From this standpoint, data can only logically exist as part of continual flows of timespace contexts; therefore, to extract data as points from one or the other static sphere is illogical. Put more simply, the expectation of privacy about one’s profile information comes into play when certain information is registered and becomes meaningful for others. Otherwise, the information would never enter into a context where ‘public’, ‘private’, ‘intimate’, ‘secret’, or any other adjective operates as a relevant descriptor.

This may not be the easiest idea for us to understand, since we generally conceptualize data as static and discrete informational units that can be observed, collected, and analyzed. In experience, this is simply not true. The treatment of personal data is important. It requires sensitivity to the context as well as an understanding of the tools that can be used to grapple with this complexity.

What good researchers know about data and ethics

Reflexive researchers know that regulations may be necessary, but they are insufficient guides for ethics. While many lessons from previous ethical breaches in scientific research find their way into regulatory guidelines or law, unique ethical dilemmas arise as a natural part of any research of any phenomenon. According to the ancient Greeks, doing the right thing is a matter of phronesis or practical wisdom whereby one can discern what would constitute the most ethical choice in any situation, an ability that grows stronger with time, experience, and reflection.

This involves much more than simply following the rules or obeying the letter of the law. Phronesis is a very difficult thing to teach, since it is a skill that emerges from a deep understanding of the possible intimacy others have with what we outsiders might label ‘data.’ This reflection requires that we ask different questions than what regulatory prescriptions might require. In addition to asking the default questions such as “Is the data public or private?” or “Does this research involve a ‘human subject’?” we should be asking “What is the relationship between a person and her data?” or “How does the person feel about his relationship with his data?” These latter questions don’t generally appear in regulatory discussions about data or ethics. These questions represent contemporary issues that have emerged as a result of digitization plus the internet, an equation that illustrates that information can be duplicated without limits and is swiftly and easily separated from its human origins once it disseminates or moves through the network. In a broader sense, this line of inquiry highlights the extent to which ‘data’ can be mischaracterized.

Where do we learn the ethic of accountability?

While many scholars concerned with data ethics discuss complex questions, the complexity doesn’t often end up in traditional classrooms or regulatory documents. We learn to ask the tough questions when complicated situations emerge, or when a problem or ethical dilemma arises. At that point, we may question and adjust our mindset. This is a process of continual reflexive interrogation of the choices we’re making as researchers. And we get better at it with time and practice.

We might be disappointed but we shouldn’t be surprised that many people end up relying on outdated logic that says ‘if data is publicly accessible, it is fair game for whatever we want to do with it’. This thinking is so much easier and quicker than the alternative, which involves not only judgment, responsibility, and accountability, but also speculation about the potential future impact of one’s research.

Learning contemporary ethics in a digitally-saturated and globally networked epoch involves considering the potential impact of one’s decisions and then making the best choice possible. Regulators are well aware of this, which is why they (mostly) include exceptions and specific case guidance in statements about how researchers should treat data and conduct research involving human subjects.

Teaching ethics as ‘levels of impact’

So, how might we change the ways we talk and teach about ethics to better prepare researchers to take the extra step of reflecting on how their research choices matter in the bigger picture? First, we can make this an easier topic to broach by addressing ethics as being about choices we make at critical junctures; choices that will invariably have impact.

We make choices, consciously or unconsciously, throughout the research process. Simply stated, these choices matter. If we do not grapple with natural and necessary change in research practices our research will not reflect the complexities we strive to understand. — Annette Markham, 2003.

Ethics can thus be considered a matter of methods. “Doing the right thing” is an everyday activity, as we make multiple choices about how we might act. Our decisions and actions transform into habits, norms, and rules over time and repetition. Our choices carry consequences. As researchers, we carry more responsibility than users of social media platforms. Why? Because we hold more cards when we present findings of studies and make knowledge statements intended to present some truth (big or little T) about the world to others.

To dismiss our everyday choices as being only guided by extant guidelines is a naïve approach to how ethics are actually produced. Beyond our reactions to this specific situation, as Michael Zimmer emphasizes in his recent Wired article, we must address the conceptual muddles present in big data research.

This is quite a challenge when the terms are as muddled as the concepts. Take the word ‘ethics.’ Although it operates as an important foundation in our work as researchers, it is also abstract, vague, and daunting because it can feel like you ought to have philosophy training to talk about it. As educators, we can lower the barrier to entry into ethical concepts by taking a ‘what if’ impact approach, or discussing how we might assess the ‘creepy’ factor in our research design, data use, or technology development.

At the most basic level of an impact approach, we might ask how our methods of data collection impact humans, directly. If one is interviewing, or the data is visibly connected to a person, this is easy to see. But a distance principle might help us recognize that when the data is very distant from where it originated, it can seem disconnected from persons, or what some regulators call ‘human subjects.’ At another level, we can ask how our methods of organizing data, analytical interpretations, or findings as shared datasets are being used — or might be used — to build definitional categories or to profile particular groups in ways that could impact livelihoods or lives. Are we contributing positive or negative categorizations? At a third level of impact, we can consider the social, economic, or political changes caused by one’s research processes or products, in both the short and long term. These three levels raise different questions than those typically raised by ethics guidelines and regulations. This is because an impact approach is targeted toward the possible or probable impact, rather than the prevention of impact in the first place. It acknowledges that we change the world as we conduct even the smallest of scientific studies, and therefore, we must take some personal responsibility for our methods.

Teaching questions rather than answers

Over the six years I spent writing guidelines for the updated ‘Ethics and decision making in internet research’ document for the Association of Internet Researchers (AoIR), I realized we had shifted significantly from statements to questions in the document. This shift was driven in part by the fact that we came from many different traditions and countries and we couldn’t come to consensus about what researchers should do. Yet we quickly found that posing questions provided the only stable anchor point as technologies, platforms, and uses of digital media were continually changing. As situations and contexts shifted, different ethical problems would arise. This seemingly endless variation required us to reconsider how we think about ethics and how we might guide researchers seeking advice. While some general ethical principles could be considered in advance, best practices emerged through rigorous self-questioning throughout the course of a study, from the outset to well after the research was completed. Questions were a form that also allowed us to emphasize the importance of active and conscious decision-making, rather than more passive adherence to legal, regulatory, or disciplinary norms.

A question-based approach emphasizes that ethical research is a continual and iterative process of both direct and tacit decision making that must be brought to the surface and consciously accounted for throughout a project. This process of questioning is most obvious when the situation or direction is unclear and decisions must be made directly. But when the questions as well as answers are embedded in and produced as part of our habits, these must be recognized for what they once were — choices at critical junctures. Then, rather than simply adopting tools as predefined options, or taking analytical paths dictated by norm or convention, we can choose anew.

This recent case of the OKCupid data release provides an opportunity for educators to revisit our pedagogical approaches and to confront this confusion head on. It’s a call to think about options that reach into the heart of the matter, which means adding something to our discussions with junior researchers to counteract the depersonalizing effects of generalized top down requirements, forms with checklists, and standardized (and therefore seemingly irrelevant) online training modules.

  • This involves questioning as well as presenting extant ethical guidelines, so that students understand more about the controversies and ongoing debates behind the scenes as laws and regulations are developed.
  • It demands that we stop treating IRB or ethics board requirements as bureaucratic hoops to jump through, so that students can appreciate that in most studies, ethics require revisiting.
  • It means examining the assumptions underlying ethical conventions and reviewing debates about concepts like informed consent, anonymizing data, or human subjects, so that students better appreciate these as negotiable and context-dependent, rather than settled and universal concepts.
  • It involves linking ethics to everyday logistic choices made throughout a study, including how questions are framed, how studies are designed, and how data is managed and organized. In this way students can build a practice of reflection on and engagement around their research decisions as meaningful choices rather than externally prescribed procedures.
  • It asks that we understand ethics as they are embedded in broader methodological processes — perhaps by discussing how analytical categories can construct cultural definitions, how findings can impact livelihoods, or how writing choices and styles can invoke particular versions of stories. In this way, students can understand that their decisions carry over into other spheres and can have unintended or unanticipated results.
  • It requires adding positive examples to the typically negative cases, which tend to describe what we should not do, or how we can get in trouble. In this way, students can consider the (good and important) ethics of conducting research that is designed to make actual and positive transformations in the broader world.

This list is intended to spark imagination and conversation more than to explain what’s currently happening (for that, I would point to Metcalf’s 2015 review of various pedagogical approaches to ethics in the U.S.). There are obviously many ways to address or respond to this most recent case, or any of the dozens of cases that pose ethical problems.

I, for one, will continue talking more in my classrooms about how, as researchers, our work can be perceived as creepy, stalking, or harassing; exploring how our research could cause harm in the short or long term; and considering what sort of futures we are facilitating as a result of our contributions in the here and now.

For more about data and ethics, I recommend the annual Digital Ethics Symposium at Loyola University Chicago; the growing body of work emerging from the Council for Big Data, Ethics, & Society; and the Association of Internet Researchers (AoIR) ethics documents and the work of their longstanding ethics committee members. For current discussions around how we conceptualize data in social research, one might take a look at special issues devoted to the topic, like the 2013 issue on Making Data: Big data and beyond in First Monday, or the 2014 issue on Critiquing Big Data in the International Journal of Communication. These are just the first works off the top of my head that have inspired my own thinking and research on these topics.

Algorithms, clickworkers, and the befuddled fury around Facebook Trends

May 18, 2016

The controversy about the human curators behind Facebook Trends has grown since the allegations made last week by Gizmodo. Besides being a major headache for Facebook, it has helped prod a growing discussion about the power of Facebook to shape the information we see and what we take to be most important. But we continue to fail to find the right words to describe what algorithmic systems are, who generates them, and what they should do for users and for the public. We have to get this clear.

Here’s the case so far: Gizmodo says that Facebook hired human curators to decide which topics, identified by algorithms, would be listed as trending, and how they should be named and summarized; one former curator alleged that his fellow curators often overlooked or suppressed conservative topics. This came close on the heels of a report a few weeks back that Facebook employees had asked internally if the company had a responsibility to slow Donald Trump’s momentum. Angry critics have noted that Zuckerberg, Search VP Tom Stocky, and other FB execs are liberals. Facebook has vigorously disputed the allegation, saying that it has guidelines in place to ensure consistency and neutrality, asserting that there’s no evidence the suppression happened, distributing its guidelines for how Trending topics are selected and summarized after they were leaked, inviting conservative leaders in for a discussion, and pointing out its conservative bona fides. The Senate’s Commerce Committee, chaired by Republican Senator John Thune, issued a letter demanding answers from Facebook about it. Some wonder if the charges may have been overstated. Other Facebook news curators have spoken up, some to downplay the allegations and defend the process that was in place, others to highlight the sexist and toxic work environment they endured.

Commentators have used the controversy to express a range of broader concerns about Facebook’s power and prominence. Some argue it is unprecedented: “When a digital media network has one billion people connected to entertainment companies, news publications, brands, and each other, the right historical analogy isn’t television, or telephones, or radio, or newspapers. The right historical analogy doesn’t exist.” Others have made the case that Facebook is now as powerful as the media corporations, which have been regulated for their influence; that their power over news organizations and how they publish is growing; that they could potentially and knowingly engage in political manipulation; that they are not transparent about their choices; that they have become an information monopoly.

This is an important public reckoning about Facebook, and about social media platforms more generally, and it should continue. We clearly don’t yet have the language to capture the kind of power we think Facebook now holds. But it would be great if, along the way, we could finally mothball some foundational and deeply misleading assumptions about Facebook and social media platforms, assumptions that have clouded our understanding of their role and responsibility. Starting with the big one:

Algorithms are not neutral. Algorithms do not function apart from people.

 

We prefer the idea that algorithms run on their own, free of the messy bias, subjectivity, and political aims of people. It’s a seductive and persistent myth, one Facebook has enjoyed and propagated. But it’s simply false.

I’ve already commented on this, and many of those who study the social implications of information technology have made this point abundantly clear (including Pasquale, Crawford, Ananny, Tufekci, boyd, Seaver, McKelvey, Sandvig, Bucher, and nearly every essay on this list). But it persists: in statements made by Facebook, in the explanations offered by journalists, even in the words of Facebook’s critics.

If you still think algorithms are neutral because they’re not people, here’s a list, not even an exhaustive one, of the human decisions that have to be made to produce something like Facebook’s Trending Topics (which, keep in mind, pales in scope and importance next to Facebook’s larger algorithmic endeavor, the “news feed” listing your friends’ activity). Some are made by the engineers designing the algorithm, others by the curators who turn the output of the algorithm into something presentable. If your eyes start to glaze over, that’s the point; read any three points and then move on, they’re enough to dispel the myth. Ready?

(determining what activity might potentially be seen as a trend)
– what data should be counted in this initial calculation of what’s being talked about (all Facebook users, or subset? English language only? private posts too, or just public ones?)
– what time frame should be used in this calculation — both for the amount of activity happening “now” (one minute, one hour, one day?) and to get a baseline measure of what’s typical (a week ago? a different day at the same time, or a different time on the same day? one point of comparison or several?)
– should Facebook emphasize novelty? longevity? recurrence? (e.g., if it has trended before, should it be easier or harder for it to trend again?)
– how much of a drop in activity is sufficient for a trending topic to die out?
– which posts actually represent a single topic (e.g., when do two hashtags refer to the same topic?)
– what other signals should be taken into account? what do they mean? (should Facebook measure posts only, or take into account likes? how heavily should they be weighed?)
– should certain contributors enjoy some privileged position in the count? (corporate partners, advertisers, high-value users? pay-for-play?)

(from all possible trends, choosing which should be displayed)
– should some topics be dropped, like obscenity or hate speech?
– if so, who decides what counts as obscene or hateful enough to leave off?
– what topics should be left off because they’re too generic? (Facebook noted that it didn’t include “junk topics” that do not correlate to a real world event. What counts as junk, case by case?)

(designing how trends are displayed to the users)
– who should do this work? what expertise should they have? who hires them?
– how should a trend be presented? (word? title? summary?)
– what should clicking on a trend headline lead to? (some form of activity on Facebook? some collection of relevant posts? an article off the platform, and if so, which one?)
– should trends be presented in single list, or broken into categories? if so, can the same topic appear in more than one category?
– what are the boundaries of those categories (i.e. what is or isn’t “politics”?)
– should trends be grouped regionally or not? if so, what are the boundaries of each region?
– should trends lists be personalized, or not? If so, what criteria about the user are used to make that decision?

(what to do if the list is deemed to be broken or problematic in particular ways)
– who looks at this project to assess how it’s doing? how often, and with what power to change it?
– what counts as the list being broken, or off the mark, or failing to meet the needs of users or of Facebook?
– what is the list being judged against, to know when it’s off (as tested against other measures of Facebook activity? as compared to Twitter? to major news sites?)
– should they re-balance a Trends list that appears unbalanced, or leave it? (e.g. what if all the items in the list at this moment are all sports, or all celebrity scandal, or all sound “liberal”?)
– should they inject topics that aren’t trending, but seem timely and important?
– if so, according to what criteria? (news organizations? which ones? how many? US v. international? partisan vs not? online vs off?)
– should topics about Facebook itself be included?

These are all human choices. Sometimes they’re made in the design of the algorithm, sometimes around it. The result we see, a changing list of topics, is not the output of “an algorithm” by itself, but of an effort that combines human activity and computational analysis to produce it.
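To make just the first few of those choices concrete (the time window, the baseline, the thresholds), here is a toy, hypothetical sketch in Python. The numbers and names are invented for illustration; this is not Facebook’s method, only a reminder that every constant below is a decision somebody had to make.

```python
# Toy illustration only: every constant below is an editorial choice made by
# a person, not a fact of nature. This is not Facebook's actual method.
from collections import Counter

WINDOW_MINUTES = 60     # what counts as "now"? one minute? one hour? one day?
BASELINE_DAYS = 7       # what counts as "typical"? yesterday? last week?
MIN_POSTS = 500         # how much activity is enough to even consider?
SURGE_RATIO = 3.0       # how far above baseline must a topic be to "trend"?

def trending_topics(recent_topic_labels, baseline_topic_labels):
    """Each argument is a list of topic labels, one per post.

    Note the buried assumption: someone has already decided which posts
    belong to which topic, itself a large and contested human judgment.
    """
    recent = Counter(recent_topic_labels)
    # Scale the baseline down to a window of the same length before comparing.
    scale = WINDOW_MINUTES / (BASELINE_DAYS * 24 * 60)
    baseline = {t: c * scale for t, c in Counter(baseline_topic_labels).items()}

    trends = []
    for topic, count in recent.items():
        expected = baseline.get(topic, 1.0)  # how should brand-new topics be treated?
        if count >= MIN_POSTS and count / expected >= SURGE_RATIO:
            trends.append((topic, count / expected))
    return sorted(trends, key=lambda pair: pair[1], reverse=True)
```

Change any one of those constants and a different list of “trends” comes out the other end; nothing in the data dictates which choice is correct.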

So algorithms are in fact full of people and the decisions they make. When we let ourselves believe that they’re not, we let everyone — Zuckerberg, his software engineers, regulators, and the rest of us — off the hook for actually thinking through how they should work, leaving us all unprepared when they end up in the tall grass of public contention. “Any algorithm that has to make choices has criteria that are specified by its designers. And those criteria are expressions of human values. Engineers may think they are ‘neutral’, but long experience has shown us they are babes in the woods of politics, economics and ideology.” Calls for more public accountability, like this one from my colleague danah boyd, can only proceed once we completely jettison the idea that algorithms are neutral — and replace it with a different language that can assess the work that people and systems do together.

The problem is not algorithms, it’s that Facebook is trying to clickwork the news.

 

It is certainly in Facebook’s interest to obscure all the people involved, so users can keep believing that a computer program is fairly and faithfully hard at work. Dismantling this myth raises the kind of hard questions Facebook is fielding. But, once we jettison this myth, what’s left? It’s easy to despair: with so many human decisions involved, how could we ever get a fair and impartial measure of what matters? And forget the handful of people who designed the algorithm and the handful of people who select and summarize from it: Trends are themselves a measure of the activity of Facebook users. These trending topics aren’t produced by dozens of people but by millions. Their judgment of what’s worth talking about, in each case and in the aggregate, may be so distressingly incomplete, biased, skewed, and vulnerable to manipulation that it’s absurd to pretend it can tell us anything at all.

But political bias doesn’t come from the mere presence of people. It comes from how those people are organized to do what they’re asked to do. Along with our assumption that algorithms are neutral is a matching and equally misleading assumption that people are always and irretrievably biased. But human endeavors are organized affairs, and they can be organized to work against bias. Journalism is full of people too, making all sorts of decisions that are just as opaque, limited, and self-interested. What we hope keeps journalism from slipping into bias and error is a set of well-established professional norms and thoughtful oversight.

The real problem here is not the liberal leanings of Facebook’s news curators. If conservative news topics were overlooked, it’s only a symptom of the underlying problem. Facebook wanted to take surges of activity that its algorithms could identify and turn them into news-like headlines. But it treated this as an information processing problem, not an editorial one. They’re “clickworking” the news.

Clickwork begins with the recognition that computers are good at some kinds of tasks, and humans at others. The answer, it suggests, is to break the task at hand down into components and parcel them out to each accordingly. For Facebook’s trending topics, the algorithm is good at scanning an immense amount of data and identifying surges of activity, but not at giving those surges a name and a coherent description. That is handled by people — in industry parlance, this is the “human computation” part. The identified surges of activity are delivered to a team of curators, each one tasked with following a set of procedures to identify and summarize them. The work is segmented into simple and repetitive tasks, and governed by a set of procedures such that, even though different people are doing it, their output will look the same. In effect, the humans are given tasks that only humans can do, but they are not invited to do them in a human way: they are “programmed” by the modularized work flow and the detailed procedures so that they do the work like computers would. As Lilly Irani put it, clickwork “reorganizes digital workers to fit them both materially and symbolically within existing cultures of new media work.”

This is apparent in the guidelines that Facebook gives to its Trends curators. The documents, leaked to The Guardian and then released by Facebook, did not reveal some bombshell about political manipulation, nor did they do much to demonstrate careful guidance on the part of Facebook around the issue of political bias. What’s most striking is that they are mind-numbingly banal: “Write the description in up style, capitalizing the first letter of all major words…” “Do not copy another outlet’s headline…” “Avoid all spoilers for descriptions of scripted shows…” “After identifying the correct angle for a topic, click into the dropdown menu underneath the Unique Keyword field and select the Unique Keyword that best fits the topic…” “Mark a topic as ‘National Story’ importance if it is among the 1-3 top stories of the day. We measure this by checking if it is leading at least 5 of the following 10 news websites…” “Sports games: rewrite the topic name to include both teams…” This is not the newsroom; it’s the secretarial pool.

Moreover, these workers were kept separate from the rest of the full-time employees and worked under quotas for how many trends to identify and summarize, quotas that were increased as the project went on. As one curator noted, “The team prioritizes scale over editorial quality and forces contractors to work under stressful conditions of meeting aggressive numbers coupled with poor scheduling and miscommunication. If a curator is underperforming, they’ll receive an email from a supervisor comparing their numbers to another curator.” All were hourly contractors, kept under non-disclosure agreements and asked not to mention that they worked for Facebook. “‘It was degrading as a human being,’ said another. ‘We weren’t treated as individuals. We were treated in this robot way.’” A new piece in The Guardian from one such news curator insists that it was also a toxic work environment, especially for women. These “data janitors” are rendered so invisible in the images of Silicon Valley and how tech works that, when we suddenly hear from one, we’re surprised.

Their work was organized to quickly produce capsule descriptions of bits of information that are styled the same — as if they were produced by an algorithm. (This lines up with other concerns about the use of algorithms and clickworkers to produce cheap journalism at scale, and the increasing influence of audience metrics on news judgment.) It was not, however, organized to thoughtfully assemble a vital information resource that some users treat as the headlines of the day. It was not organized to help these news curators develop experience together on how to do this work well, or handle contentious topics, or reflect on the possible political biases in their choices. It was not likely to foster a sense of community and shared ambition with Facebook, an absence that might lead frustrated and overworked news curators to indulge their own political preferences. And I suspect it was not likely to funnel any insights they had about trending topics back to the designers of the algorithms they depended on.

Trends are not the same as news, but Facebook kinda wants them to be.

 

Part of why charges of bias are so compelling is that we have a longstanding concern about the problem of bias in news. For more than a century we’ve fretted about the individual bias of reporters, the slant of news organizations, and the limits of objectivity [http://jou.sagepub.com/content/2/2/149.abstract]. But is a list of trending topics a form of news? Are the concerns we have about balance and bias in the news relevant for trends?

“Trends” is a great word, the best word to have emerged amidst the social media zeitgeist. In a cultural moment obsessed with quantification, defended as being the product of an algorithm, “trends” is a powerfully and deliberately vague term that does not reveal what it measures. Commentators poke at Facebook for clearer explanations of how they choose trends, but “trends” could mean such a wide array of things, from the most activity to the most rapidly rising to a completely subjective judgment about what’s popular.

But however they are measured and curated, Facebook’s Trends are, at their core, measures of activity on the site. So, at least in principle, they are not news; they are expressions of interest. Facebook users are talking about some things, a lot, for some reason. This has little to do with “news,” which implies an attention to events in the world and some judgment of importance. Of course, many things Facebook users talk about, though not all, are public events. And it seems reasonable to assume that talking about a topic represents some judgment of its importance, however minimal. Facebook takes these identifiable surges of activity as proxies for importance. Facebook users “surface” the news… approximately. The extra step of “injecting” stories drawn from the news that were for whatever reason not surging among Facebook users goes a step further, turning their proxy of the news into a simulation of it. Clearly this was an attempt to best Twitter, and it may also have played into Facebook’s effort to persuade news organizations to partner with it and take advantage of the platform as a means of distribution. But it also encouraged us to hold Trends accountable for news-like concerns, like liberal bias.

We could think about Trends differently, not as approximating the news but as taking the public’s pulse. If Trends were designed to strictly represent “what Facebook users are talking about a lot,” presumably there is some scientific value, or at least cultural interest, in knowing what (that many) people are actually talking about. If that were its understood value, we might still worry about the intervention of human curators and their political preferences, not because their choices would shape users’ political knowledge or attitudes, but because we’d want this scientific glimpse to be unvarnished by misrepresentation.

But that is not how Facebook has invited us to think about its Trending topics, and it couldn’t do so if it wanted to: its interest in Trending topics is neither as a form of news production nor as a pulse of the public, but as a means to keep users on the site and involved. The proof of this, and the detail that so often gets forgotten in these debates, is that the Trending Topics are personalized. Here’s Facebook’s own explanation: “Trending shows you a list of topics and hashtags that have recently spiked in popularity on Facebook. This list is personalized based on a number of factors, including Pages you’ve liked, your location and what’s trending across Facebook.” Knowing what has “spiked in popularity” is not the same as news; a list “personalized based on… Pages you’ve liked” is no longer a site-wide measure of popular activity; an injected topic is no longer just what an algorithm identified.

As I’ve said elsewhere, “trends” are not a barometer of popular activity but a hieroglyph, making provocative but oblique and fleeting claims about “us” but invariably open to interpretation. Today’s frustration with Facebook, focused for the moment on the role their news curators might have played in producing these Trends, is really a discomfort with the power Facebook seems to exert — a kind of power that’s hard to put a finger on, a kind of power that our traditional vocabulary fails to capture. But across the controversies that seem to flare again and again, a connecting thread is Facebook’s insistence on colonizing more and more components of social life (friendship, community, sharing, memory, journalism), and turning the production of shared meaning so vital to sociality into the processing of information so essential to their own aims.

Facebook Trending: It’s made of people!! (but we should have already known that)

May 9, 2016

Gizmodo has released two important articles (1, 2) about the people who were hired to manage Facebook’s “Trending” list. The first reveals not only how Trending topics are selected and packaged on Facebook, but also the peculiar working conditions this team experienced, the lack of guidance or oversight they were provided, and the directives they received to avoid news that addressed Facebook itself. The second makes a more pointed allegation: that along the way, conservative topics were routinely ignored, meaning the trending algorithm had identified user activity around a particular topic, but the team of curators chose not to publish it as a trend.

This is either a boffo revelation, or an unsurprising look at how the sausage always gets made, depending on your perspective. The promise of “trends” is a powerful one. Even as the public gets more and more familiar with the way social media platforms work with data, and even with more pointed scrutiny of trends in particular, it is still easy to think that “trends” means an algorithm is systematically and impartially uncovering genuine patterns of user activity. So, to discover that a handful of j-school graduates were tasked with surveying all the topics the algorithm identified, choosing just a handful of them, and dressing them up with names and summaries, feels like an unwelcome intrusion of human judgment into what we wish were analytic certainty. Who are these people? What incredible power they have to dictate what is and is not displayed, what is and is not presented as important! Wasn’t this supposed to be just a measure of what users were doing, of what the people found important? Downplaying conservative news is the most damning charge possible, since it has long been a commonplace accusation leveled at journalists. But the real revelation is that there are people in the algorithm at all.

But the plain fact of information algorithms like the ones used to identify “trends” is that they do not work alone, they cannot work alone — in so many ways that we must simply discard the fantasy that they do, or ever will. In fact, algorithms do surprisingly little; they just do it really quickly and with a whole lot of data. Here’s some of what they can’t do:

Trending algorithms identify patterns in data, but they can’t make sense of it. The raw data is Facebook posts, likes, and hashtags. Looking at this data, there will certainly be surges of activity that can be identified and quantified: words that show up more than other words, posts that get more likes than other posts. But there is so much more to figure out:
(1) What is a topic? To decide how popular a topic is, Facebook must decide which posts are about that topic. When do two posts or two hashtags represent the same story, such that they should be counted together? An algorithm can only do so much to say whether a post about Beyonce and a post about Bey and a post about Lemonade and a post about QueenB and the hashtag BeyHive are all the same topic. (A toy sketch after these three points makes this concrete.) And that’s an easy one, a superstar with a distinctive name, days after a major public event. Imagine trying to determine algorithmically if people are talking about European tax reform, enough to warrant calling it a trend.
(2) Topics are also composed of smaller topics, endlessly down to infinity. Is the Republican nomination process a trending topic, or the Indiana primary, or Trump’s win in Indiana, or Paul Ryan’s response to Trump’s win in Indiana? According to one algorithmic threshold these would be grouped together; by another they would be separate. The problem is not that an algorithm can’t tell. It’s that it can produce both interpretations, all interpretations, equally well. So, an algorithm could be programmed to decide, to impose a particular threshold for the granularity of topics. But would that choice make sense to readers, would it map onto their own sense of what’s important, and would it work for the next topic, and the next?
(3) How should a topic be named and described, in a way that Facebook users would appreciate or even understand? Computational attempts to summarize are notoriously clunky, and often produce the kind of phrasing and grammar that scream “a computer wrote this.”
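Here is a trivial, hypothetical illustration of the grouping problem in point (1): naive counting in Python, with made-up posts, where every distinct string becomes its own “topic.”

```python
# Naive counting treats every distinct string as its own "topic", so posts
# that a human reader would group together stay separate. Posts are invented.
from collections import Counter

posts = [
    "Beyonce dropped Lemonade",
    "Bey is everything right now",
    "#BeyHive assemble",
    "Lemonade is a masterpiece",
    "Queen B did it again",
]

def naive_topic(post):
    # One naive choice: the "topic" is the first hashtag, or else the first word.
    for token in post.split():
        if token.startswith("#"):
            return token.lower()
    return post.split()[0].lower()

print(Counter(naive_topic(p) for p in posts))
# e.g. Counter({'beyonce': 1, 'bey': 1, '#beyhive': 1, 'lemonade': 1, 'queen': 1})
# Five posts about one cultural moment register as five unrelated "topics";
# deciding they are the same story is a judgment the counting cannot make.
```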
What trending algorithms can identify isn’t always what a platform wants to identify. Facebook, unlike Twitter, chose to display trends that identify topics, rather than single hashtags. This was already a move weighted towards identifying “news” rather than raw hashtags. It already strikes an uneasy balance between the kind of information they have — billions of posts and likes surging through their system — and the kind they’d like to display — a list of the most relevant topics. And it already sets up an irreconcilable tension: what should they do when user activity is not a good measure of public importance? It is not surprising, then, that they’d try to focus on articles being circulated and commented on, and from the most reputable sources, as a way to lean on their curation and authority to pre-identify topics. Which opens up, as Gizmodo identifies, the tendency to discount some sources as non-reputable, which can have unintentionally partisan implications.
“Trending” is also being asked to do a lot of things for Facebook: capture the most relevant issues being discussed on Facebook, and conveniently map onto the most relevant topics in the worlds of news and entertainment, and keep users on the site longer, and keep up with Twitter, and keep advertisers happy. In many ways, a trending algorithm can be an enormous liability, if allowed to be: it could generate a list of dreadful or depressing topics; it could become a playground for trolls who want to fill it with nonsense and profanity; it could reveal how little people use Facebook to talk about matters of public importance; it could reveal how depressingly little people care about matters of public importance; and it could help amplify a story critical of Facebook itself. It would take a whole lot of bravado to set that loose on a system like Facebook, and let it show what it shows unmanaged. Clearly, Facebook has a lot more at stake in producing a trending list that, while it should look like an unvarnished report of what users are discussing, must also be massaged into something that represents Facebook well at the same time.

So: people are in the algorithm because how could they not be? People produce the Facebook activity being measured, people design the algorithms and set their evaluative criteria, people decide what counts as a trend, people name and summarize them, and people look to game the algorithm with their next posts.

The thing is, these human judgments are all part of traditional news gathering as well. Choosing what to report in the news, how to describe it and feature it, and how to honor both the interests of the audience and the sense of importance, has always been a messy, subjective process, full of gaps in which error, bias, self-interest, and myopia can enter. The real concern here is not that there are similar gaps in Facebook’s process as well, or that Facebook hasn’t yet invented an algorithm that can close those gaps. The real worry is that Facebook is being so unbelievably cavalier about it.

Traditional news organizations face analogous problems and must make analogous choices, and they can make analogous missteps. And they do. But two countervailing forces work against this, keeping them more honest than not, more on target than not: a palpable and institutionalized commitment to news itself, and competition. I have no desire to glorify the current news landscape, which in many ways produces news that is dishearteningly less than what journalism should be. But there is at least a public, shared, institutionally rehearsed, and historical sense of purpose and mission, or at least there’s one available. Journalism schools teach their students not just how to determine and deliver the news, but why. They offer up professional guidelines and heroic narratives that position the journalist as a provider of political truths and public insight. They provide journalists with frames that help them identify the ways news can suffer when it overlaps with public relations, spin, infotainment, and advertising. There are buffers in place to protect journalists from the pressures that can come from upper management, advertisers, or newsmakers themselves, because of a belief that independence is an important foundation for newsgathering. Journalists recognize that their choices have consequences, and they discuss those choices. And there are stakeholders who regularly check these efforts for possible bias and self-interest: public editors and ombudspeople, newswatch organizations and public critics, all trying to keep the process honest. Most of all, there are competitors who would gleefully point out a news organization’s mistakes and failures, which gives editors and managers real incentive to work against the temptations to produce news that is self-serving, politically slanted, or commercially craven.

Facebook seemed to have thought of absolutely none of these. Based on the revelations in the two Gizmodo articles, it’s clear that they hired a shoestring team, lashed them to the algorithm, offered little guidance for what it meant to make curatorial choices, provided no ongoing oversight as the project progressed, imposed self-interested guidelines to protect the company, and kept the entire process inscrutable to the public, cloaked in the promise of an algorithm doing its algorithm thing.

The other worry here is that Facebook is engaged in a labor practice increasingly common in Silicon Valley: hiring information workers through third parties, under precarious conditions and without access to the institutional support or culture their full-time employees enjoy, and imposing time and output demands on them that can only fail a task that warrants more time, care, expertise, and support. This is the troubling truth about information workers in Silicon Valley and around the world, who find themselves “automated” by the gig economy — not just clickworkers on Mechanical Turk and drivers on Uber, but even workers “inside” the biggest and most established companies on the planet. It also reflects a dangerous tendency in the kind and scale of information projects that tech companies are willing to take on without having the infrastructure and personnel to adequately support them. It is not uncommon now for a company to debut a new feature or service, only weeks in development and supported only by its design team, with the assumption that it can quickly hire and train a team of independent, hourly workers. Not only does this put a huge onus on those workers, but it means that, if the service finds users and begins to scale up quickly, little preparation was in place, and the overworked team must quickly make ad hoc decisions about what are often tricky cases with real, public ramifications.

Trending algorithms are undeniably becoming part of the cultural landscape, and revelations like Gizmodo’s are helpful in shedding the easy notions of what they are and how they work, notions the platforms themselves have fostered. Social media platforms must come to fully realize that they are newsmakers and gatekeepers, whether they intend to be or not, whether they want to be or not. And while algorithms can chew on a lot of data, it is still a substantial, significant, and human process to turn that data into claims about importance that get fed back to millions of users. This is not a realization that they will ever reach on their own, which suggests to me that they need the two countervailing forces that journalism has: a structural commitment to the public, imposed if not inherent, and competition that forces them to take such obligations seriously.

Addendum: TechCrunch is reporting that Facebook has responded to Gizmodo’s allegations, saying that it has “rigorous guidelines in place for the review team to ensure consistency and neutrality.” This makes sense, but consistency and neutrality, while fine as concepts, are vague and insufficient in practice. There could have been Trending curators at Facebook who deliberately tanked conservative topics and knew that doing so violated policy. But (and this has long been known in the sociology of news) the greater challenge in producing the news, whether generating it or just curating it, is how to deal with the judgments that happen while being consistent and neutral. Making the news always requires judgments, and judgments always incorporate premises for assessing the relevance, legitimacy, and coherence of a topic. Recognizing bias in our own choices or across an institution is extremely difficult, but knowing whether you have produced a biased representation of reality is nearly impossible, as there’s nothing to compare it to. And that is setting aside that Facebook is actually trying to do something even harder: produce a representation of the collective representations of reality of its users, and ensure that it somehow also represents reality as other reality-representers (be they CNN or Twitter users) have represented it. Were social media platforms willing to acknowledge that they constitute public life rather than hosting or reflecting it, they might look to those who produce news, educate journalists, and study news as a sociological phenomenon for help thinking through these challenges.

Addendum 2 (May 9): The Senate Committee on Commerce, Science, and Transportation has just filed an inquiry with Facebook, raising concerns about its Trending Topics based on the allegations in the Gizmodo report. The letter of inquiry is available here, and has been reported by Gizmodo and elsewhere. In the letter they ask Mark Zuckerberg and Facebook to respond to a series of questions about how Trending Topics works, what kind of guidelines and oversight they provided, and whether specific topics were sidelined or injected. Gizmodo and other sites are highlighting the fact that this Committee is run by a conservative and has a majority of members who are conservative. But the questions posed are thoughtful ones. What they make so clear is that we simply do not have a vocabulary with which to hold these services accountable. For instance, they ask: “Have Facebook news curators in fact manipulated the content of the Trending Topics section, either by targeting news stories related to conservative views for exclusion or by injecting non-trending content?” Look at the verbs. “Manipulated” is tricky, as it’s not exactly clear what the unmanipulated Trending Topics would even be. “Targeting” sounds like curators excluded stories, when what Gizmodo reports is that some stories were not selected as trending, or not recognized as stories. If trending algorithms can only highlight possible topics surging in popularity, while Facebook and its news curators constitute that data into a list of topics, then language that takes trending to be a natural phenomenon, one that Facebook either accurately reveals or manipulates, can’t quite grasp how this works and why it is so important.

It is worth noting, though, that the inquiry pushes on how (and whether) Facebook is keeping records of what is selected: “Does Facebook maintain a record of curators’ decisions to inject a story into the Trending Topics section or target a story for removal? If such a record is not maintained, can such decisions be reconstructed or determined based on an analysis of the Trending Topics product? a. If so, how many stories have curators excluded that represented conservative viewpoints or topics of interest to conservatives? How many stories did curators inject that were not, in fact, trending? b. Please provide a list of all news stories removed from or injected into the Trending Topics section since January 2014.” This approach, I think, does emphasize to Facebook that these choices are significant, enough so that they should be treated as part of the public record and open to scrutiny by policymakers or the courts. This is a way of demanding that Facebook take its role in this regard more seriously.
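
To make concrete why there is no natural, “unmanipulated” baseline to compare against, here is a toy sketch, in Python, of how a trending score might be computed. This is emphatically not Facebook’s actual system; every name and constant in it is my own illustration. The point is that the time window, the baseline period, the popularity floor, and the blocklist are all judgment calls someone has to make:

    # Hypothetical sketch only: a naive "trending" detector.
    # Every constant below is an editorial choice, not a fact of nature.
    from collections import Counter

    RECENT_WINDOW_HOURS = 3      # how recent counts as "now"
    BASELINE_WINDOW_HOURS = 72   # what counts as "normal" activity
    MIN_MENTIONS = 500           # an arbitrary popularity floor
    BLOCKED_TOPICS = {"spam"}    # any blocklist is itself a curatorial decision

    def trending_candidates(recent_mentions, baseline_mentions):
        """Return topics whose recent activity outpaces their baseline rate."""
        recent = Counter(recent_mentions)      # e.g. ["nba finals", "nba finals", ...]
        baseline = Counter(baseline_mentions)
        candidates = []
        for topic, count in recent.items():
            if topic in BLOCKED_TOPICS or count < MIN_MENTIONS:
                continue
            # Expected mentions in the recent window if activity were "normal"
            expected = (baseline[topic] + 1) * RECENT_WINDOW_HOURS / BASELINE_WINDOW_HOURS
            surge = count / expected
            candidates.append((surge, topic))
        # Ranking, cutoff, and what curators do with this list are further judgments
        return [topic for surge, topic in sorted(candidates, reverse=True)]

Change any of those constants and a different set of topics “trends.” The algorithm can only surface candidates; deciding what those candidates mean, and which become news, is the human, editorial part.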

Astro Noise: A survival guide for living under total surveillance

May 9, 2016

Documentary filmmaker Laura Poitras’s exhibit at the Whitney Museum presented an immersive installation covering issues of mass surveillance, the war on terror, Guantánamo Bay, occupation, the US drone program, and torture. Some of these issues have been investigated in her films, including Citizenfour, which won the 2015 Academy Award for Best Documentary, and in her reporting, which was awarded a 2014 Pulitzer Prize.

With that came Astro Noise: A Survival Guide for Living Under Total Surveillance, in which Poitras invited authors ranging from artists and novelists to technologists and academics to respond to the modern-day state of mass surveillance. Among them are author Dave Eggers, artist Ai Weiwei, former Guantánamo Bay detainee Lakhdar Boumediene, MSR SMC researcher Kate Crawford, and Edward Snowden. Some contributors worked directly with Poitras and the archive of documents leaked by Snowden; others contributed fictional reinterpretations of spycraft. The result is a “how-to” guide for living in a society that collects extraordinary amounts of information on individuals. A few excerpts from the different collaborators:

———————————————————————-

Laura Poitras –> Her chapter is called “Berlin Journal,” written between 2012 and 2013, when she had relocated to Europe so she could work more freely, without fear of having her material seized when she entered the US.

Feb 11, 2013

I read the news for fear of an arrest. It still could be a shakedown targeting Julian or Jake. Watching what I’ll do with the material. It really is a drama to understand the possible motivations/goals. I take it at face value, but why? He could have approached the NYT or the Washington Post for maximum exposure. Why reach out to a filmmaker? Because I’ve been targeted? Because he has already gone down other paths? Because he doesn’t have what he claims? (p. 86)

Kate Crawford –> Asking the Oracle

Kate compares the ancient Greek Delphic Oracle, which placed restrictions on acquiring knowledge, to the unrestricted vastness of information provided by total surveillance.

So the Oracle, as a technology, set up particular restrictions and limitations. The information flow was restricted by the number of people who could visit the Oracle, by how many questions they could ask, and by the cryptic nature of the responses they received. In this sense there is a strange similarity with the Snowden archive. The person seated before the search box must decide what to ask next and try to exercise restraint so as not to be drawn into thousands of documents and stories and systems. But in another sense, when analysts consult the database inside the fortresses of the NSA and the GCHQ, there seems to be little respect for limits beyond the strictures of policy. Everything that can be captured will be. The archive is an epic testament to information acquisition, overreach, and confidence. It’s as though the guiding principles of Delphi were reversed. Know Everyone. Everything in Excess. Just keep pledging that all the necessary protections are in place. (p. 143)

Edward Snowden –> Astro Noise

With the right antenna, we can hear the universe’s radio noises. The stars themselves (or so it’s been theorized) can provide us an unpredictable source of information that will never be heard again in the same way. As the world turns, our antenna sweeps the vastness of the universe at a given point in time. The signals that we receive constitute an ever-changing key forged from the sky itself. Such a key could only be imitated by an agent listening from that exact same place, in that same direction, at the same time, to those exact same stars. (p. 121)

Cory Doctorow –> The Adventure of the Extraordinary Rendition

In his chapter, Cory Doctorow tells a story of Sherlock Holmes in the age of the NSA.

It’s life in prison if I go public, Mr. Holmes. These kids, their parents are in the long-term XKeyscore retention, all their communications, and they’re frantic. I read their emails to their relatives and each other, and I can only think of how I’d feel if my son had gone missing without a trace. These parents, they’re thinking that their kids have been snatched by pedos and are getting the Daily Mail front-page treatment. The truth, if they knew it, might terrify them even more. Far as I can work out, the NSA sent them to a CIA black site, the kind of place you wouldn’t wish on your worst enemy. The kind of place you build for revenge, not for intelligence.

We’re all selfish, superficial and too fat? TEDx talk by Kat Tiidenberg

May 2, 2016

This is the video and transcript of my TEDx talk at TEDxTTU in April 2016. It’s about body image, the consumer economy, and selfies.

*

I have some sayings here; let’s do a show of hands if you’ve heard these: “don’t judge a book by its cover” or “beauty is only skin deep.” The point seems to be that we shouldn’t be judged based on how we look. Is that true? That we are more than our appearances, more than our bodies. Do you agree?

Let’s do another show of hands. During the past week, how many of you looked in the mirror and wished for something to be different? To be a little taller, or a little thinner – just, you know, the belly, or the thighs. Maybe you looked and wished to be more muscular or younger? To have smoother skin?

It seems we are at an impasse. We don’t think we should be judged by our looks, but we quite harshly judge ourselves based on them. We think beauty is only skin deep, but we spend a lot of time, effort and money on trying to make ourselves look better, thus constantly engaging in something that is supposedly trivial. And it’s not just you and me either – according to the American Society of Plastic Surgeons, butt implants were the fastest-growing type of cosmetic surgery in 2015. On average, there was a butt implant procedure every 30 minutes of every day. When I search for “love your body” just in Amazon Books, I find 14,399 results. 14,000 titles just to help us get comfortable in our own skin. Clearly we need a lot of help.

So the relationship we have with our bodies seems best described as tense. Why is that? Some say it is because we’re self-centered, narcissistic and superficial. I don’t think so. I also have some ideas on how to soothe this tension. To explain those ideas, I will use the example of something many people think is self-centered, narcissistic and superficial – selfies.

Read more…
