Data-Centric Decision Making: Assessing Success and Truth in the Age of the Algorithm

The New York Times published an article about the state of white-collar workers at Amazon last weekend, and everyone has been abuzz about it since. I know I’m a bit late to this discussion, but I do have a day job. Anyway, the article portrays a toxic environment, where workers are pressured to work through health crises and are encouraged to tattle on each other through online apps. There is a lot to talk about in the article, and honestly, I’m not sure how much new ground I am going to cover. However, I feel the need to take this opportunity to talk about data-centric decision making, what it means for labor, and what it means for our understandings of truth, fact, and assessment. That is, of course, a lot to discuss, and I will not be able to adequately cover all of these ideas, but with the article in the public sphere right now, it seems like a good time to discuss what ‘data’ can mean, particularly in relation to preexisting power structures.

First and foremost, Amazon employs data-centric management. Productivity is calculated and shared. An employee’s performance is directly tied to quantifiable actions – the number of Frozen dolls bought, the number of items left unpurchased in the cart, etc. Being good at your job, in this situation, isn’t just about being competent; it is about constantly proving competence through data. This idea is not terrible at its root. For a couple of reasons, I kind of dig it. According to this 2005 study, creating quantifiable criteria can help managers avoid discriminating against women in hiring practices. Creating specific data points, in this situation, can help managers ensure that they evaluate candidates in an equitable manner. I am sure you could find more examples where deferring to ‘data’ or quantifiable criteria for job performance helps people who have been historically discriminated against in the workplace.
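To make the idea concrete, here is a minimal sketch of what a quantified performance score might look like. The metric names and weights are entirely hypothetical; the article does not describe Amazon’s actual formulas, and this is only meant to show how a worker’s day gets collapsed into a single number.

```python
# A minimal, hypothetical sketch of a quantified performance score.
# The metrics and weights below are invented for illustration; they are
# not Amazon's actual criteria, which the article does not disclose.

WEIGHTS = {
    "units_sold": 1.0,        # e.g., the Frozen dolls that got bought
    "carts_abandoned": -0.5,  # items left unpurchased in the cart
    "tickets_resolved": 0.8,
}

def performance_score(metrics: dict) -> float:
    """Collapse a worker's tracked actions into a single number."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

worker = {"units_sold": 120, "carts_abandoned": 30, "tickets_resolved": 15}
print(performance_score(worker))  # 120 - 15 + 12 = 117.0
```

Notice where the judgment actually lives: not in the raw counts, but in the weights someone chose for them. The worker typically sees only the final number.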

But what happens when you stop paying attention to how those criteria are quantified? What happens when ‘data’ becomes code for ‘objective truth’ rather than what it is – a human-constructed method for measuring reality that can fall prey to discriminatory practices in the same way as other modes of assessment? An algorithm is not a God-given measurement of truth; it is subject to the same prejudices and flaws as any other human invention.

I will give you an example to demonstrate how this phenomenon can occur. Safiya Umoja Noble, a professor in UCLA’s Department of Information Studies, has written extensively on how search engines portray women of color. In an article for Bitch Magazine, she describes what happened when her students searched for ‘black girls’ online. SugaryBlackPussy.com was the first result. To be clear, the students did not mention porn in their search. This website was the first result for the simple query, ‘black girls.’ She describes similar results for ‘Latina girls’ and other searches for women of color. How could this happen? Should porn really be the first result for ‘black girls’? What about Google’s algorithm determined porn to be the most relevant search result?

After a moment’s thought, the answer seems obvious. Google’s algorithm appears to take popularity into consideration when sorting results, meaning that if more people click on porn, porn rises in the results. Of course, Google’s algorithm is proprietary and secret, but this assumption does not seem outlandish. There is a lot to be said about what using popularity as a criterion for search engine results means for the representation of minorities, but it is best to read Noble’s work to get a thorough discussion of those matters. You’ll learn a lot more that way than if I try to summarize her work for you. Instead, I would like to make a simple point: the search engine is not infallible. It is a human-designed device that reflects preexisting human priorities. When these priorities are problematic, so are the search results.
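As a rough illustration of the dynamic – emphatically not a description of Google’s actual, proprietary algorithm – here is a toy ranker that orders results purely by click counts. Every click makes a result more likely to be shown first, which in turn makes it more likely to be clicked again.

```python
# A toy, hypothetical popularity-weighted ranker. This is NOT Google's
# algorithm (which is proprietary); it only illustrates how ranking by
# click counts amplifies whatever is already being clicked.

from collections import defaultdict

click_counts = defaultdict(int)  # result URL -> number of clicks so far

def record_click(url: str) -> None:
    click_counts[url] += 1

def rank(results: list) -> list:
    """Order results by how often they have been clicked."""
    return sorted(results, key=lambda url: click_counts[url], reverse=True)

results = ["site-a.example", "site-b.example", "site-c.example"]
record_click("site-c.example")   # one early burst of clicks...
record_click("site-c.example")
print(rank(results))             # ...and site-c now leads the list,
                                 # which invites still more clicks on it
```

The point of the sketch is the feedback loop: whatever people already click gets promoted, and promotion generates more clicks.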

In the ‘information age,’ or whatever we are calling it at this point, what this means is that preexisting priorities can be amplified in ways they were not before. More people see these results, and more people can be influenced by them. The search engine does not necessarily provide truth, accuracy, or expertise. Instead, it can provide a magnified version of the problematic, inaccurate, and hurtful representations that have been created and enforced over time. The algorithm is not truth; the algorithm is media.

Back to Amazon. It may feel like I’ve ventured from that New York Times article, but I haven’t, really. As with a search engine, the methods of measuring worker productivity are as subject to human fallibility as other ways of measuring output. As humans create new ways to measure success, those measurements will reflect preexisting notions of what success means and what a successful person looks like. When this phenomenon is hidden behind the perceived objectivity of numerical assessment, it is more difficult to argue against. When a manager can simply point to the ‘data’ rather than having an in-depth conversation about what worker output should look like, the workers themselves are left at a loss. In order to participate in negotiations at this level, workers either need to have a high-level understanding of how the criteria for success are quantified or they need to excel within the manager’s data-centric assessment system. And, clearly, excelling in a manager-designed assessment system may not best serve the needs of the worker.
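To see why that matters, consider one more small sketch, with invented workers, metrics, and weights. The raw data never changes; only the weights the manager picked do, and a different person comes out ‘on top’ each time.

```python
# Hypothetical illustration: identical raw data, different "top performer"
# depending on weights only the manager sees. All names, metrics, and
# weights here are invented for this example.

workers = {
    "Ana":   {"speed": 95, "accuracy": 70},
    "Birch": {"speed": 70, "accuracy": 95},
}

def score(metrics: dict, weights: dict) -> float:
    return sum(weights[k] * v for k, v in metrics.items())

speed_first    = {"speed": 0.8, "accuracy": 0.2}
accuracy_first = {"speed": 0.2, "accuracy": 0.8}

for weights in (speed_first, accuracy_first):
    ranked = sorted(workers, key=lambda w: score(workers[w], weights), reverse=True)
    print(ranked[0])   # "Ana" under one weighting, "Birch" under the other
```

The ‘data’ is the same in both runs; what changes is the math behind it, and that math is exactly what workers are rarely shown.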

There is a lot more to talk about here, but it will have to wait for another day. I’ll just leave you with one suggestion: it is time we stop asking to see the numbers and start asking to see the math.
