Data-Centric Decision Making: Assessing Success and Truth in the Age of the Algorithm

The New York Times published an article about the state of white-collar workers at Amazon last weekend, and everyone has been abuzz about it since. I know I’m a bit late to this discussion, but I do have a day job. Anyway, the article portrays a toxic environment, where workers are pressured to work through health crises and are encouraged to tattle on each other through online apps. There is a lot to talk about in the article, and honestly, I’m not sure how much new ground I am going to cover. However, I feel the need to take this opportunity to talk about data-centric decision making, what it means for labor, and what it means for our understandings of truth, fact, and assessment. That is, of course, a lot to discuss, and I will not be able to adequately cover all of these ideas, but with the article in the public sphere right now, it seems like a good time to discuss what ‘data’ can mean, especially in relation to preexisting power structures.

First and foremost, Amazon employs data-centric management. Productivity is calculated and shared. An employee’s performance is directly tied to quantifiable actions – the number of Frozen dolls bought, the number of items left unpurchased in the cart, etc. Being good at your job, in this situation, isn’t just about being competent; it is about constantly proving competence through data. This idea is not terrible at its root. For a couple of reasons, I kind of dig it. According to this 2005 study, creating quantifiable criteria can help managers avoid discriminating against women in hiring practices. Creating specific data points, in this situation, can help managers ensure that they view candidates in an equitable manner. I am sure you could find more examples where deferring to ‘data’ or quantifiable criteria for job performance helps people who have been historically discriminated against in the workplace.

But what happens when you stop paying attention to how those criteria are quantified? What happens when ‘data’ becomes code for ‘objective truth’ rather than what it is – a human-constructed method for measuring reality that can fall prey to discriminatory practices in the same way as any other mode of assessment? An algorithm is not a God-given measurement of truth; it is subject to the same prejudices and flaws as any other human invention.

I will give you an example to demonstrate how this phenomenon can occur. Safiya Umoja Noble, a professor in UCLA’s Department of Information Studies, has written extensively on how search engines portray women of color. In an article for Bitch Magazine, she describes what happened when her students searched for ‘black girls’ online. SugaryBlackPussy.com was the first result. To be clear, the students did not mention porn in their search. This website was the first result for the simple query ‘black girls.’ She describes similar results for ‘Latina girls’ and other women of color. How could this happen? Should porn really be the first result for ‘black girls’? What about Google’s algorithm determined porn to be the most relevant search result?

After a moment’s thought, the answer seems obvious. Google’s algorithm appears to take popularity into consideration when sorting results, meaning that if more people click on porn, porn rises in the results. Of course, Google’s algorithm is proprietary and secret, but this assumption does not seem outlandish. There is a lot to be said about what using popularity as a criterion for search engine results means for the representation of minorities, but it is best to read Noble’s work for a thorough discussion of those matters. You’ll learn a lot more that way than if I try to summarize her work for you. Instead, I would like to make a simple point: the search engine is not infallible. It is a human-designed device that reflects preexisting human priorities. When those priorities are problematic, so are the search results.
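
To make that feedback loop concrete, here is a toy simulation in Python. To be clear, this is not Google’s algorithm – that code is secret – just a sketch of what ranking purely by clicks does to a small early lead:

```python
import random

# Toy model of popularity-based ranking. Each result starts with a
# click count; the ranker orders results by clicks, and users mostly
# click whatever is ranked highest, which feeds back into the ranks.
clicks = {"result_a": 105, "result_b": 100, "result_c": 95}

def rank(clicks):
    """Return results ordered by click count, most-clicked first."""
    return sorted(clicks, key=clicks.get, reverse=True)

random.seed(1)
for _ in range(1000):  # simulate 1,000 searches
    ranking = rank(clicks)
    # Position drives attention: the top slot gets 60% of clicks,
    # the second 30%, the third 10%, regardless of relevance.
    chosen = random.choices(ranking, weights=[0.6, 0.3, 0.1])[0]
    clicks[chosen] += 1

print(rank(clicks))
print(clicks)
```

Run it, and result_a’s five percent head start snowballs into a commanding lead. The ranking reflects past clicking behavior and nothing else; whatever people clicked yesterday is what gets shown first tomorrow.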

In the ‘information age,’ or whatever we are calling it at this point, this means that preexisting priorities can be amplified in a way they were not before. More people see these results, and more people can be influenced by them. The search engine does not necessarily provide truth, accuracy, or expertise. Instead, it can provide a magnified version of the problematic, inaccurate, and hurtful representations that have been created and enforced over time. The algorithm is not truth; the algorithm is media.

Back to Amazon. It may feel like I’ve ventured far from that New York Times article, but I haven’t really. As with a search engine, the methods of measuring worker productivity are as subject to human fallibility as any other way of measuring output. As humans create new ways to measure success, those measurements will reflect preexisting notions of what success means and what a successful person looks like. When this phenomenon is hidden behind the perceived objectivity of numerical assessment, it is more difficult to argue against. When a manager can simply point to the ‘data’ rather than having an in-depth conversation about what worker output should look like, the workers themselves are left at a loss. To participate in negotiations at this level, workers either need a high-level understanding of how the criteria for success are quantified, or they need to excel within the manager’s data-centric assessment system. And, clearly, excelling in a manager-designed assessment system may not best serve the needs of the worker.
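
To see why that matters, here is a hypothetical productivity score, again in Python. Nothing below is Amazon’s actual metric; the metrics and weights are invented precisely to show that they are choices:

```python
# A made-up productivity metric. The score looks objective, but every
# weight is a human decision about what 'success' means.

def productivity_score(worker, weights):
    return sum(weights[metric] * value for metric, value in worker.items())

workers = {
    "worker_a": {"items_shipped": 90, "emails_answered": 10, "errors_caught": 40},
    "worker_b": {"items_shipped": 50, "emails_answered": 60, "errors_caught": 70},
}

# One manager's weights reward raw throughput...
speed_first = {"items_shipped": 1.0, "emails_answered": 0.2, "errors_caught": 0.1}
# ...while a different, equally 'data-driven' scheme rewards care.
care_first = {"items_shipped": 0.2, "emails_answered": 0.3, "errors_caught": 1.0}

for name, worker in workers.items():
    print(name,
          productivity_score(worker, speed_first),  # worker_a: 96.0, worker_b: 69.0
          productivity_score(worker, care_first))   # worker_a: 61.0, worker_b: 98.0
```

Same numbers, different math, different ‘best’ employee. Pointing at the score settles nothing; the argument is hiding in the weights.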

There is a lot more to talk about here, but it will have to wait for another day. I’ll just leave you with one suggestion: it is time we stop asking to see the numbers and start asking to see the math.

Vending Machines, Users, and My Cheez-It Problem

I have become very interested in vending machines. Vending machines have been used to sell candy, cigarettes, stamps, bicycle parts, socks, and live bait. People have even created library book vending machines, which is pretty cool.

Let’s get to the point though. The other day, I tried to get a bag of Cheez-Its from the vending machine in the NLM canteen. The process was greatly impeded by a new digital interface, and I cannot stop thinking about the experience.

Let’s get some pictures here:

From the Minnesota Historical Society Flickr account: https://www.flickr.com/photos/minnesotahistoricalsociety/3468655544/

This is obviously an older vending machine and not the one I tried to get Cheez-Its from. It is entirely mechanical and dates from the 1960s. You get your candy using levers and coin slots. I like the color, and I like the prices.

Now, here’s a picture of the one in the NLM canteen:

NLM canteen vending machine

It is important to note that parts of the design have not changed. We have a large machine with a glass front and a mechanism to tell the machine what you want. What has changed is the mechanism and the exact nature of the human-machine communication.

The first thing I noticed when trying to get my Cheez-Its was that the prices are not listed near the food. I’m a price-conscious snacker, and I found this design feature stressful. The NLM canteen vending machine is far less transparent about its pricing than the old machine pictured earlier. I had to click through the interface three times before I could find the price of a snack, which made comparison shopping very difficult. Comparison shopping on the 1960s machine only required a quick glance at the glass. In this way, clearly communicating pricing is accomplished more effectively and efficiently with less advanced technology.

Not to get too far off the topic of vending machines, but I see this as part of a larger trend toward opaque pricing schemes and the overall blackboxing of technology. As our technology becomes more advanced, the exact mechanisms by which it works are rendered more invisible, more indecipherable. And as the technology becomes more convoluted, companies can take advantage of that general lack of understanding to price their goods – or the goods their technology helps sell – in ways consumers would not accept if they better understood how the technology, and the systems surrounding it, actually function. But I think Latour and blackboxing will be another blog post for another time. Back to vending machines!

The interface on the NLM canteen vending machine uses a grocery store cart as a design metaphor, presumably to help users navigate its overly complicated process for getting snacks. At first, I thought this was strange. Yes, one buys food at grocery stores and at vending machines, but one rarely equates the two actions. I would never need a grocery cart at a vending machine, and I eat from vending machines pretty often.

Bring your cart to carry your Cheez-Its

But while I was ranting about this vending machine to fellow NDSR resident Valerie Collins (I told you I haven’t been able to stop thinking about it), she pointed out something I hadn’t considered. The shopping cart design is not taken from a grocery store; rather, the metaphor is taken from internet shopping sites, like Amazon.

Screenshot from August 14, 2015

The resemblance seems pretty clear, and since the interface is digital, I am inclined to think that the designer borrowed the idea for the shopping cart from internet sites rather than from physical grocery stores.

What we see here, then, is a design metaphor taken from the physical world (grocery stores), implemented online (at Amazon and other sites), and then re-implemented on a physical machine with a digital interface. This vending machine illustrates a feedback loop in design practices that, in the end, may not produce better-designed goods. If I have not made it clear already, I hate this vending machine. I find it overly complicated, unhelpful, and generally poorly conceived. The digital interface does not make the machine friendlier to users, and I cannot imagine it is easy to repair, either.

So the question becomes: why switch to a digital interface when it does not serve users well? Why design with more advanced tools if those tools are not intrinsically helpful for the use case? Perhaps the new digital interface serves the needs of the company in some way that is not apparent to me. Does it produce tracking data that the company finds helpful? I don’t know. But I will be sure to consider my experience with the NLM canteen vending machine as I plan and design tools for library and archive users. Sometimes the best tool is not the most advanced one.

One last lesson learned: people look at you like you are crazy when you take cell phone photos of vending machines.