MOLLY MCHUGH
BUSINESS 03.10.16 07:00 AM
Uber and Lyft Drivers Work Dangerous Jobs—But They're on Their Own
Harry Campbell. CHRISTIE HEMM KLOK/WIRED
Harry Campbell was driving a man home one night when, upon stopping at a light, the passenger stripped off all of his clothes, ran naked around the car, and then got back in as if nothing at all had happened. Odd, yes, but just another night as an Uber driver. Even now, Campbell is nonplussed. “I think it was a dare,” he says. “Every driver has a story like that.”
Or worse. Uber horror stories are nothing new. But most of them are stories about
passengers victimized by drivers. Such headlines are hardly exclusive to Uber: plenty of
sharing economy ventures bring their share of cautionary tales.
What gets far less attention is the abuse, verbal and physical, that drivers endure. In
November, a shocking video of a drunken Taco Bell executive beating an Uber driver went
viral. More recently, a witness filmed a Miami doctor trying to kick a driver before trashing
his car. And these are just a few incidents that made headlines. No matter what you call it,
providing rides to strangers carries the risk of harassment and violence—it's why your
parents told you never to pick up hitchhikers. But while the risk to passengers of using
ridesharing services has been widely debated, the risk to drivers has been largely ignored.
Just how great a risk drivers face is difficult to quantify. Because the ridesharing industry is
so new, and laws regulating it so patchwork, official figures are tough to come by, and the
big companies don't share specifics about incidents their drivers report. Still, online forums
for drivers brim with descriptions of attacks on drivers by passengers, both verbal and physical; one driver even posted a video of being spit on and punched.
You might think ridesharing companies would be doing everything they can to ensure
driver safety. But it turns out what they can do is limited by the kind of businesses they are.
Because drivers operate as independent contractors instead of employees, the companies
can't offer true safety training. Under federal law, training is a signifier that someone is an
employee, and both Uber and Lyft have fought bitterly against re-classifying drivers as
employees. By the very nature of how on-demand businesses operate today, drivers in
many ways have to go it alone.
https://youtu.be/o1EzZCBl8Cg
Hard Numbers
When it comes to threats to driver safety, Lyft says it "keeps detailed records" whenever it's
contacted about a ride-related incident. Uber also says it tracks incidents involving the
safety of drivers. But the companies declined to share specific numbers.
Still, if ridesharing companies don't make their figures public, federal regulators do. "Taxi drivers are over 20 times more likely to be murdered on the job than other workers," the US Occupational Safety and Health Administration said in 2010. In a 2014 report, the Bureau of Labor Statistics found that of 3,200 taxi drivers who were hurt or killed on the job, 180 sustained injuries caused by a violent person—about 5.6 percent.
It isn't that ridesharing companies aren't aware of the risk. Seemingly in response to some of the more outrageous recent incidents, Uber recently tested a “toy” intended to distract drunk, obnoxious passengers: a Bop-It, a puzzle-type game that drivers put in their backseats.
"Our pilot with Bop-Its, we thought, 'OK, maybe in certain contexts it would be a good idea to entertain people so they're in a better mood and ... going in a direction that might not be helpful,'" says Joe Sullivan, Uber's chief security officer. In other markets, he says, Uber is testing mirrors that face passengers—the idea is that seeing yourself behaving like an ass might prompt you to stop behaving like an ass. Uber concedes these ideas might sound silly, but the point is that it's constantly seeking high- as well as low-tech ways of keeping everyone in the car safe.
But some drivers remain skeptical. "It’s kind of stupid to think they can pacify a bunch of
drunk passengers with a Bop-It versus investing in real safety measures," says Campbell,
who runs The Rideshare Guy, a popular blog about driving for Uber and Lyft.
Drunk Girl Tries To Hijack An Uber and Destroys His C…
Fending for Themselves
Of course, drivers do have some control over just how much risk they take on. They can
choose not to work in the wee hours or to avoid those parts of town where they may not
feel safe. They can also try not to pick up passengers at bars and other locales. And many
drivers do just that, even though it may cut into their pay.
But these precautions don't guarantee that drivers won't find themselves in a sketchy situation. The geolocation features in the Uber and Lyft apps aren't always 100 percent precise; a driver who thinks he's headed toward a well-lit location may find himself instead driving down a dark alley. And abusive jerks don't come out only at night, nor are they found only in bars. As it turns out, picking up strangers in your car is an inherently risky job.
And that leaves drivers to fend for themselves—which in a sense also makes them the real experts on their own safety, at least those who've put in time on the road. "They're the ones who've been in the cars for tons of nights, and they're the people we want to learn from and help them connect with other drivers," says Uber's Sullivan. "We see it happening informally at driver support centers and in forums."
But that's not enough for some drivers.
"When you get into a taxi, there’s a reason there’s plexiglass between you and the driver,"
Campbell says.
Reputation for Safety
When app-based ridesharing started, companies pitched themselves as better than cabs in
every way—friendlier, cleaner, and safer. Logins via Facebook or the apps themselves
provided a measure of comfort for everyone, because they made drivers' and passengers'
identities known to each other. Rating systems were intended to provide further peace of
mind. If someone was a jerk, whether driver or passenger, eventually they'd be booted off
the platform.
But ridesharing has exploded in popularity since then, and those reputation-based safety
measures aren't keeping pace. It's one thing when you have a small group of passengers
and drivers tracking each other. But when countless drivers and passengers are joining and
leaving the system every day, reputation-based systems become less compelling. It's
entirely possible that a four or five-star passenger has a bad day or too much to drink. And
drivers, drawn by the lure of surge pricing, might put aside their reservations and decide to
pick up that obviously intoxicated guy at 2:30 am.
To keep up with demand, to grow at the pace expected of venture-backed tech startups,
and to compete with each other, Uber and Lyft are in a constant race to recruit and retain
drivers. And some drivers say that haste can make their own safety feel like an
afterthought. Drivers get a few tips on how to look out for themselves, but these are easily
overlooked or soon forgotten in the rush to get more drivers on the road.
"Uber does no training at all. I never felt safe driving for Uber," says one former driver who asked not to be identified for fear of jeopardizing his current job and the possibility of going back to driving.
The driver, who says he has worked in Seattle and Southern California, said he carried a gun for safety while driving in Washington State, where he had a concealed carry permit. He quit carrying a gun upon moving to California because the state doesn't allow it.
The driver says he often drove in the same area in and around Newport Beach where the
Taco Bell exec allegedly attacked Uber driver Edward Caban. (The executive, Benjamin
Goldman, is suing Caban for $5 million, claiming Caban illegally recorded the assault.) He
called it quits shortly after hearing about the attack. "At that point, it wasn’t worth it for me
at all."
Ride Sense
Old-school taxi drivers know certain safety-related tricks of the trade, like turning off the
car, grabbing your keys, and stepping out of the vehicle before kicking out a passenger—
that way they can't attack you from behind. Many cities require cab companies to expressly
inform drivers about the risks associated with driving a cab and how to handle violent or
unruly passengers. Cab drivers may receive fairly rigorous training, which includes a
discussion of safety. Taxis themselves are often fitted with standard safety precautions such
as plexiglass dividers and video cameras. Some also have GPS units installed directly in
vehicles, which are much harder to remove or switch off than GPS in a phone.
San Francisco law requires that taxis come equipped with video cameras and that cabs
advertise clearly that passengers are being recorded, says Bob Cassinelli, a spokesman for
Yellow Cab San Francisco. “We take the approach that nobody wants to be seen behaving
badly on a camera and tell people, 'Look you’re being recorded, keep that in mind,’" he
says. "We approach these things on a preventative basis.” The company says that assaults
against drivers declined after it started installing cameras in cars.
Ridesharing companies are less concrete when describing precautions taken on behalf of
drivers.
Lyft spokeswoman Alexandra LaManna says safety is a "top priority."
"For drivers who feel uncomfortable with their passenger, we encourage them to stop and end the ride," she says. "We also have a trust and safety team available 24/7 for emergencies and a dedicated critical response line to immediately reach specially trained experts on the phone."
Dorothy Chou, who works on Uber's public policy team, says safety is built into the product, providing a level of protection to drivers that didn't exist before, largely thanks to the app. She points to standard features meant to prioritize driver safety, including GPS tracking and Uber's ratings system, which allows drivers to know who's in a vehicle and whether a passenger is a problem. Cashless payments, meanwhile, reduce the possibility of being robbed on site.
She also says Uber gives drivers tips on high-traffic holidays like Halloween and New Year's
Eve on how to handle unruly riders. Still, the recent experiments with Bop-Its and mirrors
suggest Uber is aware there's more to be done. Among other things, the company is hiring a
behavioral scientist to focus on safety.
Safety Limits
Still, ridesharing companies' independent contractor business models only allow them to
do so much to ensure driver safety. True training would put that employment classification
at risk and bolster claims that drivers should be made full employees. And employee status would have huge financial implications for the companies for things like
unemployment benefits, health insurance, taxes, lawsuits, and liability, says Stefani
Johnson, an assistant professor of human resources at the University of Colorado's Leeds
School of Business.
"The more control ... a company has over its workers, the more likely a court is to uphold
that those workers are employees rather than independent contractors," says Johnson.
"Offering training to employees enhances the employee-employer relationship because the
company has greater control over the drivers."
What might better safety measures look like? Campbell suggests offering drivers free or
heavily subsidized dash cams—something he's long suggested every Uber driver buy.
(Sullivan says Uber is always looking at new pilots, but hasn't decided to do one with dash
cams yet.) Companies can also make absolutely clear to passengers that abusing drivers in
any way will not be tolerated and will get them quickly banned—and not just when a video
of a drunken moron attacking a driver goes viral.
"I do think with the high publicity stuff, they take the driver’s side really quickly," says
Campbell of Uber's handling of the Taco Bell exec and Miami cases. "They support the
driver, they kick the passenger off the platform."
But for the everyday cases that don't get thousands of views, he believes ridesharing
companies can do more: "They haven’t really put their money where their mouth is." The
only problem: doing more could cost them a lot more money.
Session: Smart Tools, Smart Work
CHI 2013: Changing Perspectives, Paris, France
Turkopticon: Interrupting Worker Invisibility
in Amazon Mechanical Turk
Lilly C Irani
UC Irvine, Department of Informatics
Irvine, CA 92697
lirani@ics.uci.edu
M. Six Silberman
Bureau of Economic Interpretation
six@economicinterpretation.org
ABSTRACT
As HCI researchers have explored the possibilities of human computation, they have paid less attention to ethics and values of crowdwork. This paper offers an analysis of Amazon Mechanical Turk, a popular human computation system, as a site of technically mediated worker-employer relations. We argue that human computation currently relies on worker invisibility. We then present Turkopticon, an activist system that allows workers to publicize and evaluate their relationships with employers. As a common infrastructure, Turkopticon also enables workers to engage one another in mutual aid. We conclude by discussing the potentials and challenges of sustaining activist technologies that intervene in large, existing socio-technical systems.
Author Keywords
Activism; infrastructure; human computation; Amazon Mechanical Turk; design; ethics
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
INTRODUCTION
Crowdsourcing and human computation are often described as a new frontier for HCI research and creativity, and for technological progress more broadly. CHI researchers have built word processors powered by crowds. Others have shown how usability and visualization evaluations can be taken out of the lab and into the natural environments of crowdworkers.
These frontiers, however, are enabled by the novel organization of digital workers, distributed across the world and organized through task markets, APIs, and network connections. This paper looks behind the walls of abstraction that enable human computation in one specific system, Amazon Mechanical Turk (AMT). We present workers' occupational hazards as human computers, and explain the activist project we developed in response. Turkopticon, the project we present, is a tool in its fourth year of deployment. The system receives 100,000 page views a month and has become a staple tool for many AMT workers, installed over 7,000 times at time of writing.
Turkopticon allows workers to create and use reviews of employers when choosing employers on AMT. Building and maintaining the system, as well as communicating about the system with workers, has offered us a distinct vantage point into the social processes of designing interventions into large-scale, real world systems. Turkopticon supports a thriving collective of workers engaged in mutual aid, brought together by our simple browser extension and web-based technology.
This paper makes several contributions. First, it offers a case study designing an intervention into a highly distributed microlabor system. Second, it shows an example of systems design incorporating tools of feminist analysis and reflexivity. Rather than conducting HCI research to reveal and represent values and positions, and then building systems to resolve those political differences, we built a system to make worker-employer relations visible and to provoke ethical and political debate. Third, this paper contributes lessons learned from intervening in existing, large-scale socio-technical systems (here, AMT) from its margins.
METHOD AND OUR STANCE
This paper draws on four years of participant-observation as design activists within AMT worker and technologist communities. Turkopticon grew out of a tactical media art project intended to raise questions about the ethics of human computation. Tactical media, one tradition within activist art, emphasizes developing urgent, culturally provocative interruptions and resistance through the design of media [13, 19, 21]. In addition to the interviews, observation, and casual conversation that feature in many HCI ethnographies, our encounters with Turk workers began through highly mediated “Human Intelligence Tasks” and feedback around Turkopticon. (We began this research in 2008, prior to the growth of popular online worker forums turkernation.com and mturkforum.com.)
We conducted several informal surveys through Mechanical Turk. 67 respondents answered our open-ended question survey about what they would desire as a “Workers’ Bill of Rights.” Points of agreement among worker respondents on this survey became the basis for the design of Turkopticon.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CHI 2013, April 27–May 2, 2013, Paris, France.
Copyright © 2013 ACM 978-1-4503-1899-0/13/04...$15.00.
The first author complements participant-observation, as a system builder of Turker tools, with observation and open-ended interviews with AMT employers. She attended a major crowdsourcing conference as well as two smaller crowdsourcing meetups. She also conducted open-ended interviews with four employers on AMT and numerous conversations with other employers. These ethnographic data contextualize the data we generate as we design and maintain Turkopticon.
Over the course of this research, each of our stances developed as a result of our own involvement with the workers through the project, and through our evolving understandings of the broader crowdsourcing community. We began highly critical of the fragmentation of labor into hyper-temporary jobs, seeing them as an intensification of decades-old US trends toward part-time, contingent work for employer flexibility and cost-cutting [3, 35]. AMT, it seemed to us, produced temporary employees at “the speed of thought,” to borrow Bill Gates’ promissory turn of phrase, precisely by forgetting about ergonomics, repetitive stress injuries, and minimum wage laws. We were biased – decidedly so.
Our biases were validated by some workers and challenged by others. For each one who reported needing the money to pay for rent or groceries, there was another who did it for fun or to “kill time.”
We highlight our own stances under advisement of Borning and Muller who, with many feminist scholars, call for researchers to shed trappings of objective authority and account for how our own contexts and assumptions shape our research practices [7]. However, it is not only that our biases distort our perception of reality that is out there in the world of AMT work. Certainly, we have much to learn about how workers feel about their work and the problems they encounter, as we have published. But we also intervene in AMT by building a technology used by its workers. By intervening in the system as designers and as observers, we change the reality of the system itself [4, 29, 41]. The ethical challenges and issues faced by workers, and the ethical issues we face as researchers, are produced in the encounters between us, the workers, and Turkopticon. This paper offers a snapshot of the lessons we have learned and their implications for design practice at this point in the evolving socio-technical system.
First, we explain AMT, focusing on the kinds of worker-employer relationships enabled by the system. We then describe our motivations for building Turkopticon, the design of the system, and learnings relevant to the design of political and activist technologies.
BACKGROUND: AMAZON MECHANICAL TURK (AMT)
Amazon Mechanical Turk is a website and service operated by Amazon as a meeting place for requesters with large volumes of microtasks and workers who want to do those tasks, usually for money [24]. These tasks often add up to a few dollars an hour for those experienced with the platform.
Amazon legally defines the workers as contractors subject to laws designed for freelancers and consultants; this framing attempts to strip workers of minimum wage requirements in their countries. United States workers are a significant minority, numbering at 46.8% in recent surveys [24]. This framing, however, has not been tested in courts, and courts have deemed similar framings of distributed, non-computer data work illegal [18].
AMT can be described many ways. Explaining it as a microlabor marketplace draws attention to pricing mechanisms, how workers choose tasks, and how transactions are managed. Explaining it as a crowdsourcing platform draws attention to the dynamics of mass collaboration among workers, the aggregation of inputs, and the evaluation of the crowdsourced outputs. Explaining AMT as a source of human computation resources, however, is consistent with how both the computing research community and Amazon’s own marketing frame the system [e.g. 28].
Dividing data work into small components is not itself new. A 1985 case, Donovan vs DialAmerica, tells of an earlier version of AMT-style labor. An employer sent cards with names to home workers hired as independent contractors. These contractors had to ascertain the correct phone number for each name; they were paid per task. Courts decided that these workers were in fact employees entitled to minimum wage under the Fair Labor Standards Act (FLSA) [18, p.136]. Since the late 1990s, American companies have hired Business Process Outsourcing firms in English-speaking countries with lower costs of living to perform large volumes of data processing tasks that cannot be fully automated.1
1 In 2012, a US Mechanical Turk worker filed a class action suit against CrowdFlower for violations of FLSA. The outcome of the suit was unknown at time of press.
Humans-as-a-service
AMT jumps beyond these older forms of information work by setting workers up as resources that can be directly integrated into existing computer systems as “human computation.” When Jeff Bezos launched AMT to an MIT audience in 2006, he announced: “You’ve heard of software-as-a-service. Now this is human-as-a-service” [5]. Since launch, AMT has been marketed as one of Amazon’s Web Services, alongside silicon computational cycles and data storage in the cloud. Bloggers and technologists have followed suit, both in published sources and conferences and meetups we attended, calling AMT a “Remote Person Call” (playing off of “Remote Procedure Call”) and “the Human API.” Crowdsourcing company CrowdFlower even coined the neologism “Labor-as-a-service (LaaS)” to market the value of crowdsourced workforces to companies.
This combination of abstraction and service orientation in both the metaphors and infrastructural forms suggests a particular kind of social relationship. “As-a-service” draws
meaning from commonplace resonances of service. To serve
is to make labor and attention available for those served; to
promise service is to be bound, by duty or by wage, to the
will of the served. Among computer scientists, “as-a-service” builds off of this common sense meaning and more
specifically suggests a division of technical labor by which
programmers can access computational processing functions
housed and maintained on the Internet and by someone else.
As long as the service keeps running, programmers need not
concern themselves with where the code is running, what
kind of machine it runs on, or who keeps the code running,
but only the proper protocol for issuing the call through a
computer and receiving the response. As-a-service suggests
an arrangement of computers, networks, system
administrators, and real estate that allows programmers to
access a range of computer services remotely and instantly.
Framing workers on AMT as computational services is more than just rhetorical flourish. Through AMT, employers can literally access workers through APIs. Though a web form-based interface is available, the API allows AMT employers to put tasks into the workforce and integrate worker output directly into their algorithms. Techniques for integrating workers into computational technologies in this way have been pioneered in HCI, in databases research, and in industry (see [42] for a summary). Twitter, for example, has recently open-sourced a visual toolkit for running human judgment experiments on AMT [12]. These experiments are a key component of developing, evaluating, and training search and ranking algorithms. Twitter’s toolkit offers an interface for building these experiments, providing monitoring tools and visualizations interfacing with AMT’s 24/7, massively distributed workforce through APIs. CrowdFlower also builds atop AMT’s APIs, offering crowdsourced data processing tools tailored to needs common to different industries.
We see here, then, that AMT brings together crowds of workers as a form of infrastructure, rendering employees into reliable sources of computation. As established organizations develop and publicly release tools for the system, they embed computational microwork firmly in existing technological practices and systems. AMT is becoming infrastructure in the sense that Star & Ruhleder have analyzed it: AMT is shared, AMT is incorporated into existing shared practices, and ideally, AMT is ready-to-hand and worked through, not on. Working technological infrastructures, in Star & Ruhleder’s analyses, are used with such fluency that they become taken-for-granted, humming quietly and usefully in the background. The infrastructures kept humming dutifully in the background in AMT are the socio-technical system of workers interacting with employers through APIs, spreadsheets, and minimal web-based task forms.
Ruhleder and Star famously called for going beyond a consideration of what is infrastructure to a consideration of when is infrastructure [34]. Asking when is infrastructure recognizes that systems that might hum along beyond notice in one moment might break down and require maintenance and repair in another. And a system that might hum along beyond notice for an end-user might be very much the focus of attention for those in charge of maintaining it. The question “when is infrastructure?” then, suggests also asking, “for whom is infrastructure?” When it is working as infrastructure, the AMT platform clearly hums along supporting the work of employers — the programmers, managers, and start-up hackers who integrate human computation into their technologies. In this light, that the design features and development of AMT have prioritized the needs of employers over workers is not surprising.
Further, by hiding workers behind web forms and APIs, AMT helps employers see themselves as builders of innovative technologies, rather than employers unconcerned with working conditions. Suchman argues that there are “agencies at the interface” that reconfigure the relations among humans and machines, making both what they are [40]. AMT’s power lies in part in how it reconfigures social relations, rendering Turk workers invisible [37], redirecting focus to the innovation of human computation as a field of technological achievement.
Employing Humans-as-a-Service
In this section, we explain basic features of AMT and show how the design prioritizes the needs of employers.
AMT employers define HITs on AMT by creating web-based forms that specify an information task and allow workers to input a response. Tasks include structuring unstructured data (e.g. entering a given webpage into an employer’s structured form fields), transcribing snippets of audio, and labeling an image (e.g. as pornography or violating given Terms of Service). Employers define the structure of the data workers must input, create instructions, specify the pool of information that must be processed, and set a price. (Ipeirotis [22] offers an excellent background.)
The employer then defines criteria that candidate workers must meet to work on the task. These criteria include the worker’s “approval rating” (the percentage of tasks the worker has performed that employers have approved and, by consequence, paid for), the worker’s self-reported country, and whether the worker has completed certain skill-specific qualification exams offered on the platform. This filter approach to choosing workers, as compared to more individualized evaluation and selection, allows employers to request work from thousands of temporary workers in a matter of hours.
Once a worker submits work, the employer can choose whether to pay for it. This discretion allows employers to reject work that does not meet their needs, but also enables wage theft. Because AMT’s participation agreement grants employers full intellectual property rights over submissions regardless of rejection, workers have no legal recourse against employers who reject work and then go on to use it.
Session: Smart Tools, Smart Work
CHI 2013: Changing Perspectives, Paris, France
Fig. 1: What a worker sees: the Human Intelligence Tasks (HITs) list on AMT.
Employers vet worker outputs through automated
approaches such as qualifying workers through test tasks to
which the correct answer is known or requesting responses
to a single input from several workers and algorithmically
eliminating any answers that do not agree with the majority.
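The majority-vote vetting described above can be sketched roughly as follows; this is an illustrative reconstruction, not any requester's actual pipeline:

```python
# Rough sketch of majority-vote vetting: several workers answer the
# same input, and answers disagreeing with the plurality are discarded.
from collections import Counter

def majority_filter(answers):
    """Split (worker, answer) pairs into those agreeing with the plurality answer and the rest."""
    counts = Counter(answer for _, answer in answers)
    majority_answer, _ = counts.most_common(1)[0]
    kept = [(w, a) for w, a in answers if a == majority_answer]
    rejected = [(w, a) for w, a in answers if a != majority_answer]
    return majority_answer, kept, rejected

answers = [("W1", "cat"), ("W2", "cat"), ("W3", "dog"), ("W4", "cat")]
label, kept, rejected = majority_filter(answers)
print(label)     # cat
print(rejected)  # [('W3', 'dog')] -- the dissenting answer is eliminated
```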
Turkopticon has been designed to offer workers a way to
dissent, hold requesters accountable, and offer one another
mutual aid.
MOTIVATING TURKOPTICON
Turkopticon developed as an ethically-motivated response
to workers’ invisibility in the design of AMT. We were
troubled by a number of issues in our first encounters with
AMT, not only worker invisibility. Workers, even in the
US, are paid below minimum wage in many cases.
Technologist and research discourse seemed unconcerned
with the human costs of human computation. Individuated
workers had little opportunity to build solidarity, offering
them little chance of creating sufficiently coordinated
actions to exert pressure on employers and Amazon.
Rather than working from our own intuitions, however, we
took seriously the possibility that this new form of work
also might offer workers benefits and pleasures that we did
not understand, or cause troubles we could not anticipate.
Survey research on Turk worker motivations, for example,
reports that a significant minority of workers rely on their
income from the platform to pay for household expenses,
while other workers report working for fun or to pass the
time while bored (sometimes even at another job) [23, 22, 33].
Within this large scale, fast moving, and highly mediated
workforce, dispute resolution between workers and
employers becomes intractable. Workers dissatisfied with a
requester's work rejection can contact the requester through
AMT's web interface. Amazon does not require requesters
to respond and many do not; several requesters have noted
that a thousand-to-one worker-to-requester ratio makes
responding cost prohibitive. In the logic of massive crowd
collaborations, dispute resolution does not scale. Dahn
Tamir, a large-scale requester, explained a logic the first
author heard from several Turk employers:
“You cannot spend time exchanging email. The time you
spent looking at the email costs more than what you paid
them. This has to function on autopilot as an algorithmic
system…and integrated with your business processes.”
Instead of eliciting a response, workers' dispute messages
become signals to the employer. Rick, the CEO of a
crowdsourcing startup, explained to the first author that
messages from workers signal the algorithm's performance
in managing workers and tasks. If a particular way of
determining “correctness” for a task results in a large
number of dispute messages, Rick's team will look into
revising the algorithm but will rarely retroactively revise
decisions. Algorithmic management, here, precludes
individually accountable relations.
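The signaling logic Rick describes might be modeled as below; the rule names and the 5% threshold are hypothetical illustrations, not his company's actual system:

```python
# Sketch of "messages as signals": dispute messages are not answered
# individually; instead, the dispute rate per correctness rule flags
# that rule for human review. Names and threshold are hypothetical.

def rules_needing_review(disputes_by_rule, tasks_by_rule, threshold=0.05):
    """Return correctness rules whose dispute rate exceeds the threshold."""
    return [rule for rule, disputes in disputes_by_rule.items()
            if disputes / tasks_by_rule[rule] > threshold]

tasks_by_rule = {"exact-match": 10_000, "majority-vote": 10_000}
disputes_by_rule = {"exact-match": 900, "majority-vote": 120}

# The exact-match rule (9% disputes) is flagged for possible revision;
# past rejection decisions, as in Rick's account, are not revisited.
print(rules_needing_review(disputes_by_rule, tasks_by_rule))  # ['exact-match']
```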
Workers’ “Bill of Rights”
To provoke workers’ imaginations about the infrastructural
possibilities, we placed a task onto AMT asking workers to
articulate a “Worker’s Bill of Rights” from their
perspective. We chose this approach over a more neutral
battery of questions because of the highly mediated nature
of our interactions with workers through the medium of the
HIT. In our past experience questioning workers on the
platform, workers paid per task — of which our question was
one — provided only short answers to open-ended questions.
Asking a provocative question drew stronger, more detailed
responses oriented towards concerns of crowdsourcing
ethics. We also sought permission from
workers to publish their responses on the web in hopes of
generating interaction between workers and broader publics
concerned with crowdsourcing.
Workers have limited options for dissent within AMT itself.
Resistance through incorrect answers can simply be filtered
out through employers' algorithmic tests of correctness.
Dissatisfied workers within AMT had little option other
than to leave the system altogether. Because AMT treats
workers interchangeably and because workers are so
numerous (tens of thousands by the most conservative
estimates), AMT can sustain the loss of workers who do not
accept the system’s terms.
Our work treated crowdsourcing ethics as an open question
about a new technology, still under negotiation. In
structurationist terms, practices and meanings of the
technology had not yet stabilized [32]. Our ethical
questions, then, were not trying to get at some underlying,
stable truth, but rather at ongoing ethical and political
negotiations among participants in crowdsourcing systems.
Like Bruckman and Hudson, we gathered empirical data on
workers’ ethics — here framed as rights — to explore the
ethical dimensions of crowdsourcing [9]. Rather than draw
firm conclusions here, however, we continue to keep the
debate open. We grapple with the problem of advocacy as
explained by Bardzell [2], in which Feminist HCI
practitioners both seek to bring about social progress and
question their own images of what such social progress
looks like. By publishing responses to our questions and
building Turkopticon, as we will discuss, we sought to
provoke debate about progress in crowdsourcing and make
questions of work conditions visible among technologists,
policy makers, and the media.
Workers' responses to the question of a “Bill of Rights”
revealed a range of concerns, some broadly expressed
among workers and others that polarized. Of our 67
responses [42], workers recurringly raised a number of
issues:

• 35 workers felt that their work was regularly rejected unfairly or arbitrarily
• 26 workers demanded faster payment (Amazon allows employers 30 days to evaluate and pay for work)
• 7 explicitly mentioned a “minimum wage” or “minimum payment” per HIT
• 14 mentioned “fair” compensation generally
• 8 expressed dissatisfaction with employers' and Amazon's lack of response to their concerns

The consequences of these occupational hazards for workers
included lost or delayed income, accidental download of
malware that damaged their computers, and reduced worker
“approval ratings.” Approval ratings are one of the few
ways employers can filter workers. When an employer
rejects a worker's work, whether because it did not meet
their needs or simply so the employer does not have to pay,
the worker's approval rating goes down. If the rating goes
too far down, AMT will hide tasks requiring high ratings
from the worker. Lost approval ratings, then, are lost
opportunities for work, which makes it even more difficult to
accumulate experiences to raise the rating again.

A Sense of Fairness
Beyond the inconveniences and dangers of doing Turk
work, several workers articulated a more general frustration
we characterize as a sense of fairness. This sense came
through in numerous responses that requesters ought to
respond to questions from workers, that requesters ought to
justify their rejections, and that workers have the right to
confront employers about those rejections.

A number of workers directed their frustrations towards
Amazon itself. One worker was so frustrated that they
thanked the first author by name for posting the HIT and
offering an opportunity to express their anger: “I don't care
about the penny I didn't earn for knowing the difference
between an apple and a giraffe, but I'm angry that MT will
take requester's money but not manage, oversee, or mediate
the problems and injustices on their site.”

Another worker noted the imbalance in Amazon's priorities
as they developed the AMT platform:

“I would also like workers to have more of a say around
here, so that they can not easily be taken advantage of, and
are treated fairly, as they should be. Amazon seems to pay
more credence to the requesters, simply ignoring the fact
that without workers, nothing would be done!”

We confirmed this priority with prominent requesters as
well as a source close to Amazon who wished to remain
anonymous. Because Amazon collects money for task
volume, Amazon has little reason to prioritize worker needs
in a market with a labor surplus.

Mutual Aid for Accountability
Our exploratory interactions with workers left us with no
unified image of what workers are like and what
intervention might be “appropriate.” Those workers who
suggested action offered diverse ways forward. Some were
interested in a forum in which Turkers could air concerns
publicly without censorship or condescension, and worker
visibility and dignity more generally. Others were interested
in a way to build long-term work relationships with prolific
requesters, and worker-requester relations generally. Several
respondents asked for unionization, while several others
volunteered their aversion to unions.

There were few shared values and priorities that could guide
the development of an infrastructure of mutual aid. There
were, however, possibilities for creating partial alliances —
points of common cause across diverse workers. Donna
Haraway, a feminist STS scholar, argues for partial
connections — alliances built on common cause rather than
common experience or identity — as a way to sustain
political and ethical action across people with irreducible
differences [20].2 We took inspiration from this approach.

2 Haraway's argument responded to criticisms that socialist
feminism, a Marxist analysis of gender, claimed white women's
experiences of gender marginalization as common cause for all
women. Crenshaw, for example, countered that women exist at the
intersection of race, class, and gender categories; each intersection
created specific kinds of vulnerabilities. What Haraway proposed
was a way to make progressive interventions without making
universalizing claims about the issues of all women. She did this
by proposing that women, as irreducibly different “cyborgs,” build
alliances based on common cause and partial connections [20].
Motivated by responses to the “Bill of Rights,” we designed
and built Turkopticon. Turkopticon responded in part to the
occupational hazards of Turking listed above. We also built
Turkopticon to offer workers ways of supporting one
another in context of their existing practices. The system
allows workers to make their relationships with employers
visible and call those employers to account. As workers
build up the record of relationships with employers, they
also build up a commons together with other contributors.
By explicitly designing for scales beyond the individual or
the dyadic relationship, we sought to build up a group of
people who see their interests as aligned with others [16,
31]. Dourish calls this the design of politics; he calls for
moving beyond the user-technology dyad that often defines
design interventions to the creation of larger scale
collectives and movements building on social software.
The crowd we wanted to mobilize into a collective,
however, was constituted by an infrastructure we had no
control over – the AMT platform itself. In contrast to the
collectives Dourish seeks to mobilize through Facebook, or
the Internet hackers Chris Kelty describes as building the
infrastructure that makes their association possible [26, p.3],
our task was to create a means of association for people
whose common cause was their work on AMT but who
lacked the technical skills to build infrastructures of
assembly. Rather than design a system anew, our work was
to graft a new infrastructure onto an existing one.

TURKOPTICON: THE SYSTEM
Turkopticon is a browser extension for Firefox and Chrome
that augments workers' view of their AMT HIT lists with
information other workers have provided about employers
(“Requesters” in AMT parlance). Workers enter reviews of
employers that they have worked with, entering ratings of
four qualities of employers as well as an open-ended
comment explaining their rating. These reviews are
available on the Turkopticon website; workers can view
both recent reviews, as well as all reviews for a particular
requester, identified by a unique Amazon requester ID.
Turkopticon collects quantitative ratings from reviewers on
four qualities that we hypothesized would be relevant based
on the Workers' Bill of Rights survey.

Turkopticon is named for the panopticon, a prison
surveillance design most famously analyzed by Foucault.
The prison is round with a guard tower in the center. The
tower does not reveal whether the guard is present, so
prisoners must assume they could be monitored at any
moment. The possibility of surveillance, the theory goes,
induces prisoners to discipline themselves. Turkopticon's
name cheekily references the panopticon, pointing to our
hope that the site could not only hold employers
accountable, but induce better behavior.

Standardizing Requester Reputations
We now turn to the kind of data we decided to collect on
requesters. Because the AMT model often has workers
doing HITs from a large number of employers in a session,
we needed to offer workers a quick way to assess
employers. We also saw in the Bill of Rights that workers
were not unified in what they valued in an employer. Some
wanted a short response time while others did not care, for
example. By taking ratings on various qualities rather than
a single aggregate rating in the style of product review
sites, we offered workers discretion in evaluating the
ratings.

• Communicativity: How responsive has this requester been to communications or concerns you have raised?
• Generosity: How well has this requester paid for the amount of time their HITs take?
• Fairness: How fair has this requester been in approving or rejecting your work?
• Promptness: How promptly has this requester approved your work and paid?

A score of "0" means we have no data for that attribute.

We also require workers to enter a free-form text comment
to contextualize their scores. We provide the free-form box
so that workers can share more nuanced, fine-grained stories
of their experiences. We require workers to fill it, however,
because the substance of testimonials is one of the ways
workers can evaluate other workers' credibility.

Going beyond simply a review site, we designed
Turkopticon to fit into workers' existing Turking workflow.
The browser extension – a Javascript userscript packaged
for both Firefox and Chrome – works by searching the
document object model (DOM) of AMT pages as the
worker browses. We locate links that contain requester IDs
in their target URLs, choose the link that is a requester
name, and insert a small CSS button next to the name that,
on mouse-over, launches more details on that requester. The
extension issues an XMLHTTP request for details we have
on the requester that then load in the background as the rest
of the “Available HIT” page renders.

The embedded review overlay contains both averaged
ratings of the requester, and a link to view all reviews and
open-ended comments on the requester on our website. (See
figure 2.) From this overlay, workers can also review
requesters. When the worker clicks the requester review
link, we take them to Turkopticon's requester review form
with the requester ID we strip from the page's underlying
HTML pre-populating the review's form field. The
embedded overlay is available anywhere in the AMT
interface where a worker might see a requester: both at
points where they are selecting HITs and where they are
checking approval and payment status for submitted HITs.
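The four-quality rating scheme described in this section might be aggregated along the following lines; a minimal sketch assuming per-review score dictionaries, not Turkopticon's actual implementation:

```python
# Minimal sketch of per-quality requester averages in the spirit of
# Turkopticon's four attributes. A score of 0 is reserved to mean
# "no data", so unrated attributes average to 0 rather than erroring.

QUALITIES = ("communicativity", "generosity", "fairness", "promptness")

def average_ratings(reviews):
    """Average each quality over only the reviews that rated it."""
    averages = {}
    for quality in QUALITIES:
        scores = [r[quality] for r in reviews if quality in r]
        averages[quality] = sum(scores) / len(scores) if scores else 0
    return averages

reviews = [
    {"communicativity": 4, "generosity": 3, "fairness": 5, "promptness": 5},
    {"generosity": 2, "fairness": 4, "promptness": 5},  # one attribute left unrated
]
print(average_ratings(reviews))
# {'communicativity': 4.0, 'generosity': 2.5, 'fairness': 4.5, 'promptness': 5.0}
```

Keeping the four attributes separate, rather than collapsing them into one number, is what preserves the worker's discretion the section describes.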
Bootstrapping a Collective System
Turkopticon is nothing without users and their reviews; like
many CSCW systems, it requires a “critical mass” to serve
users at all [1]. How to launch a brand new system when the
system has no content? Generating community around the
project was difficult because workers were so invisible,
largely separated from one another, and today's prominent
worker forums (e.g. TurkerNation or mturkforum) were
much smaller.

We overcame this problem by enlisting the support of
DoloresLabs, a crowdsourcing company that builds custom
toolkits for employers wishing to employ Mechanical Turk
labor. DoloresLabs created a task for our team with a list of
prominent requesters and solicited 300 initial reviews for
which it compensated workers. The initial reviews seeded
our database so new users installing Turkopticon could
immediately integrate the tool into their workflow. Rather
than requiring initial users to produce reviews, our
bootstrapping allowed users to consume the reviews we
hoped they would eventually produce and improve upon.

Making alliance with a prominent employer in the
Mechanical Turk system was a double-edged sword.
DoloresLabs supported us because they believed that
crowdlabor industries would benefit from a fairer labor
market; Turkopticon promised to remedy the information
asymmetry between workers and employers, repairing
Mechanical Turk into a more “transparent” marketplace [6].
Our team, by contrast, built Turkopticon in part to draw
attention to commodification and exploitation in large-scale
crowdsourcing markets. Just as the Turkopticon tool was a
way of building partial connections across workers, the
Turkopticon design process made partial connections
despite different visions for the future of crowdsourcing.

Reputation without Retribution
Turkopticon attempts to prevent employers from retaliating
against workers writing reviews by obfuscating workers'
email addresses. As we designed Turkopticon, we
anticipated that workers would fear retribution for writing
critical reviews. Our discussions with workers on forums
have confirmed this, at least for some workers. At tension
with the need for anonymity, however, is the need for
reputation among users of the system. There are high
incentives for requesters to write positive reviews for
themselves and, in practice, requesters attempt to do this.
We balanced the need for anonymity with reputation by
displaying users' reviews signed with a partially obfuscated
email address. This email address is then linked to a page
that shows all reviews written by that user. Readers of
reviews can make judgments about the credibility of
workers by evaluating other contributions by the user and
making their own decision about whether to engage the
employer.

Fig. 2: The Turkopticon browser add-on adds information
about requesters provided by other workers.

Fig. 3: Workers are protected from retribution
by the obfuscation of their email addresses.

Comment Moderation
After two years of running the tool unmoderated, we
developed a set of user interface designs to allow selected
users to moderate comments on the site. The mechanism is
technically simple, leaning on existing social practices and
community reputation. Any Turkopticon user can flag a
review. A moderator has to add a second flag to hide the
review from the site.

We rely primarily on social moderation, by a small number
of moderators, for several reasons. First, automated
approaches are difficult to implement in practice because
they cannot account for community-specific and emergent
norms [38]. Within the space of social moderation, broad-based
community moderation (e.g. Slashdot [27]) is
susceptible to vandalism because our users are potentially
from two competing classes with opposed incentives.
Requesters could easily make an account and begin flagging
negative reviews they have received, or even pay Turk
workers to down-vote their reviews. Moderators draw on
knowledge from their involvement in other worker forums
to judge the credibility of reviews in question.

We selected our first cohort of moderators by calculating the
most prolific reviewers on the site, emailing them
invitations to moderate Turkopticon, and posting the list of
those who accepted invitations to a widely read worker
forum. We left nominations up for a week and received no
objections, so we proceeded.

In selecting moderators, we also attempted to align
Turkopticon with other worker forums in two ways. First,
we selected moderators from the worker community who
were engaged in debates and movements in worker forums
that we, as non-workers, had little visibility into. By letting
moderators in, we also gave them visibility and input into
our design processes; based on this inside view, these
moderators have been able to vouch for us during critical
junctures where a bug or misunderstood feature triggers
suspicions among users.

Along with moderation, we also introduced an option for
workers to take on screen names – self-chosen identifiers –
in place of their obfuscated email addresses. This simple
measure has made it possible for reviewers to choose to
harmonize their Turkopticon identity with their identity in
other forums. We do not, however, force any harmonization.
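The two-flag moderation rule described in this section could be modeled minimally as follows; the data model and names are illustrative assumptions, not Turkopticon's schema:

```python
# Minimal model of the two-flag moderation rule: any user may flag a
# review, but it is hidden only once a moderator adds a second flag.

MODERATORS = {"mod_alice"}  # hypothetical moderator roster

def is_hidden(flags):
    """Hidden iff at least two users flagged it and one of them is a moderator."""
    return len(flags) >= 2 and any(user in MODERATORS for user in flags)

print(is_hidden({"worker_bob"}))                 # False: a single flag never hides
print(is_hidden({"worker_bob", "worker_eve"}))   # False: no moderator has confirmed
print(is_hidden({"worker_bob", "mod_alice"}))    # True: user flag + moderator flag
```

Requiring the second flag to come from a moderator is what keeps a lone requester (or a paid bloc of flaggers) from unilaterally suppressing negative reviews.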
DISCUSSION

Strengthening Ties Through Maintenance and Repair
Though HCI has conventionally been concerned with the
design, deployment, and evaluation of technological
artifacts, the social and technical life of Turkopticon, like
any technology, depends on ongoing maintenance and repair
[25]. Certainly, we do ongoing technical maintenance. For
example, we have to rebuild the extension when Firefox and
Chrome release versions with new requirements for add-ons;
server load that grew with use demanded that we rewrite
code to make more efficient use of our servers' resources.

Less remarked on, however, is the work of keeping up with
changing design requirements as worker and requester
practices change. Comment moderation to cull increasing
requester reviews and profanity was one such change,
already discussed. We also recently augmented the requester
review form with a toggle indicating whether a requester
violates Amazon's Terms and Conditions. These design
changes reflect changing norms as the kinds of tasks and
practices on AMT shift.

As important as the specific design features that we add and
upkeep are the community relationships we build and
strengthen through this ongoing maintenance of
Turkopticon. We learn of concerns and confusions through
our user support forum, through our email, and through our
moderators who face emerging review practices on the
frontline of the Turkopticon reviews page. We, as systems
designers and maintainers, gain from highly engaged
workers who help us understand what it means to see like a
Turk worker and keep up with changes to their evolving
practices. We enlist moderators in discussions of web site
policy and interaction design, and alter and repair the
technology in response to their requests and observations.
Moderators here are not objects to be observed by us, but
experts in their own right who participate in the collective
activism of keeping Turkopticon thriving. (Bardzell and
Bardzell have also argued for the incorporation of experts
into activist design.)

This work of maintenance and upgrading, undertaken with
the participation of workers, does more than offer insight
into needs and requirements. This work strengthens ties and
builds solidarity among workers collaborating on the
practical, shared, and political circumstances they face as
crowdworkers. Dourish has argued that HCI research often
takes market framings for granted, individuating users as
decision-makers to be persuaded or empowered [16].
Framings of social computing that emphasize networks and
interaction can similarly frame collectivity as an aggregation
of individuals. We call on HCI researchers to instead see
technology design's potential for sustaining new polities
that can become powerful foundations for social change.

Tactical Quantification
Although quantification has myriad problems as a
description of lived practice, Turkopticon employs tactical
quantification to enable worker interaction and employer
accountability while integrating into the rhythms of AMT.
Tactical quantification is a use of numbers not because they
are more accurate, rational, or optimizable [10], but because
they are partial, fast, and cheap – a way of making do in
highly constrained circumstances.

We were skeptical of quantifying workers' rich experiences
and diverse frustrations, conditioned by their diverse social
positions and needs. HCI researchers have raised a number
of critiques of quantification in computational systems.
Quantification has been associated with failed, injurious
modernist attempts to model, rationalize, and optimize
messy real world systems. These models necessarily
universalize and simplify [10]. In the hands of powerful
actors, quantifying, approximate models can drive policies
that attempt to form the world in the models' images [17].

The use of Turkopticon in the wild has, unsurprisingly,
borne out some of these concerns. The “generosity”
category, for example, has strained under the weight of
representing such a subjective assessment. Workers in India
accustomed to much lower salaries and cost of living than
Americans may feel that a job averaging $2 an hour is
generous, while an American might balk at such a rate.

Standardizing ratings into quantified buckets was instead a
compromise we made to our own values as designers in
negotiating the power relations of the AMT ecosystem.
AMT emphasizes speed and scale [11, 36]. To attract and
retain users, we had to begin with the norms of the
infrastructure in which we intervened, lest we push too far
and become incompatible.

In this sense, Turkopticon is not an expression of our own
values, or even the values of the users we interviewed, but a
compromise between those values and the weight of the
existing infrastructural norms that torqued our design
decisions as we intervened in this powerful, working real
world system. In their analyses of the consequences of
infrastructural classifications, Bowker and Star use the
concept of torque to describe the way people's lives can be
twisted and shaped as they are forced to fit classification
systems and infrastructures, such as racial classifications on
government documents or disease categorizations. People
live messy, fluid lives that can fall out of sync with the
rhythms, categories, and temporality of the infrastructure
[8, p.190]. Bowker and Star note that more powerful actors
do not experience torque as they determine the categories of
the infrastructure and often experience those categories as
natural. We were situated at the margins of a large, working
sociotechnical system, trying to insert ourselves in. The
design of Turkopticon, then, had to be as much an
expression of the standards and rhythms set by a large,
corporate infrastructure as it was of designer and user
values, desires, and politics. By intervening in a working,
real world collaborative technological system, we did not
enjoy the luxuries of ethics- and values-oriented design
projects that design technologies anew.

Publics and their Means of Assembly
A number of researchers have argued that design activities
can generate publics – groups that coalesce around
identification with a common problem and a shared effort to
resolve the problem [14, 30]. Activities such as exploratory
prototyping or future-envisioning engage diverse
stakeholders in identifying causes of common concern.
Design engagement offers one way of collectively inquiring
into assumptions, dependencies, and paths forward.

Our early work on Turkopticon – especially the Workers'
Bill of Rights – shared this spirit of engaging workers in
imagining alternative ways of doing microlabor. Workers'
responses revealed vastly disparate visions and self-understandings
when it came to issues of minimum wage,
relations with requesters, and desire for additional forms of
support. Moreover, workers distributed across the world
faced vastly different circumstances. Indian Turkers, for
example, tend to be highly educated and face lower costs of
living than Americans. Bringing these workers together as a
public to engage in shared inquiry and democratic
interchange would require speaking across cultures,
ideologies, and vastly different life circumstances.

Turkopticon performs an intermediate step in the formation
of publics by bringing people together around practical,
broadly shared concerns. By creating infrastructures for
mutual aid, we bolster the social interchange and
interdependency that can become a foundation for a more
issue-oriented public. There have been calls in HCI for
representing interdependence as a way of working towards
more ethical and sustainable practices [31]. AMT's labor
market, however, individuates by design; workers are
independent by default. Turkopticon provides an
infrastructure through which workers can engage in
practices of interdependence, here as mutual aid.

THE AMBIVALENCE OF SUCCESS IN ACTIVIST TECHNOLOGIES
Turkopticon has succeeded in attracting a growing base of
users that sustain it as a platform for an information-sharing
community. In part because of its practical embeddedness, it
has drawn sustained attention to ethical questions in
crowdsourcing over the course of its operation. This
attention comes not only in the crowdsourcing community,
but also in broader public fora. We have been invited to
speak at industry meetups and on Commonwealth Club
panels on crowdsourcing. We have also attracted attention
from journalists writing pieces on crowdsourcing in venues
such as O'Reilly Radar, The Sacramento Bee, AlterNet, and
The San Jose Mercury News. As a media piece,
Turkopticon's sustained dissent over the last four years has
qualities of adversarial design [15]; the system stands as a
visible reminder of the microlabors that sustain crowd
platforms. This agonistic reminder disrupts the optimism
that surrounds crowdpowered systems.

However, Turkopticon's existence sustains and legitimizes
AMT by helping safeguard its workers. AMT relies on an
ecosystem of third party developers to provide functional
enhancements to AMT (e.g. CrowdFlower, SamaSource,
Twitter). Turkopticon is a squeaky but reliable part of this
ecosystem. Ideally, however, we hoped that Amazon would
change its systems design to include worker safeguards.
This has not happened. Instead, Turkopticon has become a
piece of software that workers rely on, funded through
subsidies from academic research – an unsustainable
foundation for such a critical tool.

To stay vital, our team plans on developing new media
interventions to give the Turkopticon community greater
visibility to the press, to policy makers, and to organizers.
Through the design of layered infrastructures, we can
support complex and overlapping publics that open up
questions about possible futures once again.

This paper has offered an account of an activist systems
development intervention into the crowdsourcing system
AMT. We argued that AMT is predicated on
infrastructuring and hiding human labor, rendering it a
reliable computational resource for technologists. Based on
a “Workers' Bill of Rights” meant to evoke workers'
imaginations, we identify hazards of crowdwork and our
response as designers to those hazards – Turkopticon. The
challenges of developing Turkopticon show the challenges
of developing real-world technologies that intervene in
existing, large-scale sociotechnical systems. Such activism
takes design out of the studio and into the wild, not only
testing the seeds of possible technological futures, but
attempting to steer and shift the existing practices and
infrastructures of our technological present.

ACKNOWLEDGEMENTS
We dedicate this paper to the memory of Beatriz da Costa,
the tactical media artist and professor who pushed us to take
the plunge from imagining to building and maintaining. We
thank Chris Countryman, Paul Dourish, Gillian Hayes, Lynn
Dombrowski, Karen Cheng, Khai Truong, and anonymous
reviewers for feedback. This work was supported by an NSF
Graduate Research Fellowship and NSF award 1025761.
Session: Smart Tools, Smart Work. CHI 2013: Changing Perspectives, Paris, France.
REFERENCES
1. Ackerman, M. 2000. The intellectual challenge of CSCW: The gap between social requirements and technical feasibility. Human-Computer Interaction, 15(2), 179–203.
2. Bardzell, S. 2010. Feminist HCI: Taking stock and outlining an agenda for design. Proc. CHI, 1301–1310.
3. Barley, S.R. and Kunda, G. 2004. Gurus, Hired Guns, and Warm Bodies: Itinerant Experts in a Knowledge Economy. Princeton University Press.
4. Berg, M. 1998. The politics of technology: On bringing social theory into technological design. ST&HV, 23(4), 456–490.
5. Bezos, J. 2006. Opening keynote. MIT Emerging Technologies Conference. Accessed: http://video.mit.edu/watch/opening-keynote-and-keynote-interview-with-jeff-bezos-9197/
6. Biewald, L. 2009. “Turkopticon.” CrowdFlower Blog. Accessed: http://blog.crowdflower.com/2009/02/turkopticon/
7. Borning, A. and Muller, M. 2012. Next steps for value sensitive design. Proc. CHI, 1125–1134.
8. Bowker, G. and Star, S.L. 2000. Sorting Things Out: Classification and Its Consequences. MIT Press.
9. Bruckman, A. and Hudson, J.M. 2005. Using empirical data to reason about Internet research ethics. Proc. ECSCW, 287–306.
10. Brynjarsdottir, H., Håkansson, M., Pierce, J., Baumer, E., DiSalvo, C., and Sengers, P. 2012. Sustainably unpersuaded. Proc. CHI, 947–956.
11. Crenshaw, K. 1991. Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299.
12. Crowdsourced data analysis with Clockwork Raven. Accessed: http://engineering.twitter.com/2012/08/crowdsourced-data-analysis-with.html
13. da Costa, B. and Philip, K., eds. 2008. Tactical Biopolitics: Art, Activism, Technoscience. MIT Press.
14. DiSalvo, C. 2009. Design and the construction of publics. Design Issues, 25(1), 48–63.
15. DiSalvo, C. 2012. Adversarial Design. MIT Press.
16. Dourish, P. 2010. HCI and environmental sustainability: The design of politics and politics of design. Proc. DIS, 1–10.
17. Dourish, P. and Mainwaring, S. 2012. Ubicomp’s colonial impulse. Proc. Ubicomp.
18. Felstiner, A. 2010. Working the crowd: Employment and labor law in the crowdsourcing industry. Berkeley Journal of Employment & Labor Law, 32(1), 143–204.
19. Garcia, D. and Lovink, G. 1997. The ABC of tactical media. nettime listserv. Accessed: http://www.nettime.org/Lists-Archives/nettime-l-9705/msg00096.html
20. Haraway, D.J. 1990. Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge.
21. Hirsch, T. 2009. Learning from activists. Interactions, 16(3), 31–33.
22. Ipeirotis, P. 2010. Demographics of Mechanical Turk. NYU Working Paper No. CEDER-10-01.
23. Ipeirotis, P. 2010. Analyzing the Mechanical Turk marketplace. XRDS, 17(2), 16–21.
24. Ipeirotis, P. 2008. Why people participate on Mechanical Turk. Accessed: http://www.behind-the-enemy-lines.com/2008/03/why-people-participate-on-mechanical.html
25. Jackson, S., Pompe, A., and Krieshok, G. 2011. Things fall apart: Maintenance, repair, and technology for education initiatives in rural Namibia. Proc. iConference, 83–90.
26. Kelty, C. 2008. Two Bits: The Cultural Significance of Free Software. Duke University Press.
27. Lampe, C. and Resnick, P. 2004. Slash(dot) and burn. Proc. CHI, 543–550.
28. Law, E. and von Ahn, L. 2011. Human Computation. Morgan & Claypool Publishers.
29. Law, J. 2004. After Method: Mess in Social Science Research. Routledge.
30. LeDantec, C. 2012. Participation and publics: Supporting community engagement. Proc. CHI, 1351–1360.
31. Light, A. 2011. Digital interdependence and how to design for it. Interactions, 18(2), 34.
32. Orlikowski, W.J. 1992. The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398–427.
33. Ross, J., Irani, L., Silberman, M.S., et al. 2010. Who are the crowdworkers? EA CHI 2010 (alt.chi), 2863–2872.
34. Ruhleder, K. and Star, S.L. 2001. Steps toward an ecology of infrastructure: Design and access for large information spaces. In J. Yates and J. Van Maanen, eds., Information Technology and Organizational Transformation: History, Rhetoric, and Practice. Sage, 305–343.
35. Smith, V. 1997. New forms of work organization. Annual Review of Sociology, 23, 315–339.
36. Snow, R., O’Connor, B., Jurafsky, D., and Ng, A.Y. 2008. Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. Proc. EMNLP, 254–263.
37. Star, S.L. and Strauss, A. 1999. Layers of silence, arenas of voice: The ecology of visible and invisible work. Computer Supported Cooperative Work (CSCW), 8, 9–30.
38. Sood, S., Antin, J., and Churchill, E. 2012. Profanity use in online communities. Proc. CHI, 1481–1490.
39. Suchman, L. 1995. Making work visible. CACM, 38(9).
40. Suchman, L. 2006. Human-Machine Reconfigurations. Cambridge University Press.
41. Taylor, A. 2011. Out there. Proc. CHI, 685–694.
42. TurkWork. http://turkwork.differenceengines.com
High score, low pay: why the gig economy loves gamification |...
https://www.theguardian.com/business/2018/nov/20/high-score...
High score, low pay: why the gig economy
loves gamification
Using ratings, competitions and bonuses to incentivise workers
isn’t new, but as I found when I became a Lyft driver, the gig
economy is taking it to another level. By Sarah Mason
Main image: Illustration: Alamy/Guardian Design Team
Tue 20 Nov 2018 01.00 EST
In May 2016, after months of failing to find a traditional job, I began driving for the
ride-hailing company Lyft. I was enticed by an online advertisement that promised
new drivers in the Los Angeles area a $500 “sign-up bonus” after completing their
first 75 rides. The calculation was simple: I had a car and I needed the money. So, I
clicked the link, filled out the application, and, when prompted, drove to the
nearest Pep Boys for a vehicle inspection. I received my flamingo-pink Lyft
emblems almost immediately and, within a few days, I was on the road.
Initially, I told myself that this sort of gig work was preferable to the nine-to-five grind. It
would be temporary, I thought. Plus, I needed to enrol in a statistics class and finish my
graduate school applications – tasks that felt impossible while working in a full-time desk
job with an hour-long commute. But within months of taking on this readily available, yet
strangely precarious form of work, I was weirdly drawn in.
Lyft, which launched in 2012 as Zimride before changing its name a year later, is a car
service similar to Uber, which operates in about 300 US cities and expanded to Canada
(though so far just in one province, Ontario) last year. Every week, it sends its drivers a
personalised “Weekly Feedback Summary”. This includes passenger comments from the
previous week’s rides and a freshly calculated driver rating. It also contains a bar graph
showing how a driver’s current rating “stacks up” against previous weeks, and tells them
whether they have been “flagged” for cleanliness, friendliness, navigation or safety.
At first, I looked forward to my summaries; for the most part, they were a welcome boost
to my self-esteem. My rating consistently fluctuated between 4.89 stars and 4.96 stars,
and the comments said things like: “Good driver, positive attitude” and “Thanks for
getting me to the airport on time!!” There was the occasional critique, such as “She weird”,
or just “Attitude”, but overall, the comments served as a kind of positive reinforcement
mechanism. I felt good knowing that I was helping people and that people liked me.
But one week, after completing what felt like a million rides, I opened my feedback
summary to discover that my rating had plummeted from a 4.91 (“Awesome”) to a 4.79
(“OK”), without comment. Stunned, I combed through my ride history trying to recall any
unusual interactions or disgruntled passengers. Nothing. What happened? What did I do? I
felt sick to my stomach.
Because driver ratings are calculated using your last 100 passenger reviews, one logical
solution is to crowd out the old, bad ratings with new, presumably better ratings as fast as
humanly possible. And that is exactly what I did.
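The arithmetic behind this crowding-out strategy is easy to sketch. Lyft has not published its rating code, so the function below is a hypothetical model of a rolling last-100 average, meant only to illustrate why flooding the window with fresh five-star rides pushes the old ratings out:

```python
from collections import deque

def rating_window(reviews, window=100):
    """Average of the most recent `window` star ratings.

    Illustrative only: Lyft's actual computation is not public.
    The deque's maxlen drops the oldest rating as each new one
    arrives, which is what 'crowding out' old reviews relies on.
    """
    recent = deque(maxlen=window)
    for stars in reviews:
        recent.append(stars)
    return sum(recent) / len(recent)

# A run of 4-star reviews followed by 60 five-star rides: the
# window now holds 40 fours and 60 fives.
history = [4.0] * 100 + [5.0] * 60
print(round(rating_window(history), 2))  # 4.6
```

Under this model, fully restoring a rating means displacing all 100 old reviews, which is why the strategy demands driving "as fast as humanly possible".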
For the next several weeks, I deliberately avoided opening my feedback summaries. I
stocked my vehicle with water bottles, breakfast bars and miscellaneous mini candies to
inspire riders to smash that fifth star. I developed a borderline-obsessive vacuuming habit
and upped my car-wash game from twice a week to every other day. I experimented with
different air-fresheners and radio stations. I drove and I drove and I drove.
The language of choice, freedom, and autonomy saturates discussions of ride
hailing. “On-demand companies are pointing the way to a more promising
future, where people have more freedom to choose when and where they
work,” Travis Kalanick, the founder and former CEO of Uber, wrote in October
2015. “Put simply,” he continued, “the future of work is about independence
and flexibility.”
In a certain sense, Kalanick is right. Unlike employees in a spatially fixed worksite (the
factory, the office, the distribution centre), rideshare drivers are technically free to choose
when they work, where they work and for how long. They are liberated from the
constraining rhythms of conventional employment or shift work. But that apparent
freedom poses a unique challenge to the platforms’ need to provide reliable, “on demand”
service to their riders – and so a driver’s freedom has to be aggressively, if subtly, managed.
One of the main ways these companies have sought to do this is through the use of
gamification.
A driver working for Lyft and Uber in Los Angeles. Photograph:
Richard Vogel/AP
Simply defined, gamification is the use of game elements – point-scoring, levels,
competition with others, measurable evidence of accomplishment, ratings and rules of
play – in non-game contexts. Games deliver an instantaneous, visceral experience of
success and reward, and they are increasingly used in the workplace to promote emotional
engagement with the work process, to increase workers’ psychological investment in
completing otherwise uninspiring tasks, and to influence, or “nudge”, workers’ behaviour.
This is what my weekly feedback summary, my starred ratings and other gamified features
of the Lyft app did.
There is a growing body of evidence to suggest that gamifying business operations has
real, quantifiable effects. Target, the US-based retail giant, reports that gamifying its in-store checkout process has resulted in lower customer wait times and shorter lines. During
checkout, a cashier’s screen flashes green if items are scanned at an “optimum rate”. If the
cashier goes too slowly, the screen flashes red. Scores are logged and cashiers are expected
to maintain an 88% green rating. In online communities for Target employees, cashiers
compare scores, share techniques, and bemoan the game’s most challenging obstacles.
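The checkout game described above reduces to a simple pass/fail score. The sketch below is a guess at the mechanic, not Target's real system: the 2-second "optimum rate" threshold and the function names are invented; only the 88% target comes from the reporting.

```python
# Hypothetical model of a scan-rate score: each gap between item
# scans is "green" if it beats an assumed optimum, and the cashier's
# rating is the green fraction, held against an 88% target.
OPTIMUM_SECONDS = 2.0   # assumed threshold, not a published figure
TARGET_GREEN = 0.88     # the 88% rating cashiers must maintain

def green_rating(scan_gaps):
    """Fraction of scans completed at or under the optimum rate."""
    green = sum(1 for gap in scan_gaps if gap <= OPTIMUM_SECONDS)
    return green / len(scan_gaps)

gaps = [1.2, 1.8, 2.5, 1.1, 3.0, 1.4, 1.9, 1.6, 2.1, 1.0]
score = green_rating(gaps)
print(f"{score:.0%}", "PASS" if score >= TARGET_GREEN else "FLAG")
```

Even in this toy form, the design choice is visible: a single continuous activity (scanning groceries) becomes a scored game with a visible win condition.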
But colour-coding checkout screens is a pretty rudimentary kind of gamification. In the
world of ride-hailing work, where almost the entirety of one’s activity is prompted and
guided by a screen – and where everything can be measured, logged and analysed – there
are few limitations on what can be gamified.
In 1974, Michael Burawoy, a doctoral student in sociology at the University of
Chicago and a self-described Marxist, began working as a miscellaneous machine
operator in the engine division of Allied Corporation, a large manufacturer of
agricultural equipment. He was attempting to answer the following question: why
do workers work as hard as they do?
In Marx’s time, the answer to this question was simple: coercion. Workers had no
protections and could be fired at will for failing to fulfil their quotas. One’s ability to obtain
a subsistence wage was directly tied to the amount of effort one applied to the work
process. However, in the early 20th century, with the emergence of labour protections, the
elimination of the piece-rate pay system, the rise of strong industrial unions and a more
robust social safety net, the coercive power of employers waned.
Yet workers continued to work hard, Burawoy observed. They co-operated with speed-ups
and exceeded production targets. They took on extra tasks and sought out productive
ways to use their downtime. They worked overtime and off the clock. They kissed ass.
After 10 months at Allied, Burawoy concluded that workers were willingly and even
enthusiastically consenting to their own exploitation. What could explain this? One
answer, Burawoy suggested, was “the game”.
For Burawoy, the game described the way in which workers manipulated the production
process in order to reap various material and immaterial rewards. When workers were
successful at this manipulation, they were said to be “making out”. Like the levels of a
video game, operators needed to overcome a series of consecutive challenges in order to
make out and beat the game.
At the beginning of every shift, operators encountered their first challenge: securing the
most lucrative assignment from the “scheduling man”, the person responsible for doling
out workers’ daily tasks. Their next challenge was a trip to “the crib” to find the blueprint
and tooling needed to perform the operation. If the crib attendant was slow to dispense
the necessary blueprints, tools and fixtures, operators could lose a considerable amount of
time that would otherwise go towards making or beating their quota. (Burawoy won the
cooperation of the crib attendant by gifting him a Christmas ham.) After facing off against
the truckers, who were responsible for bringing stock to the machine, and the inspectors,
who were responsible for enforcing the specifications of the blueprint, the operator was
finally left alone with his machine to battle it out against the clock.
A Lyft promotion using a Back to the Future-style DeLorean car
in New York in 2015. Photograph: Lucas Jackson/Reuters
According to Burawoy, production at Allied was deliberately organised by management to
encourage workers to play the game. When work took the form of a game, Burawoy
observed, something interesting happened: workers’ primary source of conflict was no
longer with the boss. Instead, tensions were dispersed between workers (the scheduling
man, the truckers, the inspectors), between operators and their machines, and between
operators and their own physical limitations (their stamina, precision of movement,
focus).
The battle to beat the quota also transformed a monotonous, soul-crushing job into an
exciting outlet for workers to exercise their creativity, speed and skill. Workers attached
notions of status and prestige to their output, and the game presented them with a series
of choices throughout the day, affording them a sense of relative autonomy and control. It
tapped into a worker’s desire for self-determination and self-expression. Then, it directed
that desire towards the production of profit for their employer.
Every Sunday morning, I receive an algorithmically generated “challenge” from
Lyft that goes something like this: “Complete 34 rides between the hours of
5am on Monday and 5am on Sunday to receive a $63 bonus.” I scroll down,
concerned about the declining value of my bonuses, which once hovered
around $100-$220 per week, but have now dropped to less than half that.
“Click here to accept this challenge.” I tap the screen to accept. Now, whenever I log into
driver mode, a stat meter will appear showing my progress: only 21 more rides before I hit
my first bonus. Lyft does not disclose how its weekly ride challenges are generated, but the
value seems to vary according to anticipated demand and driver behaviour. The higher the
anticipated demand, the higher the value of my bonus. The more I hit my bonus targets or
ride quotas, the higher subsequent targets will be. Sometimes, if it has been a while since I
have logged on, I will be offered an uncharacteristically lucrative bonus, north of $100,
though it has been happening less and less of late.
Behavioural scientists and video game designers are well aware that tasks are likely to be
completed faster and with greater enthusiasm if one can visualise them as part of a
progression towards a larger, pre-established goal. The Lyft stat meter is always present,
always showing you what your acceptance rating is, how many rides you have completed,
how far you have to go to reach your goal.
In addition to enticing drivers to show up when and where demand hits, one of the main
goals of this gamification is worker retention. According to Uber, 50% of drivers stop using
the application within their first two months, and a recent report from the Institute of
Transportation Studies at the University of California, Davis suggests that just 4% of
ride-hail drivers make it past their first year.
Retention is a problem in large part because the economics of driving are so bad.
Researchers have struggled to establish exactly how much money drivers make, but with
the release of two recent reports, one from the Economic Policy Institute and one from
MIT, a consensus on driver pay seems to be emerging: drivers make, on average, between
$9.21 (£7.17) and $10.87 (£8.46) per hour. What these findings confirm is what many of us
in the game already know: in most major US cities, drivers are pulling in wages that fall
below local minimum-wage requirements. According to an internal slide deck obtained by
the New York Times, Uber actually identifies McDonald’s as its biggest competition in
attracting new drivers. When I began driving for Lyft, I made the same calculation most
drivers make: it is better to make $9 per hour than to make nothing.
Before Lyft rolled out weekly ride challenges, there was the “Power Driver Bonus”, a
weekly challenge that required drivers to complete a set number of regular rides. I
sometimes worked more than 50 hours per week trying to secure my PDB, which often
meant driving in unsafe conditions, at irregular hours and accepting nearly every ride
request, including those that felt potentially dangerous (I am thinking specifically of an
extremely drunk and visibly agitated late-night passenger).
Of course, this was largely motivated by a real need for a boost in my weekly earnings. But,
in addition to a hope that I would somehow transcend Lyft’s crappy economics, the
intensity with which I pursued my PDBs was also the result of what Burawoy observed
four decades ago: a bizarre desire to beat the game.
Drivers’ per-mile earnings are supplemented by a number of rewards, both
material and immaterial. Uber drivers can earn “Achievement Badges” for
completing a certain number of five-star rides and “Excellent Service
Badges” for leaving customers satisfied. Lyft’s “Accelerate Rewards”
programme encourages drivers to level up by completing a certain number
of rides per month in order to unlock special rewards like fuel discounts
from Shell (gold level) and free roadside assistance (platinum level).
In addition to offering meaningless badges and meagre savings at the pump, ride-hailing
companies have also adopted some of the same design elements used by gambling firms to
promote addictive behaviour among slot-machine users. One of the things the anthropologist
and NYU media studies professor Natasha Dow Schüll found during a decade-long study of
machine gamblers in Las Vegas is that casinos use networked slot machines that allow
them to surveil, track and analyse the behaviour of individual gamblers in real time – just
as ride-hailing apps do. This means that casinos can “triangulate any given gambler’s
player data with her demographic data, piecing together a profile that can be used to
customise game offerings and marketing appeals specifically for her”. Like these
customised game offerings, Lyft tells me that my weekly ride challenge has been
“personalised just for you!”
Former Google “design ethicist” Tristan Harris has also described how the “pull-to-refresh” mechanism used in most social media feeds mimics the clever architecture of a
slot machine: users never know when they are going to experience gratification – a dozen
new likes or retweets – but they know that gratification will eventually come. This
unpredictability is addictive: behavioural psychologists have long understood that
gambling uses variable reinforcement schedules – unpredictable intervals of uncertainty,
anticipation and feedback – to condition players into playing just one more round.
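The variable reinforcement schedule the paragraph describes can be simulated in a few lines. This is a generic behavioural-psychology model, not code from any ride-hailing app; the payoff probability is invented for illustration:

```python
import random

def variable_ratio_rewards(actions, mean_interval=5, seed=42):
    """Simulate a variable-ratio schedule: each action (a ride, a
    pull-to-refresh) pays off with probability 1/mean_interval, so
    the long-run reward rate is fixed but each individual payoff
    arrives at an unpredictable moment."""
    rng = random.Random(seed)
    return [i for i in range(actions) if rng.random() < 1 / mean_interval]

hits = variable_ratio_rewards(30)
gaps = [b - a for a, b in zip(hits, hits[1:])]
print(hits)  # which actions paid off
print(gaps)  # the intervals between rewards vary unpredictably
```

The point of the model is the gap list: a fixed average rate with irregular spacing is precisely the pattern that conditions "just one more round".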
A customer leaving a rating and review of an Uber driver.
Photograph: Felix Clay/The Guardian
We are only beginning to uncover the extent to which these reinforcement schedules are
built into ride-hailing apps. But one example is primetime or surge pricing. The phrase
“chasing the pink” is used in online forums by Lyft drivers to refer to the tendency to drive
towards “primetime” areas, denoted by pink-tinted heat maps in the app, which signify
increased fares at precise locations. This is irrational because the likelihood of catching a
good primetime fare is slim, and primetime is extremely unpredictable. The pink appears
and disappears, moving from one location to the next, sometimes in a matter of minutes.
Lyft and Uber have to dole out just enough of these higher-paid periods to keep people
driving to the areas where they predict drivers will be needed. And occasionally – cherry,
cherry, cherry – it works: after the Rose Bowl parade last year, I made in 40 minutes more
than half of what I usually make in a whole day of non-stop shuttling.
It is not uncommon to hear ride-hailing drivers compare even the mundane act of
operating their vehicles to the immersive and addictive experience of playing a video
game or a slot machine. In an article published by the Financial Times, long-time driver
Herb Croakley put it perfectly: “It gets to a point where the app sort of takes over your
motor functions in a way. It becomes almost like a hypnotic experience. You can talk to
drivers and you’ll hear them say things like, I just drove a bunch of Uber pools for two
hours, I probably picked up 30–40 people and I have no idea where I went. In that state,
they are literally just listening to the sounds [of the driver’s apps]. Stopping when they
said stop, pick up when they say pick up, turn when they say turn. You get into a rhythm
of that, and you begin to feel almost like an android.”
So, who sets the rules for all these games? It is 12.30am on a Friday night and the
“Lyft drivers lounge”, a closed Facebook group for active drivers, is divided.
The debate began, as many do, with an assertion about the algorithm. “The
algorithm” refers to the opaque and often unpredictable system of automated,
“data-driven” management employed by ride-hailing companies to dispatch
drivers, match riders into Pools (Uber) or Lines (Lyft), and generate “surge” or
“primetime” fares, also known as “dynamic pricing”.
The algorithm is at the heart of the ride-hailing game, and of the coercion that the game
conceals. In their foundational text Algorithmic Labor and Information Asymmetries: A
Case Study of Uber’s Drivers, Alex Rosenblat and Luke Stark write: “Uber’s self-proclaimed
role as a connective intermediary belies the important employment structures and
hierarchies that emerge through its software and interface design.” “Algorithmic
management” is the term Rosenblat and Stark use to describe the mechanisms through
which Uber and Lyft drivers are directed. To be clear, there is no singular algorithm.
Rather, there are a number of algorithms operating and interacting with one another at any
given moment. Taken together, they produce a seamless system of automatic decision-making that requires very little human intervention.
For many on-demand platforms, algorithmic management has completely replaced the
decision-making roles previously occupied by shift supervisors, foremen and middle- to
upper-level management. Uber actually refers to its algorithms as “decision engines”.
These “decision engines” track, log and crunch millions of metrics every day, from ride
frequency to the harshness with which individual drivers brake. It then uses these
analytics to deliver gamified prompts perfectly matched to drivers’ data profiles.
Because the logic of the algorithm is largely unknown and constantly changing, drivers are
left to speculate about what it is doing and why. Such speculation is a regular topic of
conversation in online forums, where drivers post screengrabs of nonsensical ride
requests and compare increasingly lacklustre, algorithmically generated bonus
opportunities. It is not uncommon for drivers to accuse ride-hailing companies of
programming their algorithms to favour the interests of the corporation. To resolve this
alleged favouritism, drivers routinely hypothesise and experiment with ways to
manipulate or “game” the system back.
When the bars let out after last orders at 2am, demand spikes. Drivers have a greater
likelihood of scoring “surge” or “primetime” fares. There are no guarantees, but it is why
we are all out there. To increase the prospect of surge pricing, drivers in online forums
regularly propose deliberate, coordinated, mass “log-offs” with the expectation that a
sudden drop in available drivers will “trick” the algorithm into generating higher surges. I
have never seen one work, but the authors of a recently published paper say that mass log-offs are occasionally successful.
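The logic drivers are betting on can be made concrete with a toy model. The assumption, and it is only an assumption since neither Uber nor Lyft discloses its pricing algorithm, is that the surge multiplier tracks the ratio of ride requests to online drivers; the formula and the cap below are invented for illustration:

```python
# Toy surge model: a coordinated log-off shrinks the supply side
# of the ratio, so the multiplier rises even though demand is flat.
def surge_multiplier(ride_requests, drivers_online, cap=3.0):
    """Hypothetical surge: demand/supply ratio, floored at 1x and capped."""
    if drivers_online == 0:
        return cap
    ratio = ride_requests / drivers_online
    return round(min(max(1.0, ratio), cap), 2)

before = surge_multiplier(ride_requests=90, drivers_online=60)  # 1.5
after = surge_multiplier(ride_requests=90, drivers_online=40)   # 2.25
print(before, after)
```

In this model, 20 drivers logging off raises everyone else's fares, which is exactly the collective-action gamble of a mass log-off; whether the real algorithms respond this way is the open question drivers argue about.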
Viewed from another angle, though, mass log-offs can be understood as good, old-fashioned work stoppages. The temporary and purposeful cessation of work as a form of
protest is the core of strike action, and remains the sharpest weapon workers have to fight
exploitation. But the ability to log-off en masse has not assumed a particularly
emancipatory function. Burawoy’s insights might tell us why.
Gaming the game, Burawoy observed, allowed workers to assert some limited control over
the labour process, and to “make out” as a result. In turn, that win had the effect of
reproducing the players’ commitment to playing, and their consent to the rules of the
game. When players were unsuccessful, their dissatisfaction was directed at the game’s
obstacles, not at the capitalist class, which sets the rules. The inbuilt antagonism between
the player and the game replaces, in the mind of the worker, the deeper antagonism
between boss and worker. Learning how to operate cleverly within the game’s parameters
becomes the only imaginable option. And now there is another layer interposed between
labour and capital: the algorithm.
After weeks of driving like a maniac in order to restore my higher-than-average driver rating, I managed to raise it back up to a 4.93. Although it felt
great, it is almost shameful and astonishing to admit that one’s rating, so
long as it stays above 4.6, has no actual bearing on anything other than your
sense of self-worth. You do not receive a weekly bonus for being a highly
rated driver. Your rate of pay does not increase for being a highly rated
driver. In fact, I was losing money trying to flatter customers with candy and keep my car
scrupulously clean. And yet, I wanted to be a highly rated driver.
And this is the thing that is so brilliant and awful about the gamification of Lyft and Uber:
it preys on our desire to be of service, to be liked, to be good. On weeks that I am rated
highly, I am more motivated to drive. On weeks that I am rated poorly, I am more
motivated to drive. It works on me, even though I know better. To date, I have completed
more than 2,200 rides.
A longer version of this article first appeared in Logic, a new magazine devoted to deepening
the discourse around technology