When thinking of AI futures, the classic sci-fi tropes tell us that machines will one day take over and replace humans, with robots rendering work as we know it obsolete: the outcome will be either a post-work utopia or a robot-human war.
But that future is here, and the reality is far more mundane. Instead of eliminating human work, the AI industry is creating new ways of exploiting and obscuring workers.
Lurking behind the amorphous and often abstract notion of ‘AI’ are material realities. Eighty percent of machine learning development consists of repetitive data preparation and ‘janitorial’ work – collecting, labelling, and cleaning the data that feeds algorithms – tasks that are a far cry from the high glamour of the tech CEOs who parade their products on stage. At other times, the computing purportedly being done by AI is actually being done by human workers: start-ups have workers pose as chatbots or transcribe information they claim is being processed by ‘smart’ technology.
The invisible and insecure nature of this work casts a suspicious light on the supposed freedoms of ‘anytime, anywhere’ working. Defined by Mary L. Gray as ‘ghost work’ in a book of the same name written with Siddharth Suri, it is endless both in the range of possible tasks (transcription, translation, data labelling, survey work) and in its culture of expectancy, in which workers should always be completing or thinking about new work. As Gray states, ‘the great paradox of AI is that the desire to eliminate human work generates new tasks for humans.’
Gray and Suri’s book primarily considers university-educated ghost workers based in the US, and while Amazon Mechanical Turk, a major AI-work crowdsourcing platform, recruits fewer than two percent of its workers from the Global South, the same is not true of other platforms like Samasource and Mighty AI. These platforms rely on workers from Southeast Asia and sub-Saharan Africa, who tend to receive barely a living wage, if that, compared to the significant wealth raked in by the platforms themselves.
Samasource, for example, posted profits of $19 million in 2019, while some of the data labellers sourced through it were found to earn around $8 per day. Though perhaps less common, low pay isn’t absent from working conditions in the Global North, either – Saiph Savage, director of the Human Computer Interaction Lab at West Virginia University, found that some of the work sourced through Amazon Mechanical Turk paid as little as £1.45 per hour.
In this sense, corporate buzzwords like ‘inclusion’ and ‘diversity’, commonplace in tech start-up culture, act as window dressing: inclusion of workers from the Global South doesn’t automatically equal their fair treatment. As feminist theorist Sara Ahmed says in her analysis of institutional diversity practices, ‘Equality and diversity can be used as masks to create the appearance of being transformed.’ Whether that transformation actually takes place is considered a secondary issue.
In a geopolitical context, the colonial history behind these imbalances is vital to understanding them. ‘Ghost work’, usually low-paid and precarious, can be outsourced to the Global South in order to deliver luxurious AI goods to the metropole – quite literally reducing the demand for those there to work. As Chan et al. note, the ability of Global South-based workers to access the technologies they help to produce is often limited, particularly in parts of the world where internet access is more expensive. This is a reproduction of a centuries-old relationship mediated through twenty-first-century technologies – one aspect of what Michael Kwet has termed ‘digital colonialism’.
The particular irony is that AI industry growth within an uneven global landscape perpetuates a biased image of the very workers who propel it forwards. Algorithmic discrimination is well documented: existing racial (and gendered) biases inevitably produce facial recognition systems that are less likely to recognise, for example, Black people. Models trained on public image datasets also disproportionately reflect US and European norms, classifying images from countries like Pakistan or Ethiopia with lower accuracy than those from the US.
Popular culture and clever advertising campaigns shine a flattering light on AI’s potential, but what currently exists is anticlimactic. Biases once assumed to be relatively contained within the locker-room cultures of Silicon Valley techno-sexists have leaked out into a vast global landscape and combined with neo-colonial global capitalism to form something far from liberatory. As journalist and writer Rebecca Solnit puts it, ‘the physical landscape of Silicon Valley is now everywhere, not only in the attempts to clone its success but in the spread of its products and its waste throughout the globe.’
In the popular imagination, the expansion of AI beyond Western spaces is meant to signal the opportunity for a post-work world, but at present nothing comes about without labour. The end result of this obfuscation is layered exploitation of workers in both the Global South and North: in the conditions of their work, and in the development of AI that is then sold to wealthy companies, sometimes to spy on or replace them. The promise of a glittering AI future remains highly selective. If we continue down this route, we should question whether we want an automated world at all.