The history of artificial intelligence is a history of boring conferences. The term itself was coined at a boring conference, one that took place in the summer of 1956 at Dartmouth College. One could chart a number of pre-histories when it comes to anxieties around smart, responsive machines and their effects on our world, but the real hype has always been underlaid by the bureaucratic haze of lanyards, plenaries, and good timekeeping.
The hype waned during the 1970s, a time sometimes known as the ‘AI Winter’, before being inflated again with the injection of private capital by those interested in the labour-saving potentials of the technology. The ever-looming ‘Jobpocalypse’ brought about by AI quickly became a pressing concern for AI ethicists and policy wonks of all stripes. It’s also the focus of the first annual ‘International Conference on AI in Work, Innovation, Productivity and Skills’, a conference organised by the OECD, a powerful intergovernmental organisation made up predominantly of rich countries and intent on finding ways to ratchet up economic growth.
From the perspective of the OECD suits, the main issue confronting AI, and one which was raised over and over again at the conference, is the ‘productivity puzzle’. Despite the enthusiasm over AI’s potential to make work more efficient, statisticians and researchers have struggled to explain why productivity growth in most developed nations has slowed since the early 2000s. Silicon Valley may celebrate our brave new world of big data and machine learning, but AI hasn’t actually been able to squeeze more out of workers.
There are a number of contending explanations for the productivity puzzle. In Automation and the Future of Work, economic historian Aaron Benanav suggests that the issue itself may be less to do with productivity than with low demand for labour on a global scale: decades of overcapacity led to the bottoming-out of manufacturing, long seen as the stalwart of capitalist economic growth. And as growth eased, so too did job creation. Crucially for Benanav, the fact that fewer jobs are being created didn’t necessarily lead to more unemployment. ‘Job losers have been obliged to join new labor market entrants in low-quality jobs—earning less-than-normal wages in worse-than-average working conditions,’ he writes.
If AI isn’t taking our jobs or improving economic productivity, what then can we make of an AI conference? Benanav describes automation hype as fundamentally a ‘social theory’, one which doesn’t just concern itself with technology, but with the consequences of change for society as a whole. Talking about AI is never just talking about AI. In some ways, the interest of the OECD and other similar intergovernmental groups in AI governance is more honest than that of the tech community, if only because it states plainly that work and productivity are the issues at hand, rather than a utopian vision of happy machines and care-free humans. But conferences like the one the OECD has held this week also frame AI within questions of ethics and governance, positioning it as a nebulous force that’s emerged from the ether and thrown our world into disarray, rather than acknowledging the role of political power in shaping our automated world.
For the OECD, the fact that workers’ pay and conditions have been declining in developed countries for decades is a matter for new skills-based approaches which seek to improve ‘job quality’ in the pursuit of more productivity. Just as the human resources department at a job might suggest we unwind and relax with wellness programmes aimed at making us more efficient, the OECD’s concern for workers is ultimately about finding ways to address the productivity decline.
What’s missing from this newfound appreciation for worker experience is any mention of genuine worker power. Workers shouldn’t be picking up the pieces of automation after it’s happened – they should be part of the design. The involvement of a small number of international trade union bodies at the conference may be a good start, but too often these groups are presented as ‘stakeholders’, rather than vehicles through which workers can be given genuine political agency to decide how new technologies are deployed in the workplace.
The best evidence of this may rest in the OECD’s fixation on metrics, classification, and measurement of AI computing power, an undoubtedly lukewarm pursuit compared to the tech industry’s love affair with sci-fi novels and thought-provoking moral quandaries. In part, this fixation goes back to the productivity puzzle, but it also gives us a sense of how those who govern are looking to wield power in the unfolding of an automated future. ‘If you cannot measure it, you cannot manage it,’ OECD Secretary General Angel Gurria declared during his speech on AI tech at the conference’s opening plenary.
AI, the speakers at the conference are at pains to remind us, should involve a full-scale mission approach. The subtext is clear: AI hype has become not just about the take-up of new technology, but an industrial strategy led by those at the top, to the exclusion of the workers it will affect.
This raises the possibility of an increasingly opaque ‘black box society’, a term used by legal theorist Frank Pasquale to describe the way that the internal workings of algorithms and automated technology remain outside the reach of society at large. Data about AI uptake, and how those AI systems work, can be vital for workers trying to organise their workplace: to understand the way that automation is being distributed across a firm, what jobs are at risk, and what new forms of labour workers must take on due to automation, intel needs to be made available to workers first and foremost. The OECD conference is suggestive of a burgeoning kind of AI managerial capitalism, in which what are fundamentally questions of labour and the workplace become technocratic issues to be answered by a select few who can peer inside the black box.
AI itself seems to already be headed towards a more managerial framework. ‘Robots haven’t come for your job, they’ve come for your boss’s job,’ Jeremias Adams-Prassl, a legal scholar and one of the more clear-thinking progressive voices at the conference, remarked during one of the panel discussions. For many workplaces, AI isn’t a mystical force that eliminates labour hours, but one that serves alongside workers to funnel them through tasks and increase their speed. Speakers at the OECD conference were intent on making this point – the potentials of AI were to be found in the way it complements human labour, rather than acting as a rival. As an OECD report released in tandem with the conference suggests, ‘AI is likely to reshape the work environment of many people, by changing the content and design of their jobs, the way workers interact with each other and with machines, and how work effort and efficiency are monitored.’
Gig economy workers are the shock troops, says Adams-Prassl, pointing to the automated dialling and surveillance-heavy technology used in precarious call centre work. Recent news of Amazon’s delivery driver AI surveillance tech, which places cameras in trucks and vans to monitor workers, is another example. If AI is able to take on managerial tasks centred around a human worker, as Adams-Prassl suggests, these precedents show that the hierarchy of the workplace remains firmly intact: those who can set the parameters of AI become more elusive and more powerful managers in their own right.
The solution to the issues of surveillance and exploitative working conditions, according to the OECD, is a return to their set of ‘AI Principles’ first laid out in 2019 to address the ethics of automation – no matter that computer engineers surveyed by the OECD have been explicit about their lack of intention to take up the principles in the day-to-day design of systems. As we’ve seen in the tech industry with Google’s ousting of researchers Timnit Gebru and Meredith Whittaker in response to their concerns (Gebru’s warnings that Google’s language technology could discriminate against marginalised groups, and Whittaker’s co-organisation of an international walkout over sexual harassment and military contracts), tech overlords are more than happy to dismiss the chin-stroking moral issues when it suits them.
Rendering tech’s political conflicts as issues of managerial strategy can only continue to erode labour rights and working conditions. Astra Taylor describes ‘fauxtomation’ as the over-hyped, and less-than-useful automated processes that ‘reinforc[e] the perception that work has no value if it is unpaid and acclimates us to the idea that one day we won’t be needed.’ The AI conference circuit has become a crucial part of that fauxtomation, reinforcing power relations and taking workplace issues out of the hands of workers. Now is the time for worker co-design of AI.