Last year, DeepMind, a subsidiary of the Google holding company Alphabet and one of the world’s leading AI research companies, reported losses of £470.2 million and debts of up to £1 billion. The response from the company was surprisingly calm. According to a Financial Times report, DeepMind claimed there was no risk of default and that the debt had been guaranteed by Google.
So why is Alphabet willing to run such a calculated loss? When Google acquired DeepMind in 2014, it was responding to a growing interest in AI research, but also to a race between the US and China to create the world’s first artificial general intelligence (AGI). Whoever achieved this feat first would control a technology that stood to transform the global economy, not to mention gain a monopoly that would guarantee its market dominance far into the future.
To understand why is to understand the difference between narrow artificial intelligence and artificial general intelligence. The former exists all around us, and constitutes any system capable of improving itself through what is known as ‘machine learning.’
We see this in Apple’s Siri, Amazon’s Alexa and various navigation apps. In the past, computer systems had to be preprogrammed to perform a given task; today, many are able to improve over time by themselves.
These systems are, however, limited to a single discipline, or, if capable of learning several, must do so in turn and from scratch. An artificial general intelligence, by contrast, is a system able to turn its hand to, and master, several, if not all, disciplines, through a process known as ‘transfer learning.’
In the process of working towards AGI, companies like DeepMind nevertheless also develop powerful narrow AI systems. These are the ‘product,’ so to speak, sold to other companies within Alphabet to improve their services. The figures reported last year, however, demonstrate how small a part this plays in the company’s raison d’être.
Achieving this, however, requires three main ingredients. The first is research. At the time of writing, DeepMind employs hundreds of PhDs, most of whom command salaries well into six figures.
With universities unable to compete, there are worries of a brain drain from our seats of learning, not to mention the mass privatisation of knowledge: research that would once have belonged to Oxbridge or Imperial now belongs to a US tech giant. This is in addition to Google’s aggressive patenting strategy and its negative impact on intellectual property, which has already been widely reported.
The second is computing power, and so far Alphabet is the world leader in that department. Researchers develop the algorithms and engineers create the technical environment, but to achieve the high-powered learning of an advanced AI, the system requires enormous processing capacity, churning through data on the petabyte scale.
Finally, and most crucially, these two factors depend on a sufficient supply of data. For a system to become sophisticated enough to acquire the general skills of an AGI, it will need to be trained on enormous datasets.
One of the few areas capable of supplying this is critical infrastructure. The private companies and public bodies that run our food, energy and water supplies, transport systems and health services generate enormous quantities of real-time data.
AI promises to enhance these systems, by detecting inefficiencies that might otherwise escape human attention, or by responding to and predicting problems with a speed and proficiency beyond existing capabilities. But the transaction isn’t one-sided. By receiving this data, the systems themselves are able to learn, improve and grow in strength.
This is why Google’s involvement in the management of the coronavirus outbreak should alarm us, as should the inclusion of DeepMind CEO Demis Hassabis at the government’s SAGE meetings. The conflict of interest runs high when human desperation amid a global pandemic could lead to the acquisition of data to supply and improve advanced, privately owned technological systems.
Authors including Naomi Klein have raised the alarm about the possibility of the tech sector profiting from the crisis. But arguably more important than the acquisition of capital per se is the acquisition of large quantities of different kinds of data.
Covid-19 creates the possibility of an unprecedented degree of private technological interference in civic life. Whether directly, or through the creation of a dependency within the health sector that is difficult to reverse, it opens the way for our data to be used to build systems that could produce extreme inequalities further down the line.
All of which is to say that the imperatives for ‘helping’ are nowhere near as simple or altruistic as they might at first seem, and that the relationship between the AI sector, the health service and other parts of our critical infrastructure will at some point become reciprocal.
This is one issue that falls outside the scope of AI ethics, a discipline that has grown in recent years. So far, it seems to amount to the question of whether this nascent technology can avoid bias and discrimination, and whether its use for military purposes can be prevented.
Both issues are urgently important. But arguably more fundamental than either, and something no company-authorised ethics board is ever going to consider, is the market logic at the heart of the corporation, and whether we as a society should be volunteering ourselves to enhance a world-altering technology. For all these reasons, a Google-powered contact tracing app must be resisted at all costs.