Review: The Technological Singularity
The Technological Singularity
By Murray Shanahan, MIT Press, ISBN: 9780262527804
With the current slew of doomsday predictions about super-intelligent machines, ‘The Technological Singularity’ provides a timely crash-course for the uninitiated. The red eye of HAL, 2001’s malevolent supercomputer, on the front cover is a subtle nod to the hysteria surrounding recent advances in the field, which has been balanced by an equally sensationalist dose of utopian futurology from the cheerleaders of artificial intelligence (AI).
As part of MIT Press’s ‘Essential Knowledge’ series, however, the aim of this book is not to make predictions or judgements, but to bring those just coming to the topic up to speed. It takes its name from the concept that humans are heading towards a point in time where machines will become so intelligent that our ability to comprehend their actions will break down, and with it our ability to predict events, in much the same way as the laws of physics break down at the centre of a black hole.
While some reserve a special quality for human-level intelligence, the consensus is that, whether due to an AI arms race or simply the inexorable march of progress, this event is inevitable. How long it will take and how we will get there are up for debate, though, with the path we choose likely to have major implications for the technology’s impact on our world. Shanahan gives a solid overview of the two main routes – digital emulation of a biologically realistic brain or engineering one from scratch using machine learning – and why self-improving AI could mean a rapid transition from human-level capabilities to superintelligence. While brain-inspired AI might retain traits that give us some window into its operation, an intelligence designed from first principles could be quite alien.
The book highlights the dangers of anthropomorphising AI and identifies the three key questions that will govern its actions – what its reward function is, how it learns and how it maximises its expected reward. From this conceptual base Shanahan goes on to explore some of the key moral and existential issues the development of AI will pose. Is an emulated brain conscious? Can you confer ‘personhood’ and its rights and obligations on an entity that can be replicated, split and merged? How can we engineer an AI that retains human values? Would a superintelligence have more rights than us?
The book divides the effect of AI into two distinct waves of disruption – the introduction of specialised AIs that will replace all but the most important jobs and potentially infantilise humanity by taking over all aspects of our decision-making; and the advent of superintelligence. The various ways we might try to shape and control these effects are discussed at length, but the book also highlights how difficult that could be by showing how even the soundest strategies are likely to collapse when faced with the unpredictability of AI. Shanahan makes it clear these events could be within our lifetime, meaning this topic needs to become part of the public discourse now.
The brevity of the book means some important topics, such as theories of consciousness, are given only superficial treatment, and, as Shanahan concedes in his preface, despite his best attempts at neutrality his own opinions do creep in at points. His preference for biologically inspired embodied AI betrays his background as a cognitive roboticist, while closing with an appeal to consider the possibility that creating AI is part of humanity’s cosmological destiny hints at his enthusiasm for the technology.
For those already well-read on the topic, much of what is discussed will be familiar, but Shanahan’s presentation is succinct, comprehensive and commendably accessible for such a complex subject.