Kennedy School Review

For Smarter Debate and Better Policy, Let’s Scrap the ‘Killer Robots’

By Katherine Mansted

Will the rise of intelligent machines spell doom for humanity?

Popular movies and news reporting on artificial intelligence (AI) would certainly have us think so.

In Hollywood’s imaginings, AI is dangerous and uncontrollable. AI seduces: recall Ex Machina’s calculating fembot. AI murders: think of the homicidal HAL 9000 from 2001: A Space Odyssey. AI annihilates: recall the machine overlords of The Matrix and the Skynet system bent on world domination in The Terminator series. Techno-paranoia is popular among today’s TV writers too, from Netflix’s Black Mirror to the sinister robot awakening of HBO’s Westworld.

In the news, editors court clicks with headlines quoting Stephen Hawking’s warning that AI could “spell the end of the human race,” and Elon Musk’s sensational suggestion that developing AI is like “summoning the demon.” You have probably also read that IBM’s Deep Blue supercomputer vanquished reigning world chess champion Garry Kasparov in 1997, and that in 2016 Google’s AlphaGo defeated one of the world’s strongest players of the ancient Chinese strategy game Go. The news media is quick to depict these matches as early signs of the inevitable triumph of machines over humans.

Stephen Hawking’s full thoughts on AI, however, offer a more balanced prediction of our future: “Powerful AI will either be the best or the worst thing ever to happen to humanity.” In other words, the rise of computer intelligence might prove to be history’s most significant, life-augmenting invention. Killer robots are not inevitable.

In fact, peddling a doomsday AI scenario is debilitating to the public and policymakers alike.

Science fiction and science reporting are not just great entertainment—they are also powerful political tools. By creating, exploring, and discussing possible futures, science writers offer us the opportunity to preview and choose the kind of world we want to build. When science stories play to our anxieties about AI’s worst possible futures, and ignore stories about its best futures, we lose an opportunity for meaningful debate.

It is not widely known that after his face-off with Deep Blue, Kasparov pioneered “freestyle chess,” a format in which humans team up with computers to become more formidable competitors than either humans or computers playing alone. Today, the world’s best chess player is a human-machine team.

As Wired co-founder Kevin Kelly explains in his 2016 book The Inevitable, “If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.”

So how exactly can science fiction help us unlock this better future?

First, science fiction can give us a better idea of what living with AI might really look like. Despite Hollywood’s reliance on anthropomorphized robot characters, AI systems are unlikely to think, or look, like humans. They will be invisible, embedded in our smartphones, cars, houses, and workplaces. And although they will be intelligent, it is by no means inevitable, and perhaps not even possible, that they will be conscious the way humans and other mammals are. The self-aware, romantically inclined operating system voiced by Scarlett Johansson in Her is a welcome relief from the standard AI narrative, but it is not a probable development. Stories that describe more realistic forms of AI can help us forecast the skills we will need to incorporate AI successfully into our lives and work.

Second, positive stories about AI can inspire the next generation of technologists and inventors. Let’s not forget that mobile phones, digital newspapers, cloning, interactive TV, satellites, and submarines all appeared in science fiction before they became reality. By imagining futures where we combine our human intelligence with machine intelligence to become greater than the sum of our parts, we can inspire major breakthroughs in the real world. Intelligent computer systems are already capable of real-time language translation. What’s the next frontier? Individualized cures for chronic diseases, optimized energy and transport systems, and “smart” policy decision aids are all plausible developments.

Finally, more nuanced and complex depictions in science fiction could help us tackle the “real and present concerns” about AI, according to Sara M. Watson of Harvard University’s Berkman Klein Center for Internet & Society.

One such urgent issue is discovering how to teach AI systems to communicate their alien thought processes to us. As Watson explains, the inner workings of AI algorithms are often so complex that their results can’t be described in a way that is intelligible to humans—even their own creators. This is a problem when companies and governments use machine learning, a subset of AI, to make fundamental decisions—like who to hire, lend money to, or imprison. If we are to steer the evolution of AI towards a positive future, we need to be more creative about imagining how the “brains” of these systems might work—and, importantly, how we want them to work. Fixating on the AI apocalypse distracts us from this task.

No one—not even Stephen Hawking—knows exactly how AI will evolve. But science writing can catalyze important debate on how to live with, work alongside, and infuse human values into the design of emerging technology. And in turn, smarter debate will help steer the emergence of AI toward the future that we want, not the future that we fear.

Katherine is pursuing a Master in Public Policy at the Harvard Kennedy School as an Australian General Sir John Monash Scholar. She is Vice-Chair of The Future Society and her current interests include technology governance and cybersecurity.

Photo: Brandon Warren, Flickr Creative Commons