Stop Talking About Superintelligence and Start Taking AI Risks Seriously

In mid-2014 the late Stephen Hawking warned the world that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”[i] This ominous prediction set off a wave of dystopian proclamations about the supposed dangers of AI from prominent tech celebrities, with everyone from Elon Musk to Henry Kissinger piling on. Partly in response, multiple institutions formed around missions like “AI for Social Good” to combat the decidedly negative optics.

But more than five years later, are we any closer to learning how to avoid those risks? Having observed AI closely throughout this period for my doctoral research, I would argue that we are only beginning to understand what the risks are, much less how to avoid them. What accounts for the failure?

First is the complexity of AI technology. There’s no single “thing” at the center of it all. AI is a complex combination of algorithms, data, computers, and people. So too, the risks are multiple; a long series of causal arrows is necessary to connect AI systems with actual and potential harms. By contrast, something like nuclear energy is straightforward. Most experts agree about the source of the danger and the nature of the risk—dangerous stuff, fissile material, that must be handled carefully, or else people die.

Second, and more importantly, there are the AI experts who choose to impair, rather than facilitate, our understanding of the risks. Instead of, say, studying the actual dangers posed by extant AI technologies, the “AI safety” community researches the hypothetical risks of what it calls artificial superintelligence, or “ASI”.[ii] Familiar from science fiction, this is the idea that a sufficiently smart AI could rewrite itself to become smarter and smarter, eventually becoming orders of magnitude smarter than anything in existence. Echoing the warnings of both the 19th-century sci-fi visionary Samuel Butler[iii] and the 20th-century film director James Cameron,[iv] AI safety researchers consider this an “existential risk” to humanity.

But let’s get real: “superintelligence” is not a thing. ASI does not exist. It poses no threat to anyone. Human-level AI is still nowhere in sight, much less superhuman-level AI.[v] The very concept relies on dubious assumptions about machine capacity, technological progress, and the nature of intelligence.[vi] Moreover, outside the small “AI safety” community, most practicing AI scientists don’t take it seriously.[vii]

In its defense, the “AI safety” community contends that even if the probability of achieving ASI is low, the potential harm is so great that it is worth thinking about. This would be true, in a vacuum.

But we live in the real world, where AI is already putting millions if not billions of people at risk, sans superintelligence. Consider some of the dangers:

  • Military: Building on a decade of semi-autonomous drone warfare, the world’s top militaries are creating a new generation of lethal autonomous weapons systems. Advocates argue they will save lives, but this remains uncertain. According to retired General James Mattis, however, weaponized AI systems will certainly change the character of war, though in ways that no one, himself included, can anticipate.
  • Geopolitical: While Elon Musk claims that ASI could become an “immortal dictator,” actual dictators are embracing AI systems as powerful tools for surveillance, control, and anti-democratic information warfare. In 2017, Russian President Vladimir Putin asserted that “Whoever leads in AI will rule the world”[viii] and China’s leader Xi Jinping announced his nation’s intention to do just that by 2030.[ix] Once again seen as key to dominance in the geopolitical order, AI is at the center of a new arms race between the world’s most powerful nations.
  • Economic: An aide to the powerful, AI is no friend to workers. Estimates of the share of jobs at risk from automation range from 9% to 47%; in truth, no one knows how significant the economic risks of AI are. One Pew study of 1,900 experts found them evenly divided over whether AI would create or destroy more jobs.[x] Some suggest industry is not being forthright about the danger. Venture capitalist Kai-Fu Lee, for example, argues that “Tech companies should stop pretending AI won’t destroy jobs.”[xi]

This is just the tip of the iceberg. I could mention that AI is a greedy, infinite sink for energy;[xii] that algorithms learn biases from their training data, entrenching discrimination in software;[xiii] that robots are accelerating natural resource extraction during the sixth mass extinction in the history of life;[xiv] or that AI-powered systems designed to “hijack” minds in the new “attention economy” appear to have devastating effects on young people.[xv]
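To see how an algorithm “learns” bias, consider a minimal sketch of the association-test idea behind the Caliskan et al. study cited above.[xiii] Everything in the snippet is illustrative: the vectors are random stand-ins for embeddings actually learned from text, and the word list is hypothetical. The point is that the bias reduces to simple geometry, with a word’s learned vector sitting measurably closer to one set of attribute words than to another.

```python
import numpy as np

# Sketch of the word-embedding association test from Caliskan et al.
# (2017): embeddings trained on human-written text pick up the same
# associations humans exhibit. The vectors here are RANDOM stand-ins,
# so the printed scores are noise; with real pretrained embeddings
# (e.g., GloVe), "programmer" measurably skews toward male terms and
# "nurse" toward female terms.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus attribute set B."""
    return (np.mean([cosine(word_vec, v) for v in attrs_a])
            - np.mean([cosine(word_vec, v) for v in attrs_b]))

rng = np.random.default_rng(0)
words = ["programmer", "nurse", "he", "him", "she", "her"]
emb = {w: rng.normal(size=50) for w in words}  # hypothetical embeddings

male, female = [emb["he"], emb["him"]], [emb["she"], emb["her"]]
for occupation in ("programmer", "nurse"):
    score = association(emb[occupation], male, female)
    print(f"{occupation}: male-vs-female association = {score:+.3f}")
```

With embeddings trained on real corpora, scores like these reproduce documented human biases, which is precisely how discriminatory patterns slip from data into deployed software.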

As some of the actors in this unfolding tragedy are beginning to realize, none of these risks are hypothetical.[xvi] Yet “AI safety” has almost nothing to say about them, preferring instead to remain focused on the philosophical fantasy of ASI.

Their fixation on ASI might be of little consequence if it did not divert valuable attention, talent, and resources away from professional inquiry into the manifold risks posed by actual AI systems already in the world. But because it does, the “superintelligentsia”[xvii] are ironically putting humanity in greater danger, all the while painting themselves as saviors.

Therefore, anyone interested in actually helping society learn to avoid the risks of AI should immediately stop talking about “superintelligence” and start taking real-world AI risks seriously. There is still time to avert catastrophe.

[i] Stephen Hawking et al., “Stephen Hawking: ‘Transcendence Looks at the Implications of Artificial Intelligence – but Are We Taking AI Seriously Enough?,’” The Independent, May 1, 2014, http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html.

[ii] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014); Future of Life Institute, “Beneficial AGI 2019,” Future of Life Institute (blog), 2019, https://futureoflife.org/beneficial-agi-2019/.

[iii] Samuel Butler, Erewhon (London: Trübner, 1872).

[iv] Matthew Belloni and Boris Kit, “James Cameron Sounds the Alarm on Artificial Intelligence and Unveils a ‘Terminator’ for the 21st Century,” The Hollywood Reporter, September 27, 2017, http://www.hollywoodreporter.com/features/james-cameron-sounds-alarm-artificial-intelligence-unveils-a-terminator-21st-century-1043027.

[v] Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, First edition (New York: Pantheon Books, 2019).

[vi] Kevin Kelly, “The Myth of a Superhuman AI,” WIRED, April 25, 2017, https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/.

[vii] François Chollet, “The Impossibility of Intelligence Explosion,” François Chollet (blog), November 27, 2017, https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec.

[viii] RT International, “‘Whoever Leads in AI Will Rule the World’: Putin to Russian Children on Knowledge Day,” September 1, 2017, https://www.rt.com/news/401731-ai-rule-world-putin/.

[ix] Will Knight, “China Wants to Shape the Global Future of Artificial Intelligence,” MIT Technology Review, March 16, 2018, https://www.technologyreview.com/s/610546/china-wants-to-shape-the-global-future-of-artificial-intelligence/.

[x] Pew Research Center, AI, Robotics, and the Future of Jobs, Digital Life in 2025 (Washington, DC: Pew Research Center, 2014), http://www.pewinternet.org/2014/08/06/future-of-jobs/.

[xi] Kai-Fu Lee, “Tech Companies Should Stop Pretending AI Won’t Destroy Jobs,” MIT Technology Review, February 21, 2018, https://www.technologyreview.com/s/610298/tech-companies-should-stop-pretending-ai-wont-destroy-jobs/.

[xii] Gary Marcus, “Deep Learning: A Critical Appraisal” (Preprint, January 2, 2018), https://arxiv.org/abs/1801.00631.

[xiii] Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, “Semantics Derived Automatically from Language Corpora Contain Human-like Biases,” Science 356, no. 6334 (2017): 183–186.

[xiv] Gerardo Ceballos, Paul R. Ehrlich, and Rodolfo Dirzo, “Biological Annihilation via the Ongoing Sixth Mass Extinction Signaled by Vertebrate Population Losses and Declines,” Proceedings of the National Academy of Sciences 114, no. 30 (2017): E6089–E6096, https://doi.org/10.1073/pnas.1704949114.

[xv] Jean M. Twenge, “Have Smartphones Destroyed a Generation?,” The Atlantic, September 2017, https://www.theatlantic.com/magazine/archive/2017/09/has-the-smartphone-destroyed-a-generation/534198/.

[xvi] James Vincent, “Former Facebook Exec Says Social Media Is Ripping Apart Society,” The Verge, December 11, 2017, https://www.theverge.com/2017/12/11/16761016/former-facebook-exec-ripping-apart-society.

[xvii] Melanie Mitchell, “We Shouldn’t Be Scared by ‘Superintelligent A.I.,’” The New York Times, October 31, 2019, sec. Opinion, https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html.
