Most Recent

2021    “Artificial Intelligence & its Discontents.” Special Issue, Interdisciplinary Science Reviews 46(1-2).
(Table of Contents | Full Issue PDF)

“Freud famously observed that civilization, despite being ostensibly intended to protect humanity from misery, is paradoxically a great source of unhappiness. Similarly, AI is both touted as the solution to humanity’s biggest problems and decried as one of the biggest problems humankind has ever faced – even, perhaps, its last.”

From the Introduction to the Issue
Happy to see this excellent double-issue in print!

2021    “The ‘General Problem Solver’ Does Not Exist: Mortimer Taube and the Art of AI Criticism.” IEEE Annals of the History of Computing 43(1): 60-73.

“By combatting this conjectural conviction that humans are just ‘meat machines,’ AI workers, critics, and discontents of all stripes can build upon Taube’s legacy and contribute to the construction of a humane future in which this pseudoscientific ideology comes to be seen as an embarrassment to technological civilization, not unlike eugenics or physiognomy.”

Award-Winning Work

2018    “Broken Promises & Empty Threats: The Evolution of AI in the USA, 1956-1996.” Technology’s Stories.

“…as of the mid-1990s, AI was little more than a morass of broken promises and empty threats, a defeated technoscience impaled on its own hubris.”

Journal Articles

2021    “Unsavory Medicine for Technological Civilization: Introducing ‘Artificial Intelligence & its Discontents’.” Interdisciplinary Science Reviews 46(1-2): 1-18.

2020    with Mustafa Bayram, Simon Springer, and Vural Özdemir. “COVID-19 Digital Health Innovation Policy: A Portal to Alternative Futures in the Making.” OMICS: A Journal of Integrative Biology 24(8): 460-469.

2020    with Chandler Maskal. “Sentiment Analysis of the News Media on Artificial Intelligence Does Not Support Claims of Negative Bias Against AI.” OMICS 24(5): 286-299. 

2020    with Vural Özdemir, et al. “Digging Deeper into Precision/Personalized Medicine.” OMICS 24(2): 1-19.

2019    “Artificial Intelligence and Japan’s Fifth Generation: The Information Society, Neoliberalism, and Alternative Modernities.” Pacific Historical Review 88(4): 619-658.  

2014    with Linnda R. Caporael. “The Primacy of Scaffolding within Groups for the Evolution of Group-Level Traits.” Behavioral and Brain Sciences 37(3): 255-256. 

“[AI scientist] Yutaka Matsuo imagines how the Japanese computing industry might have progressed differently: ‘I know there is no ‘What if?’ in history, but my dream is to imagine that if the Web had appeared 15 years earlier, Japan would be sitting where Silicon Valley is right now.’”

From “Artificial Intelligence and Japan’s Fifth Generation”

Conference Proceedings

2020    “‘AI for Social Good’ and the First AI Arms Race: Lessons from Japan’s Fifth Generation Computer Systems (FGCS) Project.” In Proceedings of the 34th Annual Conference of the Japanese Society for Artificial Intelligence.

2018    “A Framework for Evaluating Barriers to the Democratization of Artificial Intelligence.” In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 8079-8080. Palo Alto, CA: AAAI Press.

2018    “AI Risk Mitigation Through Democratic Governance: Introducing the 7-Dimensional AI Risk Horizon.” In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 366-367. New York: ACM.

Book Chapters

2018    with Daniel Chard. “Disrupting the AAA$.” In Science for the People: Documents from America’s Movement of Radical Scientists. Schmalzer, Chard, and Botelho, eds., 36-61. Amherst, MA: University of Massachusetts Press.

2015    with Ron Eglash. “Satoyama as Generative Justice: Humanity, Hybridity, and Biodiversity in a Horizontal Social Ecology.” In Shape Shifting. Marhöfer and Lylov, eds., 58-68. Berlin: Archive Books.

2014    with Ron Eglash. “Basins of Attraction for Generative Justice.” In Chaos Theory in Politics. Banerjee, Erçetin, and Tekin, eds., 75-88. Dordrecht: Springer Netherlands. 

“A metronome keeps a steady rhythm even if perturbed: it forms a cyclic basin of attraction. Remove the motor and it gradually comes to a stop: a new basin of attraction was formed. Similarly, humans are a motor of biodiversity in this system.”

From “Humanity, Hybridity, and Biodiversity”

Book Reviews

2018/2019       “Review of Homo Deus: A Brief History of Tomorrow, by Yuval Harari, New York: Vintage, 2017.” ICON 24: 234-236.

2017    “Review of Robots, by John M. Jordan, Cambridge, MA: The MIT Press, 2016.” ICON 23: 200-202.

2016    “Review of Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom, Oxford: Oxford University Press, 2014.” ICON 22: 144-146.

“Without this bit of context, historians will find much of Homo Deus, such as Harari’s claim that ‘once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new kind of process will begin, which people like you and me cannot comprehend,’ incomprehensible.”

From “Review of Homo Deus”

Media Appearances

2020    “HAI Fellow Colin Garvey: A Zen Buddhist Monk’s Approach to Democratizing AI.” Stanford HAI.

2020    Featured Guest. “A Zen Approach to Making AI Work for All of Us,” Techsequences podcast series, July 8.

2019    Contributor. “Talking to Machines: LISP and the Origins of AI.” Red Hat Command Line Heroes podcast, Season 3, Episode 7.

“People are often able to withstand serious suffering if they know it’s meaningful. But I know a lot of young people see a pretty bleak future for humanity and aren’t sure where the meaning is in it all. And so I would love to see AI play a more positive role in solving these serious social problems. But I also see a potential for increased risk and suffering, in a physical way, maybe with killer robots and driverless cars, but potentially also psychological and personal suffering. Anything I can do to reduce that gives my scholarship an orientation and meaning.”

From “A Zen Buddhist Monk’s Approach to Democratizing AI”


Other Publications

2020    with Vural Özdemir, Simon Springer, and Mustafa Bayram. “COVID-19 Health Technology Governance, Epistemic Competence, and the Future of Knowledge in an Uncertain World.” OMICS 24(8): 451-453.

2019    “Hypothesis: Is ‘Terminator Syndrome’ a Barrier to Democratizing Artificial Intelligence and Public Engagement in Digital Health?” OMICS 23(7): 362-363.

2018    “Interview with Colin Garvey, Rensselaer Polytechnic Institute. Artificial Intelligence and Systems Medicine Convergence.” OMICS 22(2): 130-132.

“Yes, futures are always plural. But as with many other emerging technologies, incumbent interests are working hard to establish control over the AI narrative and present it as inevitable. ‘The machines are coming—are you ready, or not?’ For me, this is why AI manifests that ironic paradox of modern technology that the philosopher Hannah Arendt identified—it supposedly makes us more powerful, and yet we’re somehow powerless to resist it!”

From “Interview with Colin Garvey”