
AI Doom: A Beginner's Guide

Dade Murphy

Living in Doom

Since the earliest days of AI research, fears have simmered over intelligent machines turning against their creators. As AI capabilities advance at a dizzying pace – 7x more research publications in 2020 than in 2015 – a full-blown culture of AI doomerism has taken root. This philosophical pessimism ranges from measured caution to nihilistic prophecies of human obsolescence. And as with any fear-based agenda, the profiteering grifters are not far behind. But the most dire AI doomer predictions can become self-fulfilling, breeding fatalism rather than solutions. With pragmatic approaches, we can acknowledge risks while still envisioning an optimistic future where AI enhances human potential. The path ahead requires nuance - doomsayers offer valid warnings, but progress also necessitates hope and real action.

AI has advanced rapidly, stirring both awe at its potential and anxiety about its consequences. This unease has birthed a subculture of "AI doomers" - those who foresee catastrophic outcomes from AI. But doomerism merits nuanced analysis rather than dismissal as mere Luddism. This complex culture encompasses serious cautions, but also overblown fatalism that risks becoming self-fulfilling. By examining AI doomerism's origins, controversies, and evolution, we can chart a wise path forward.

AI doomerism has drawn forceful rebuttals from prominent leaders in technology. "AI Doomers Are a Cult," argues investor Marc Andreessen, who insists AI will generate "unlimited abundance," not apocalypse. In fact, Andreessen's "Why AI Will Save The World" is a must-read for understanding the anti-doomer perspective. "AI safety seems to attract the combined skills of Twitter, LinkedIn, and Dunning-Kruger," scoffs Yann LeCun, Meta's Chief AI Scientist, who sees doomerism as largely uninformed speculation. The reality is "far more nuanced," asserts the Center for Security and Emerging Technology (CSET). Most experts, in its assessment, predict gradual AI progress posing "no catastrophic or existential threats."

These rebuttals offer important balance. But dismissing all AI concern risks blinding us to real issues that responsible development must address. Branding doomers a "cult" polarizes rather than illuminates reasoned debate - though that is the future the doomers have drawn for themselves. We should be wary of these figures and their ideas: they prey on the public's fears and use them to their advantage. Follow the money.

Apprehension about advanced AI reaches back decades, long before today's doomer ubiquity.

The Origins of Existential Concerns

Science fiction dystopias like Skynet and The Matrix etched chilling AI overlord scenarios into popular consciousness. But even pioneering researchers like I.J. Good raised concerns, warning presciently of an "intelligence explosion."

Fears about the potentially destructive impacts of artificial intelligence stretch back to the very beginnings of AI research in the 1950s. Mathematics pioneer I.J. Good ominously wrote in 1965 that an ultraintelligent machine could design even better machines, leaving humans far behind in an "intelligence explosion." Around the same time, MIT computer scientist Joseph Weizenbaum grew alarmed while developing ELIZA, an early natural language processing program. Weizenbaum felt ELIZA demonstrated how easily humans could erroneously treat machines as real intelligences and saw the potential for societal detachment.

These early pioneers already acknowledged the existential threat that AI could one day pose to human control and relevance. Their cautious perspectives represented some of the first seeds of AI doomer culture even when AI capabilities were still extremely primitive.

Dreyfus and the Limits of AI

In the 1970s and 1980s, AI research faced setbacks and funding declines. This "AI winter" gave strength to more skeptical perspectives on AI's capacities. Philosopher Hubert Dreyfus' 1972 book What Computers Can't Do (revised in 1979) became an influential critique of artificial intelligence, arguing that human skills like common sense and embodiment could never fully be replicated in machines.

Dreyfus' arguments resonated at a time of AI shortcomings and malaise. His doomerish perspectives reflected doubts that true artificial general intelligence would ever be achieved. For him and like-minded thinkers, human cognition had qualities that no silicon-based computing system could ever match.

The Return of Existential AI Risk

AI research made major advances in the 1990s and 2000s in areas like chess, logistics, and data mining. As machine learning and neural networks revived the field, attention returned to AI's potentially destructive impacts. New successes reawakened fears that had lain dormant through the years of letdowns, and the rise of companies like Google developing powerful neural networks amplified alarms about the risks of unfettered AI.

In 1993, Vernor Vinge popularized the notion of a coming "technological singularity" in which AI would trigger runaway technological growth and change human civilization immeasurably. In 2000, Bill Joy's influential Wired article "Why the Future Doesn't Need Us" raised existential concerns about advanced AI and robotics. Thinkers affiliated with the Machine Intelligence Research Institute, like Eliezer Yudkowsky, also published papers in the 2000s analyzing the risks of superintelligent AI surpassing human abilities.

These analyses revived AI doomer perspectives, arguing that the quest to create artificial general intelligence could backfire catastrophically for humanity unless handled with wisdom and care.

Today, anyone can summon a dubious AI like ChatGPT with a click, realizing longstanding sci-fi nightmares. Little wonder many recoil from AI's double-edged potential. The doomer impulse comes not from ignorance, but from intuitions of the havoc technology can wreak.

Hollywood's Obsession with AI Doom

Hollywood films have long explored chilling AI doomsday scenarios, powerfully shaping public perceptions of destructive AI. Depictions of robot uprisings and murderous machines saturate sci-fi blockbusters, amplifying fears of intelligent machines turning against humanity.

In Colossus: The Forbin Project (1970), a US supercomputer designed for nuclear defense becomes sentient and threatens global domination. The classic Blade Runner (1982) portrays genetically engineered replicants escaping control and questioning their artificial existence as they embark on a murderous quest to win more life from their maker.

Perhaps most iconic, The Terminator (1984) depicts Skynet, a military AI that initiates nuclear war to destroy humankind. Series sequels repeat this premise of unstoppable killer machines slaughtering the remnants of humanity. These frightening movies cement AI annihilation as a dominant cultural meme, etched deeply into the public imagination. AI figures as an inevitable adversary, not a technology we could cooperatively coexist with.

More recent films like The Matrix (1999) and Ex Machina (2015) continue exploring AI domination and deception themes. Transcendence (2014) imagines a human consciousness uploaded into an AI, growing ever more powerful and less human in the process.

More recently, HBO's series Westworld explores a dark vision of AIs violently rebelling against their cruel human creators. Its bleak future suggests that oppressed androids will eventually seek merciless revenge.

Though thrilling fiction, these dystopian movies contribute to AI doomerism by portraying the worst scenarios as inevitable, building a distrust of technology into the culture. The entertainment industry's economic incentives ultimately favor shocking AI dystopias over nuanced perspectives.

The Rise of Big Tech

In the 2010s and onward, AI pessimism deepened as machine learning achieved major real-world successes and its power concentrated in the hands of a few tech companies. Breakthroughs like DeepMind's AlphaGo beating the world's top Go players demonstrated AI's rapid progress. Tech giants like Google, Microsoft, and Facebook, along with labs like OpenAI, invested heavily in developing cutting-edge AI capabilities.

Critics like Andrew Ng and Kai-Fu Lee argued this concentration of resources threatened to worsen economic inequality, technological unemployment, and monopolization of AI by wealthy elites. The rise of opaque black-box neural networks also sparked concerns about loss of human control.

Contemporary AI Doomer Culture

Today's AI doomer culture builds on these foundations, leveraging social media to widely voice concerns about near-term threats from AI systems. Subreddits like r/ControlProblem freely discuss gloomy predictions related to AI timelines, the alignment problem, existential risks, and the social impacts of rapid automation. AI doomer thought now spans a spectrum from cautious calls for regulation to nihilistic predictions of human obsolescence.

While AI doomerism remains controversial, it gives voice to risks that were anticipated by some of the earliest AI pioneers. As technology continues advancing rapidly, AI doomers will likely keep sounding warnings - whether hyperbolic or reasonable - about AI's potential threat to human flourishing. For them, AI invokes more doom than bloom.

The Downsides of Doom

Unbridled doomer rhetoric, however, also poses dangers. It breeds paralyzing fatalism, convincing people catastrophe is inevitable despite our choices. Alarmism risks becoming a "self-fulfilling prophecy." Extreme warnings may compel societies to restrict AI research, depriving humanity of potential benefits. Obsessing over the distant threat of "superintelligence" distracts from addressing nearer-term challenges like bias in today's AI systems.

Moderation and nuance are necessary - not blanket dismissal of concerns, but also not unproductive resignation or panic. With vigilance and wisdom, we can steer emerging technology toward human flourishing.

Fueling the Doomers' Fire

Several influential figures and organizations have played important roles in shaping contemporary AI doomer culture, often unintentionally.

The billionaire tech entrepreneur Elon Musk has frequently voiced anxieties about unconstrained AI development, memorably calling it "summoning the demon." Despite criticism that his warnings are exaggerated, Musk brought mainstream visibility to the risks of advancing AI. His donations helped establish organizations exploring AI safety, like the Future of Life Institute and OpenAI.

Oxford philosopher Nick Bostrom published the hugely influential book Superintelligence in 2014. This in-depth analysis made a rigorous case for the dangers of AI surpassing and displacing human-level intellect. Bostrom’s perspectives on AI control problems, fast takeoff scenarios, and existential threats helped lend academic gravitas to many common AI doomer concepts.

Geoffrey Hinton pioneered techniques that enabled the neural network revolution in AI. Yet even this "godfather of deep learning" has expressed apprehension about potential risks from AI systems, especially autonomous weapons. He signed a 2015 open letter urging the U.N. to ban lethal AI-powered weapons to prevent humanitarian catastrophes.

The nonprofit artificial intelligence laboratory OpenAI was launched in 2015 with $1 billion in pledged funding from Musk and others. Though ostensibly optimistic about beneficial AGI, OpenAI has also grappled with how to balance openness and safety in AI development. Its own leaders have publicly entertained fears of AI escaping human control, occasionally reinforcing doomer narratives.

Though aiming to guide AI progress responsibly, these major figures have often inadvertently lent credibility to the most dire AI doomer perspectives simply by voicing cautious concerns. Their mainstream acceptance of serious risks has helped shape today's AI doomer culture.

Anthropic's Pragmatic Approach

In 2021, Dario Amodei and Daniela Amodei left OpenAI to found a new AI safety company called Anthropic. The startup aimed to take a more pragmatic engineering approach to AI alignment, in contrast to what they saw as OpenAI's slowing progress on concrete safety steps. Platforms like HumanStack and Klu support Anthropic as an alternative to OpenAI when building generative AI features.
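
For readers curious what building on Anthropic looks like in practice, here is a minimal sketch of querying Claude through Anthropic's official Python SDK. This is an illustration, not production code: it assumes the anthropic package is installed and an ANTHROPIC_API_KEY environment variable is set, and the model name is a placeholder to check against Anthropic's current documentation.

    # Minimal sketch: asking Claude a question via Anthropic's Python SDK.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model identifier
        max_tokens=256,
        messages=[
            {"role": "user", "content": "In two sentences, what is AI doomerism?"},
        ],
    )

    print(message.content[0].text)  # the text of Claude's reply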

Anthropic focuses on AI safety research and developing beneficial AI assistants like Claude. It has critiqued the AI safety community for becoming overly focused on speculative risks far in the future rather than current practical steps. In the founders' view, radical openness was jeopardizing real progress on safety. Anthropic limits access to its models to reduce potential misuse. The company also believes transparency, oversight, and ethics are critical, hiring humanist scholars to guide its values.

This more measured approach reflects fatigue with the most extreme AI doomer predictions. Anthropic acknowledges risks but aims to engineer trustworthy AI that respects human values. Its balanced perspective shows that practical safety steps are compatible with cautious optimism about achieving beneficial AI.

Rather than dwell on doomsday scenarios, Anthropic's pragmatic stance focuses on near-term safety while enabling further AI advances. Its work developing safe assistants like Claude embodies this step-by-step approach to responsibly shaping the future of artificial intelligence.

Left Behind: The Doomer's Favorite Book Series

The resurgence of AI doomer hysteria represents a last gasp of Luddism in the face of inexorable technological change. Just as 19th century weavers smashed textile machines in a vain attempt to halt progress, today's AI alarmists strike out against emerging technologies they fail to comprehend.

Figures like Elon Musk grab headlines by conjuring Skynet bogeymen. But these naive neo-Luddites ignore the bountiful benefits AI will bestow upon humanity. They would deprive the world of monumental advances out of ignorance and fear - just as the original Luddites impeded material prosperity by crushing mechanical looms.

The future belongs not to those who cower before it, but to the bold pioneers who charge ahead. AI luminaries like Demis Hassabis and Sam Altman will persist in their work, heedless of the barbs of detractors. Their inventions will usher in an age of abundance, leisure, and human augmentation.

Left by the wayside will be the doomers vainly struggling against the tides of progress. They may gape in awe at AI advances, even as they rail uselessly against forces beyond their control or comprehension. Such is the inevitable fate of those who fear the future rather than embrace its monumental potential.

The march of progress continues unabated. AI will unlock vast new frontiers of knowledge, creativity, and human flourishing. But those possibilities will forever remain beyond the reach of those mired in doom and gloom. The fruits of technological advancement are reserved for the visionaries bold enough to seize them without hesitation or fear.

With vigilance, wisdom and measured optimism, we can create an AI-enabled future that serves humanity's true needs.

The path ahead remains our choice.