Top 10 AI Doomers

Dade Murphy

Who are the top 10 AI doomers?

Like any good site participating in internet modernity, we too have a top 10 list. This one ranks the top 10 AI doomers by their influence, reach, and the "quality" of their content, with a brief description of each doomer and their work.

Alexey Turchin

Alexey is many things, but he is most famous for creating at least three AGI/AI risk charts. Here is GPT-4's summary of the content of one of them:

The chart depicts a hypothetical scenario based on the concept of Artificial Intelligence (AI), specifically one in which AI becomes hostile or harmful to humans. While it draws on some existing technologies and theories, such as AI, autonomous weapons, and cyber warfare, it extrapolates them in a speculative manner, relying on many assumptions and extreme worst-case scenarios that are not necessarily supported by solid scientific evidence or consensus.

The chart explores a wide range of potential AI-related disasters: global catastrophes orchestrated by AIs, military drones attacking humans, hacking of infrastructure, war between different AIs, and various forms of AI malfunction or malevolence.

While some aspects of this content (such as the risks of AI malfunction or misuse, the potential for autonomous weapons, or the vulnerability of infrastructure to cyber attacks) are areas of real concern and active discussion among scientists, ethicists, and policymakers, the content as a whole is highly speculative and not grounded in empirical evidence or widely accepted scientific theories. Other aspects, such as AI-induced human extinction, human-robot hybrids, AI-induced human enslavement, or AI conquering the universe, belong more to the realm of science fiction than to scientific reality.

Therefore, while this content might stimulate interesting discussion about the ethical, societal, and security implications of AI, it should not be regarded as a realistic or accurate prediction of future developments.
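
For readers curious how a summary like the one above might be produced programmatically, here is a minimal sketch using the OpenAI Python SDK. The prompt wording, the model choice, and the idea of pasting the chart's transcribed text into the prompt are all illustrative assumptions; the post does not say how the summary was actually generated.

```python
# Minimal sketch: ask GPT-4 to summarize and assess an AI-risk chart.
# Assumes the chart's text has been transcribed by hand into chart_text;
# the prompt and model name here are illustrative, not the post's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

chart_text = "..."  # placeholder for the transcribed chart content

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a careful analyst. Summarize the following "
                "AGI/AI-risk chart and assess how speculative it is."
            ),
        },
        {"role": "user", "content": chart_text},
    ],
)

print(response.choices[0].message.content)
```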

Research

Researchers and organizations like OpenAI, Google DeepMind, and the Center for AI Safety are actively working on AI alignment and safety. They aim to develop AI systems that are aligned with human values and follow human intent, and to manage the risks associated with powerful AI systems. The field acknowledges the potential risks of unaligned AI, such as disempowering humanity or even causing human extinction if AI systems come to outperform humans at most economically valuable tasks.

Critics

Critics argue that current AI alignment research may not be sufficient to address all possible risks. Some believe that AI alignment is a fool's errand and that there will always be a trade-off between AI capabilities and our ability to control them. Researchers in the field, however, continue to refine their understanding of AI alignment and to develop new techniques that make AI systems safer and more aligned.

Difference Between Research and Fear

It is essential to differentiate between the hype generated by some influencers and media companies and the genuine efforts of researchers and organizations working on AI alignment and safety. While it is true that some influencers may exaggerate the risks or hype the issues without providing solutions, the AI alignment field is a legitimate area of research with many dedicated professionals working to address the challenges and mitigate the risks associated with AI systems.