Artificial Intelligence, Existential Risk, and Why We Should Pay Attention to the Warnings from Silicon Valley
Abstract
Artificial intelligence (AI) development has proceeded rapidly since the release of ChatGPT in November 2022, with several major companies (e.g., OpenAI, Meta, and Microsoft AI) having publicly stated their intention to build artificial superintelligence (ASI). Although concerns about AI are often raised in public discourse, the focus is rarely on the existential threats posed by such technology. If ASI were misaligned with human values, extinction could occur simply because humanity presents an impediment to its goals. Existential risk could also arise from humans using the technology for destructive purposes (e.g., creating bioweapons), or through unintended consequences of other technological developments (e.g., nanotechnology, mirror life). Of course, there may also be risks that we cannot foresee. Many working at AI companies, including the CEOs of OpenAI, Google DeepMind, Anthropic, and xAI, have warned that the risk of human extinction may be far from negligible. Public opinion indicates that people want development to slow down, yet these same companies race ever faster to build the technology anyway. Given the nature of exponential developmental trajectories, the existential risks of AI could materialize much sooner than many predict. Greater awareness and discussion of these issues are warranted.