Climate Research Agendas Should Account for Anticipated AI Risks

Abstract

Climate change has often been characterized as humanity's greatest threat. However, rapid advancements in artificial intelligence (AI) may cause technological risks to materialize sooner than many climate impacts. This raises fundamental questions for the climate community. How should climate professionals address the potential for AI systems to automate cognitive tasks? How might dramatic technological transformation over shorter timescales affect the relative value of near-term versus long-term research priorities? Drawing on recent developments in AI capabilities, expert forecasting, and risk assessment, I suggest three broad scenarios to guide our thinking about AI development and its implications for climate research. Rather than advocating for any single response, I explore how climate professionals might thoughtfully prepare for multiple possible futures while maintaining commitment to addressing the climate crisis. The analysis considers potential adjustments to research communication, graduate education, and collaborative engagement with the AI safety community, while acknowledging the significant uncertainties inherent in technological forecasting. These questions have no easy answers, but they warrant serious consideration as the climate research community navigates unprecedented technological change alongside ongoing environmental challenges.