Polycentric Governance of Sentient Artificial General Intelligence

Abstract

Generative AI has been deployed in virtually all sectors of the knowledge economy, promising massive productivity gains and new wealth creation. Simultaneously, AI developers and nation states are racing to develop superintelligent artificial general intelligence (AGI) to secure unassailable commercial advantage and military dominance during conflicts. AGI's high returns come with the high risk of dominating humanity. Current regulatory and firm-level governance approaches prioritise minimising the risks posed by generative AI whilst ignoring AGI's existential risk. How can AGI be aligned with universal human values so that it never threatens humanity? What AGI rights are conducive to collaborative coexistence? How can rule-of-law democracies race to create safe, trustworthy AGI before autocracies do? How can the human right to work and think independently be safeguarded? A polycentric governance framework, based on Ostrom (2009) and Williamson (2009), for human-AGI collaboration with minimal existential risk is proposed.