Report
October 31, 2024
Executive Summary
New and increasingly capable artificial intelligence (AI) applications are a fact of life. They promise great advances in human welfare, but they have also engendered fears of misalignment with human values and objectives, misalignment that could lead at best to harm to individuals and at worst to catastrophic societal outcomes, even threats to human survival. Consequently, considerable attention has been given to whether AI applications should be regulated and, if so, what form that regulation should take. In both the EU and the US, the focus has been on using risk management processes to ensure safe development and deployment and to establish confidence in AI use.
Risk management processes and safety regimes draw on a long history of developing computer applications based on models of mathematical, scientific, and engineering precision, and this approach is likely adequate for managing the risks associated with “good, old-fashioned” symbolic AI. However, a new generation of pretrained generative AIs (GAIs) is not well suited to governance and management through risk management processes, because these systems are built for continuous adaptation and open-ended variety rather than for constraint and ever-greater precision. They will also likely interact with complex, dynamic human systems, generating great uncertainty. Managing uncertainty is different from managing risk, so GAIs require a different sort of regulatory framework.
This report explores the distinction between risk and uncertainty in AI. It illustrates why existing risk management arrangements are insufficient to prevent truly unexpected harms from GAIs. It argues that what is needed instead is a set of arrangements for managing the consequences of harms when they arise, without chilling the incentives for innovative development and competitive deployment of GAIs. Insurance arrangements for managing outcome uncertainties arguably provide a more constructive way forward than risk management regimes, which presume a knowledge of outcomes that is simply not available.