The recent artificial intelligence safety summit convened by U.K. Prime Minister Rishi Sunak has sparked discussions about the creation of an ‘IPCC for AI’ to assess risks from AI and guide its governance. This article examines the potential pitfalls of such an international advisory panel.
An International Panel on AI Safety
At the recent summit, Sunak announced an agreement among like-minded governments to establish an International Panel on AI Safety (IPAIS), modeled after the Intergovernmental Panel on Climate Change (IPCC). The IPCC synthesizes scientific literature on climate change into assessment reports to inform climate policy. Similarly, an IPAIS would distill technical research on AI into synopses of capabilities, timelines, risks, and policy options for global policymakers.
An IPAIS would provide regular evaluations of AI systems, offer predictions about technological progress and potential impacts, and could potentially play a role in approving frontier AI models before market release. In fact, Sunak negotiated an agreement with leading tech companies and attending countries to subject advanced AI models to government supervision before release.
The Lessons from the IPCC
While the concept of an IPCC for AI may sound appealing, it is important to learn from the mistakes and criticisms faced by the IPCC. The IPCC has been criticized for presenting an overly pessimistic view of climate change, downplaying uncertainties and positive trends. There have also been concerns about groupthink and the influence of ideologically aligned scientists in the IPCC’s assessment process.
Similarly, the U.K. AI safety summit has been criticized for its lack of diversity in viewpoints and narrow focus on existential risks, suggesting bias may already be present in the IPAIS even before its official creation.
Challenges of Creating an Elite Committee
Creating elite committees to guide policy on complex issues is not a new phenomenon. However, history has shown that relying solely on intellectual elites can lead to errors. In the Middle Ages, power was concentrated in the hands of the clergy who claimed to interpret arcane information for the common man. Today, technical AI and climate research can intimidate the layperson with complex statistics and models, leading to a similar message: heed our wisdom or face doom.
But history has also shown that the intellectual elite can err. The Catholic Church hindered scientific progress and persecuted those who challenged its doctrines. Nations that embraced economic and technological dynamism flourished, while those that closed themselves off stagnated. Similarly, climate activists have resisted innovations like genetically modified crops and nuclear energy, despite their potential to reduce poverty and protect the planet.
Pitfalls of Centralized AI Governance
Creating a global AI governance body, akin to an IPCC for AI, invites several pitfalls. First, it blurs the line between policy advocacy and science, potentially marginalizing diverse perspectives and stifling scientific debate. Second, it discourages jurisdictional competition and promotes one-size-fits-all commitments, disregarding the different risk tolerances and values of individual nations. Third, regulations based on precautionary international bodies are likely to be overly pessimistic and restrictive, forgoing the potential benefits of AI applications.
AI has the potential to benefit civilization in numerous ways, from healthcare innovations to environmental sustainability. However, overly stringent regulations based on alarmist predictions and centralized vetting procedures could impede its progress. Instead, decentralized policies targeted at concrete harms, along with research and education from diverse viewpoints, offer a more balanced path forward.
While the idea of an IPCC for AI may seem attractive, it is important to consider the drawbacks and potential pitfalls. Relying solely on an intellectual elite to guide AI governance may repeat the mistakes of the past. Instead, a more balanced approach that encourages diverse perspectives, decentralized policies, and rigorous scientific debate is necessary for thoughtful AI governance.