If increasingly panic-stricken headlines are to be believed, Artificial Intelligence poses an existential threat to humanity. The Prime Minister’s advisor on AI has warned that we have just two years to protect the species, and the Center for AI Safety has published a statement signed by hundreds of AI researchers, lawmakers, academics, and industry leaders saying, ‘mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’.
Such claims are worth taking seriously, especially since governments around the world are weighing an array of AI policies. But they should not prompt a rushed pause in AI research or push lawmakers to hamper AI innovation. AI carries risks, but it also offers benefits. Given its potential to improve almost every aspect of our lives, it is worth asking what degree of risk justifies intervention and what such intervention might look like.
Placing AI on a list of risks alongside nuclear war and pandemics is interesting but unhelpful. Nuclear weapons have been used in warfare only twice, by the same military power, against the same enemy, in a single conflict; nuclear warfare is, in other words, exceedingly rare. Pandemics, by contrast, are comparatively frequent. AI's potential to cause mass casualties remains, for now, theoretical.