“I AM BECOME DEATH, THE DESTROYER OF WORLDS”: APPLYING STRICT LIABILITY TO ARTIFICIAL INTELLIGENCE AS AN ABNORMALLY DANGEROUS ACTIVITY
Volume 96, No. 3, Summer 2024
By Renee Henson

Artificial intelligence (AI)-enabled tools have produced myriad injuries, up to and including death. This burgeoning technology has prompted scholars to ask questions such as: How do we create a legal framework for AI? Because AI creators have acknowledged that even they do not know the capacities of their technology for good or bad outcomes, this Article argues that an existing framework, strict liability, is an appropriate fit for harms arising from this new technology, because a party need not prove negligence to prevail. Strict liability was developed specifically to handle activities that are “abnormally dangerous.” An abnormally dangerous activity is one that imposes an abnormal risk on anyone in the vicinity of its use.

The quintessential historical example is strict liability as applied to the production of atomic energy. Congress acknowledged that nuclear energy would be extremely beneficial to society but could not be supported by the safety net of insurance, given the potentially catastrophic consequences of its production. Congress therefore enacted the Price-Anderson Act both to establish insurance for nuclear plant operators and to set a liability cap. The Act served as a carrot to encourage prospective nuclear operators and as a protection for the public. The development of nuclear energy is comparable to the development of AI: both share the essential feature that their creators acknowledge the potentially enormous, but not fully understood, capacity of their creations to do harm.

This Article begins by discussing the development of strict liability for emerging technologies deemed “abnormally dangerous.” It then explores the issues associated with applying a strict liability framework to AI and posits that umbrella insurance protection similar to the Price-Anderson Act would be a viable answer to one of the most salient questions in modern history: How do we create a legal framework for AI? This Article argues that regulation should create a compensatory structure for potentially catastrophic harms caused by an unknown (or not fully understood) technology.
