Rise of the ‘Machine Defendant’? A Cautionary Analysis and Conceptualisation of Civil and Criminal Liability Approaches to the Actions of Robots and Artificial Intelligence
Machines have been known to cause harm to humans, and liability for such harm has often been sheeted home to the natural or corporate persons behind the machines who have breached legal duties. With the rise of intelligent and autonomous machines that increasingly learn from external sources, the possibility grows of actions and ‘decision making’ by AI that those behind the machines did not anticipate. So too does the possibility that designers, manufacturers, owners, or users will argue that they could not reasonably have foreseen such actions or decisions, and so should not be held legally responsible when those actions cause harm. Proposed solutions to the prospect that victims of harm may be left without a civil or criminal remedy range from laws drawing an analogy with liability for dangerous animals to granting limited legal personality to AI. Yet imposing strict or stricter liability on the manufacturers and promoters of AI appears to be the preferable policy option for remedying such harm for the foreseeable future.