Who Should Bear the Risk When Self-Driving Vehicles Crash?

Journal of Applied Philosophy 38 (4):630-645 (2020)

Abstract

The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to the just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so that such vehicles should be programmed to shift the risk from bystanders to users, other things being equal. Insofar as vehicles cannot be programmed to take all the factors affecting liability into account, there is a pro tanto moral reason not to introduce them, or to restrict their use.

Author's Profile

Antti Kauppinen
University of Helsinki
