Who Should Bear the Risk When Self‐Driving Vehicles Crash?

The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to the just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in an activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so such vehicles should be programmed to shift risk from bystanders to users, other things being equal. Insofar as vehicles cannot be programmed to take into account all the factors affecting liability, there is a pro tanto moral reason not to introduce them, or to restrict their use.
Upload history
First archival date: 2020-03-03
Latest version: 3 (2020-11-28)