Abstract
While autonomous vehicles (AVs) are not designed to harm people, harming people is an inevitable by-product of their operation. How are AVs to deal ethically with situations in which harming people is unavoidable? Rather than focusing only on the much-discussed question of what choices AVs should make, we can also ask the much less discussed question of who gets to decide what AVs should do in such cases. Here there are two key options: AVs with a personal ethics setting (PES), or "ethical knob," that end users can control, or AVs with a mandatory ethics setting (MES) that end users cannot control. Which option, a PES or an MES, is best, and why? Drawing on the choice architecture literature, this chapter argues for a hybrid view: default ethics settings should be mandated, while limited end-user control is permitted.