It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, focusing specifically on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays the role of an Archimedean fulcrum in this context, very much like the Archimedean role it is often taken to play in the context of normative ethics (Dworkin 1996; Dreier 2002; Fantl 2006; Ehrenberg 2008).
In this paper, I offer an account of the dependence relation between perception of change and the subjective flow of time that is consistent with some extant empirical evidence from priming by unconscious change. This view is inspired by the one offered by William James, but it is articulated in the framework of contemporary functionalist accounts of mental qualities and higher-order theories of consciousness. An additional advantage of this account of the relationship between perception of change and subjective time is that it makes sense of instances where we are not consciously aware of changes but still experience the flow of time.
Quality Space Theory is a holistic model of qualitative states. On this view, individual mental qualities are defined by their locations in a space of relations, which reflects a similar space of relations among perceptible properties. This paper offers an extension of Quality Space Theory to temporal perception. Unconscious segmentation of events, the involvement of early sensory areas, and asymmetries of dominance in multi-modal perception of time are presented as evidence for the view.
In my dissertation I critically survey existing theories of time consciousness, and draw on recent work in neuroscience and philosophy to develop an original theory. My view depends on a novel account of temporal perception based on the notion of temporal qualities, which are mental properties that are instantiated whenever we detect change in the environment. When we become aware of these temporal qualities in an appropriate way, our conscious experience will feature the distinct temporal phenomenology that is associated with the passing of time. The temporal qualities model of perception makes two predictions about the mechanisms of time perception: first, that time perception is modality-specific, and second, that it can occur without awareness. My argument for this view partially depends on a number of psychophysical experiments that I designed and implemented myself, which investigate subjective time distortions caused by looming visual stimuli. These results show that the mechanisms of conscious experience of time are distinct from the mechanisms of time perception, as my theory of temporal qualities predicts.
Predictions about autonomous weapon systems (AWS) are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons and their effect on the development of modern asymmetrical warfare are the best analogy to the introduction of AWS. The final section focuses on this analogy and offers speculations about the likely consequences of AWS being hacked. These speculations tacitly draw on myths and tropes about technology and AI from popular fiction, such as Frankenstein, to project a convincing model of the risks and benefits of AWS deployment.
This paper provides an analysis of the way in which two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of that undermining. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.
Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce metaethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.