In this chapter, we consider ethical and philosophical aspects of trust in the practice of medicine. We focus on trust within the patient-physician relationship, trust and professionalism, and trust in Western (allopathic) institutions of medicine and medical research. Philosophical approaches to trust contain important insights into medicine as an ethical and social practice. In what follows we explain several philosophical approaches and discuss their strengths and weaknesses in this context. We also highlight some relevant empirical work in the section on trust in the institutions of medicine. It is hoped that the approaches discussed here can be extended to nursing and other topics in the philosophy of medicine.
Advocates of moral enhancement through pharmacological, genetic, or other direct interventions sometimes explicitly argue, or assume without argument, that traditional moral education and development are insufficient to bring about moral enhancement. Traditional moral education grounded in a Kohlbergian theory of moral development is indeed unsuitable for that task; however, the psychology of moral development and education has come a long way since then. Recent studies support the view that moral cognition is a higher-order process, unified at a functional level, and that a specific moral faculty does not exist. It is more likely that moral cognition involves a number of different mechanisms, each connected to other cognitive and affective processes. Taking this evidence into account, we propose a novel, empirically informed approach to moral development and education, in children and adults, which is based on a cognitive-affective approach to moral dispositions. This is an interpretative approach that derives from the cognitive-affective personality system (Mischel and Shoda, 1995). This conception individuates moral dispositions by reference to the cognitive and affective processes that realise them. Conceived of in this way, moral dispositions influence an agent's behaviour when they interact with situational factors, such as mood or social context. Understanding moral dispositions in this way lays the groundwork for proposing a range of indirect methods of moral enhancement, techniques that promise similar results to direct interventions whilst posing fewer risks.
Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce metaethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, whereas anti-realism undermines the motivation for engineering a moral AI in the first place.
It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, specifically focusing on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays the role of an Archimedean fulcrum in this context, very much like the Archimedean role that it is often taken to play in the context of normative ethics (Dworkin 1996; Dreier 2002; Fantl 2006; Ehrenberg 2008).
Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical, moral dilemmas. AI/ML decisions can thus take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to the erosion of institutional trust in medicine.
This paper provides an analysis of the way in which two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.