  • Biomimicry and AI-Enabled Automation in Agriculture. Conceptual Engineering for Responsible Innovation. Marco Innocenti - 2025 - Journal of Agricultural and Environmental Ethics 38 (2):1-17.
    This paper aims to engineer the concept of biomimetic design for its application in agricultural technology as an innovation strategy to sustain non-human species’ adaptation to today’s rapid environmental changes. By questioning the alleged intrinsic morality of biomimicry, a formulation of it is sought that goes beyond the sharp distinction between nature as inspiration and the human field of application of biomimetic technologies. After reviewing the main literature on Responsible Innovation, we support Vincent Blok’s “eco-centric” perspective on biomimicry, which considers (...)
  • Pluralism and the Design of Autonomous Vehicles. Adam Henschke & Chirag Arora - 2024 - Philosophy and Technology 37 (3):1-19.
    This paper advocates for an ethical analysis of autonomous vehicle systems (AVSs) based on a moral epistemic pluralism. This paper contends that approaching the design of intricate social technologies, such as AVSs, is most effective when acknowledging a diverse range of values. Additionally, a comprehensive ethical framework for autonomous vehicles should be applied across two interconnected layers. The first layer centers on the individual level, where each autonomous vehicle becomes a unit of moral consideration. The second layer focuses on the (...)
  • A sociotechnical system perspective on AI. Olya Kudina & Ibo van de Poel - 2024 - Minds and Machines 34 (3):1-9.
  • Toward Sociotechnical AI: Mapping Vulnerabilities for Machine Learning in Context. Roel Dobbe & Anouk Wolters - 2024 - Minds and Machines 34 (2):1-51.
    This paper provides an empirical and conceptual account on seeing machine learning models as part of a sociotechnical system to identify relevant vulnerabilities emerging in the context of use. As ML is increasingly adopted in socially sensitive and safety-critical domains, many ML applications end up not delivering on their promises, and contributing to new forms of algorithmic harm. There is still a lack of empirical insights as well as conceptual tools and frameworks to properly understand and design for the impact (...)
  • Transparency for AI systems: a value-based approach. Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail (...)
  • Ethics of generative AI and manipulation: a design-oriented research agenda. Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
  • Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Ectogestative Technology and the Beginning of Life. Lily Frank, Julia Hermann, Ilona Kavege & Anna Puzio - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 113-140.
    How could ectogestative technology disrupt gender roles, parenting practices, and concepts such as ‘birth’, ‘body’, or ‘parent’? In this chapter, we situate this emerging technology in the context of the history of reproductive technologies and analyse the potential social and conceptual disruptions to which it could contribute. An ectogestative device, better known as ‘artificial womb’, enables the extra-uterine gestation of a human being, or mammal more generally. It is currently developed with the main goal of improving the survival chances of (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Design for values and conceptual engineering. Herman Veluwenkamp & Jeroen van den Hoven - 2023 - Ethics and Information Technology 25 (1):1-12.
    Politicians and engineers are increasingly realizing that values are important in the development of technological artefacts. What is often overlooked is that different conceptualizations of these abstract values lead to different design-requirements. For example, designing social media platforms for deliberative democracy sets us up for technical work on completely different types of architectures and mechanisms than designing for so-called liquid or direct forms of democracy. Thinking about Democracy is not enough; we need to design for the proper conceptualization of these (...)
  • Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  • Responsibility gaps and the reactive attitudes. Fabio Tollon - 2022 - AI and Ethics 1 (1).
    Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this (...)
  • The Role of Engineers in Harmonising Human Values for AI Systems Design. Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
  • Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • Big Tech corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age. Marianna Capasso & Steven Umbrello - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 231-249.
    The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of this (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems. Steven Umbrello - 2021 - Dissertation, Consortium Fino.
    The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently the tracking (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1-30.
    Important decisions that impact humans' lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots. Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - 2021 - Minds and Machines 31 (3):395-419.
    Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design approach to technology design, this paper extends its application to care robots by integrating the values of care, values that (...)
  • Global justice and the use of AI in education: ethical and epistemic aspects. Aleksandra Vučković & Vlasta Sikimić - forthcoming - AI and Society:1-18.
    One of the biggest contemporary challenges in education is the appropriate application of advanced digital solutions. If properly implemented, AI could benefit students, opening the door for personalized study programs. However, we need to ensure that AI in classrooms is used responsibly and that it does not pose a threat to students in any way. More specifically, we need to preserve the moral and epistemic values we wish to pass on to future generations and ensure the inclusion of underprivileged students. (...)
  • In Search of a Mission: Artificial Intelligence in Clinical Ethics. Nikola Biller-Andorno, Andrea Ferrario & Sophie Gloeckler - 2022 - American Journal of Bioethics 22 (7):23-25.
    Artificial intelligence has found its way into many areas of human life, serving a range of purposes. Sometimes AI tools are designed to help humans eliminate high-volume, tedious, routine tas...
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI? Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are met. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Andrea Ferrario, Alessandro Facchini & Alberto Termine - 2024 - Minds and Machines 34 (3):1-27.
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)
  • Intentionality gap and preter-intentionality in generative artificial intelligence. Roberto Redaelli - forthcoming - AI and Society:1-8.
    The emergence of generative artificial intelligence, such as large language models and text-to-image models, has had a profound impact on society. The ability of these systems to simulate human capabilities such as text writing and image creation is radically redefining a wide range of practices, from artistic production to education. While there is no doubt that these innovations are beneficial to our lives, the pervasiveness of these technologies should not be underestimated, and it raises increasingly pressing ethical questions that require a (...)
  • What does it mean to trust blockchain technology? Yan Teng - 2022 - Metaphilosophy 54 (1):145-160.
    This paper argues that the widespread belief that interactions between blockchains and their users are trust-free is inaccurate and misleading, since this belief not only overlooks the vital role played by trust in the lack of knowledge and control but also conceals the moral and normative relevance of relying on blockchain applications. The paper reaches this argument by providing a close philosophical examination of the concept referred to as trust in blockchain technology, clarifying the trustor group, the structure, and the (...)
  • Toward children-centric AI: a case for a growth model in children-AI interactions. Karolina La Fors - 2024 - AI and Society 39 (3):1303-1315.
    This article advocates for a hermeneutic model for children-AI (age group 7–11 years) interactions in which the desirable purpose of children’s interaction with artificial intelligence (AI) systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing (...)
  • Contestable AI by Design: Towards a Framework. Kars Alfrink, Ianus Keller, Gerd Kortuem & Neelke Doorn - 2023 - Minds and Machines 33 (4):613-639.
    As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is (...)
  • Explainable Artificial Intelligence in Data Science. Joaquín Borrego-Díaz & Juan Galán-Páez - 2022 - Minds and Machines 32 (3):485-531.
    A widespread need to explain the behavior and outcomes of AI-based systems has emerged, due to their ubiquitous presence. Thus, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing control transference to this kind of system for decision making -or, at least, its use for assisting executive stakeholders- already affects many sensitive realms (as in Politics, Social Sciences, or Law). The decision-making power handover to (...)
  • Reckoning with assessment: Can we responsibly innovate? [REVIEW] Steven Umbrello - 2021 - Metascience 30 (1):41-43.
    A new edited volume by Emad Yaghmaei and Ibo van de Poel, Assessment of Responsible Innovation: Methods and Practices, is reviewed. Responsible innovation (RI) is a project into the ethical and design issues that emerge during the engineering programs of new technologies. This volume is intended to determine how, if at all, RI practices can be validated and assessed for success in context.
  • Impactful Conceptual Engineering: Designing Technological Artefacts Ethically. Herman Veluwenkamp - forthcoming - Ethical Theory and Moral Practice:1-16.
    Conceptual engineering is the design, evaluation and implementation of concepts. Despite its popularity, some have argued that the methodology is not worthwhile, because the implementation of new concepts is both inscrutable and beyond our control. In the recent literature we see different responses to this worry. Some have argued that it is for political reasons just as well that implementation is such a difficult task, while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation. In (...)
  • Beyond the Digital Public Sphere: Towards a Political Ontology of Algorithmic Technologies. Jordi Viader Guerrero - 2024 - Philosophy and Technology 37 (3):1-23.
    The following paper offers a political and philosophical reading of ethically informed technological design practices to critically tackle the implicit regulative ideal in the design of social media as a means to digitally represent the liberal public sphere. The paper proposes that, when it comes to the case of social media platforms, understood along with the machine learning algorithms embedded in them as algorithmic technologies, ethically informed design has an implicit conception of democracy that parallels that of Jürgen Habermas’ procedural (...)
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • Rethinking Remote Work, Automated Technologies, Meaningful Work and the Future of Work: Making a Case for Relationality. Edmund Terem Ugar - 2023 - Philosophy and Technology 36 (2):1-21.
    Remote work, understood here as a working environment different from the traditional office working space, is a phenomenon that has existed for many years. In the past, workers voluntarily opted, when they were allowed to, to work remotely rather than commuting to their traditional work environment. However, with the emergence of the global pandemic (corona virus-COVID-19), people were forced to work remotely to mitigate the spread of the virus. Consequently, researchers have identified some benefits and adverse effects of remote work, (...)
  • AI for the public. How public interest theory shifts the discourse on AI. Theresa Züger & Hadi Asghari - 2023 - AI and Society 38 (2):815-828.
    AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary in order for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public (...)
  • Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Juan M. Durán - 2021 - Artificial Intelligence 297 (C):103498.
  • Analyzing the justification for using generative AI technology to generate judgments based on the virtue jurisprudence theory. Shilun Zhou - 2024 - Journal of Decision Systems 1:1-24.
    This paper responds to the question of whether judgements generated by judges using ChatGPT can be directly adopted. It posits that it is unjust for judges to rely on and directly adopt ChatGPT-generated judgements based on virtue jurisprudence theory. This paper innovatively applies case-based empirical analysis and is the first to use virtue jurisprudence approach to analyse the question and support its argument. The first section reveals the use of generative AI-based tools in judicial practice and the existence of erroneous (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be timely addressed. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it is going unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning. Ryan Jenkins, Kristian Hammond, Sarah Spurlock & Leilani Gilpin - forthcoming - AI and Society:1-14.
    In this paper, we outline a new method for evaluating the human impact of machine-learning applications. In partnership with Underwriters Laboratories Inc., we have developed a framework to evaluate the impacts of a particular use of machine learning that is based on the goals and values of the domain in which that application is deployed. By examining the use of artificial intelligence in particular domains, such as journalism, criminal justice, or law, we can develop more nuanced and practically relevant understandings (...)
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
  • Ethical considerations in risk management of autonomous and intelligent systems. Anetta Jedličková - 2024 - Ethics and Bioethics (in Central Europe) 14 (1-2):80-95.
    The rapid development of Artificial Intelligence (AI) has raised concerns regarding the potential risks it may pose to humans, society, and the environment. Recent advancements have intensified these concerns, emphasizing the need for a deeper understanding of the technical, societal, and ethical aspects that could lead to adverse or harmful failures in decisions made by autonomous and intelligent systems (AIS). This paper aims to examine the ethical dimensions of risk management in AIS. Its objective is to highlight the significance of (...)
  • Apprehending AI moral purpose in practical wisdom. Mark Graves - 2022 - AI and Society:1-14.
    Practical wisdom enables moral decision-making and action by aligning one’s apprehension of proximate goods with a distal, socially embedded interpretation of a more ultimate Good. A focus on purpose within the overall process mutually informs human moral psychology and moral AI development in their examinations of practical wisdom. AI practical wisdom could ground an AI system’s apprehension of reality in a sociotechnical moral process committed to orienting AI development and action in light of a pluralistic, diverse interpretation of that Good. (...)
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence. Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Basic values in artificial intelligence: comparative factor analysis in Estonia, Germany, and Sweden. Anu Masso, Anne Kaun & Colin van Noordt - 2024 - AI and Society 39 (6):2775-2790.
    Increasing attention is paid to ethical issues and values when designing and deploying artificial intelligence (AI). However, we do not know how those values are embedded in artificial artefacts or how relevant they are to the population exposed to and interacting with AI applications. Based on literature engaging with ethical principles and moral values in AI, we designed an original survey instrument, including 15 value components, to estimate the importance of these values to people in the general population. The article (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design. Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within the value sensitive (...)
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)