  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
  • (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
  • (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • AI and society: a virtue ethics approach.Mirko Farina, Petr Zhdanov, Artur Karimov & Andrea Lavazza - 2024 - AI and Society 39 (3):1127-1140.
    Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us with. Likewise, the advent of social robots driven by AI, appears to be shifting the meaning of numerous, long-standing values associated with interpersonal relationships, like friendship. Furthermore, powerful actors’ and institutions’ (...)
  • Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy.Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations.Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1–30.
    Important decisions that impact humans' lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids.Sabine Salloch & Andreas Eriksen - 2024 - American Journal of Bioethics 24 (9):67-78.
    Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the (...)
  • (1 other version)AI and its new winter: from myths to realities.Luciano Floridi - 2020 - Philosophy and Technology 33 (1):1-3.
    An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turn characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as of (...)
  • (2 other versions)The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms.Sábëlo Mhlambi & Simona Tiribelli - 2023 - Topoi 42 (3):867-880.
    Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary in order to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from Western traditional philosophy. In particular, we (...)
  • From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • (2 other versions)The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation.Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.
    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and (...)
  • (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - manuscript
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the (...)
  • Artificial intelligence and work: a critical review of recent research from the social sciences.Jean-Philippe Deranty & Thomas Corbin - forthcoming - AI and Society:1-17.
    This review seeks to present a comprehensive picture of recent discussions in the social sciences of the anticipated impact of AI on the world of work. Issues covered include: technological unemployment, algorithmic management, platform work and the politics of AI work. The review identifies the major disciplinary and methodological perspectives on AI’s impact on work, and the obstacles they face in making predictions. Two parameters influencing the development and deployment of AI in the economy are highlighted: the capitalist imperative and (...)
  • What does it mean to embed ethics in data science? An integrative approach based on the microethics and virtues.Louise Bezuidenhout & Emanuele Ratti - 2021 - AI and Society 36:939–953.
    In the past few years, scholars have been questioning whether the current approach in data ethics, based on high-level case studies and general principles, is effective. In particular, some have complained that such an approach to ethics is difficult to apply and to teach in the context of data science. In response to these concerns, there have been discussions about how ethics should be “embedded” in the practice of data science, in the sense of showing (...)
  • The Ethics of Online Controlled Experiments (A/B Testing).Andrea Polonioli, Riccardo Ghioni, Ciro Greco, Prathm Juneja, Jacopo Tagliabue, David Watson & Luciano Floridi - 2023 - Minds and Machines 33 (4):667-693.
    Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This article fills this gap in the literature by introducing a new, soft ethics and governance framework that explicitly recognizes how the rise of an experimentation culture in industry settings brings not only unprecedented opportunities to businesses but also significant responsibilities. More precisely, the article (a) introduces a (...)
  • Ethical Principles for Artificial Intelligence in National Defence.Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar - 2021 - Philosophy and Technology 34 (4):1707-1729.
    Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)
  • The End of an Era: from Self-Regulation to Hard Law for the Digital Industry.Luciano Floridi - 2021 - Philosophy and Technology 34 (4):619-622.
  • Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias.Ying-Tung Lin, Tzu-Wei Hung & Linus Ta-Lun Huang - 2020 - Philosophy and Technology 34 (S1):65-90.
    This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case of human (...)
  • Cultivating Moral Attention: a Virtue-Oriented Approach to Responsible Data Science in Healthcare.Emanuele Ratti & Mark Graves - 2021 - Philosophy and Technology 34 (4):1819-1846.
    In the past few years, the ethical ramifications of AI technologies have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance to widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing from microethics and the virtue theory tradition, (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US.Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (6):1-25.
    Over the past few years, there has been a proliferation of artificial intelligence strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union and the United States’ AI strategies and considers the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, the extent to which the implementation of each vision is living up to (...)
  • Justice by Algorithm: The Limits of AI in Criminal Sentencing.Isaac Taylor - 2023 - Criminal Justice Ethics 42 (3):193-213.
    Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms (...)
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence.Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)
  • Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review.Robyn Clay-Williams, Elizabeth Austin & Magali Goirand - 2021 - Science and Engineering Ethics 27 (5):1-53.
    A number of Artificial Intelligence (AI) ethics frameworks have been published in the last 6 years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based Healthcare Applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. (...)
  • Harmonizing Artificial Intelligence for Social Good.Nicolas Berberich, Toyoaki Nishida & Shoko Suzuki - 2020 - Philosophy and Technology 33 (4):613-638.
    To become more broadly applicable, positions on AI ethics require perspectives from non-Western regions and cultures such as China and Japan. In this paper, we propose that the addition of the concept of harmony to the discussion on ethical AI would be highly beneficial due to its centrality in East Asian cultures and its applicability to the challenge of designing AI for social good. We first present a synopsis of different definitions of harmony in multiple contexts, such as music and (...)
  • Operationalizing the Ethics of Connected and Automated Vehicles. An Engineering Perspective.Fabio Fossa - 2022 - International Journal of Technoethics 13 (1):1-20.
    In response to the many social impacts of automated mobility, in September 2020 the European Commission published Ethics of Connected and Automated Vehicles, a report in which recommendations on road safety, privacy, fairness, explainability, and responsibility are drawn from a set of eight overarching principles. This paper presents the results of an interdisciplinary research where philosophers and engineers joined efforts to operationalize the guidelines advanced in the report. To this aim, we endorse a function-based working approach to support the implementation (...)
  • Unintentional intentionality: art and design in the age of artificial intelligence.Kostas Terzidis, Filippo Fabrocini & Hyejin Lee - 2023 - AI and Society 38 (4):1715-1724.
    This paper presents an emerging aspect of intentionality through recent Artificial Intelligence (AI) developments in art and design. Our main thesis is that, if we focus just on the outcome of the artistic process, the intentionality of the artist does not have any relevance. Intention is measured as a result of actions regardless of whether they are human-based or not as long as there is an esthetical value intersubjectively acknowledged. In other words, what matters is the ‘intentio’ embedded in the (...)
  • Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics.Johan Rochel & Florian Evéquoz - 2021 - AI and Society 36 (2):609-622.
    Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different (...)
  • Supporting Trustworthy AI Through Machine Unlearning.Emmie Hine, Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2024 - Science and Engineering Ethics 30 (5):1-13.
    Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy (...)
  • Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  • Fairness as Equal Concession: Critical Remarks on Fair AI.Christopher Yeomans & Ryan van Nood - 2021 - Science and Engineering Ethics 27 (6):1-14.
    Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of ‘treat like cases alike’ and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a meta-theory for understanding tradeoffs, (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it.Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - 2024 - AI and Society 39 (5):2183-2199.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
  • An Ethical Framework for the Design, Development, Implementation, and Assessment of Drones Used in Public Healthcare.Dylan Cawthorne & Aimee Robbins-van Wynsberghe - 2020 - Science and Engineering Ethics 26 (5):2867-2891.
    The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four (...)
  • Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation.Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner & Julian Nida-Rümelin - 2021 - Philosophy and Technology 34 (4):1085-1108.
    Software systems play an ever more important role in our lives and software engineers and their companies find themselves in a position where they are held responsible for ethical issues that may arise. In this paper, we try to disentangle ethical considerations that can be performed at the level of the software engineer from those that belong in the wider domain of business ethics. The handling of ethical problems that fall into the responsibility of the engineer has traditionally been addressed (...)
  • (1 other version)Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems.Alexander Blanchard & Mariarosaria Taddeo - 2022 - Journal of Military Ethics 21 (3):286-303.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of (...)
  • The ethical use of artificial intelligence in human resource management: a decision-making framework.Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly inputting into various human resource management functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of (...)
  • No such thing as one-size-fits-all in AI ethics frameworks: a comparative case study.Vivian Qiang, Jimin Rhim & AJung Moon - forthcoming - AI and Society:1-20.
    Despite the bombardment of AI ethics frameworks (AIEFs) published in the last decade, it is unclear which of the many have been adopted in the industry. What is more, the sheer volume of AIEFs without a clear demonstration of their effectiveness makes it difficult for businesses to select which framework they should adopt. As a first step toward addressing this problem, we employed four different existing frameworks to assess AI ethics concerns of a real-world AI system. We compared the experience (...)
  • A Neo-Republican Critique of AI ethics.Jonne Maas - 2022 - Journal of Responsible Technology 9 (C):100022.
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • Justice and the Normative Standards of Explainability in Healthcare.Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • The German Act on Autonomous Driving: Why Ethics Still Matters.Alexander Kriebitz, Raphael Max & Christoph Lütge - 2022 - Philosophy and Technology 35 (2):1-13.
    The German Act on Autonomous Driving constitutes the first national framework on level four autonomous vehicles and has received attention from policy makers, AI ethics scholars and legal experts in autonomous driving. Owing to Germany’s role as a global hub for car manufacturing, the following paper sheds light on the act’s position within the ethical discourse and how it reconfigures the balance between legislation and ethical frameworks. Specifically, in this paper, we highlight areas that need to be more worked out (...)