  • Missed opportunities for AI governance: lessons from ELS programs in genomics, nanotechnology, and RRI. Maximilian Braun & Ruth Müller - forthcoming - AI and Society:1-14.
    Since the beginning of the current hype around Artificial Intelligence (AI), governments, research institutions, and industry have invited ethical, legal, and social sciences (ELS) scholars to research AI’s societal challenges from various disciplinary viewpoints and perspectives. This approach builds upon the tradition of supporting research on the societal aspects of emerging sciences and technologies, which started with the Ethical, Legal, and Social Implications (ELSI) Program in the Human Genome Project (HGP) in the early 1990s. However, although a diverse ELS research (...)
  • The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics. Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. It is essential to overcome the challenges of sharing an understanding of the details of the various structures of diverse AI systems, to facilitate effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  • Subjectness of Intelligence: Quantum-Theoretic Analysis and Ethical Perspective. Ilya A. Surov & Elena N. Melnikova - forthcoming - Foundations of Science.
  • Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming. Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  • Augmenting Morality through Ethics Education: the ACTWith model. Jeffrey White - 2024 - AI and Society:1-20.
    Recently in this journal, Jessica Morley and colleagues (AI & SOC 2023 38:411–423) review AI ethics and education, suggesting that a cultural shift is necessary in order to prepare students for their responsibilities in developing technology infrastructure that should shape ways of life for many generations. Current AI ethics guidelines are abstract and difficult to implement as practical moral concerns proliferate. They call for improvements in ethics course design, focusing on real-world cases and perspective-taking tools to immerse students in challenging (...)
  • Integrating ethics in AI development: a qualitative study. Laura Arbelaez Ossa, Giorgia Lorenzini, Stephen R. Milford, David Shaw, Bernice S. Elger & Michael Rost - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and needs. Methods We conducted semi-structured interviews with 41 (...)
  • AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up. Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  • The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel. André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea & Jean Enno Charton - 2023 - Minds and Machines 33 (4):737-760.
    Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine. Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review. Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • A Virtue-Based Framework to Support Putting AI Ethics into Practice. Thilo Hagendorff - 2022 - Philosophy and Technology 35 (3):1-24.
    Many ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, several AI ethics researchers have pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach. This paper proposes a complementary to the principled approach that is based on virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibility and care, all of which represent specific (...)
  • The ethics of algorithms from the perspective of the cultural history of consciousness: first look. Carlos Andres Salazar Martinez & Olga Lucia Quintero Montoya - 2023 - AI and Society 38 (2):763-775.
    Theories related to cognitive sciences, Human-in-the-loop Cyber-physical systems, data analysis for decision-making, and computational ethics make clear the need to create transdisciplinary learning, research, and application strategies to bring coherence to the paradigm of a truly human-oriented technology. Autonomous objects assume more responsibilities for individual and collective phenomena, they have gradually filtered into routines and require the incorporation of ethical practice into the professions related to the development, modeling, and design of algorithms. To make this possible, it is pertinent and (...)
  • In Defence of Principlism in AI Ethics and Governance. Elizabeth Seger - 2022 - Philosophy and Technology 35 (2):1-7.
    It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to if and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and governance. (...)
  • Transformation²: Making software engineering accountable for sustainability. Christoph Schneider & Stefanie Betz - 2022 - Journal of Responsible Technology 10 (C):100027.
  • Four investment areas for ethical AI: Transdisciplinary opportunities to close the publication-to-practice gap. Jana Schaich Borg - 2021 - Big Data and Society 8 (2).
    Big Data and Artificial Intelligence have a symbiotic relationship. Artificial Intelligence needs to be trained on Big Data to be accurate, and Big Data's value is largely realized through its use by Artificial Intelligence. As a result, Big Data and Artificial Intelligence practices are tightly intertwined in real life settings, as are their impacts on society. Unethical uses of Artificial Intelligence are therefore a Big Data problem, at least to some degree. Efforts to address this problem have been dominated by (...)
  • From the Ground Truth Up: Doing AI Ethics from Practice to Principles. James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing protections for (...)
  • Artificial Intelligence Ethics and Safety: practical tools for creating "good" models. Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  • Ética e Segurança da Inteligência Artificial: ferramentas práticas para se criar "bons" modelos [Ethics and Safety of Artificial Intelligence: practical tools for creating "good" models]. Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the aim of promoting awareness of the importance of the ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  • Operationalising AI ethics: barriers, enablers and next steps. Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • The teaching of computer ethics on computer science and related degree programmes. a European survey. Ioannis Stavrakakis, Damian Gordon, Brendan Tierney, Anna Becevel, Emma Murphy, Gordana Dodig-Crnkovic, Radu Dobrin, Viola Schiaffonati, Cristina Pereira, Svetlana Tikhonenko, J. Paul Gibson, Stephane Maag, Francesco Agresta, Andrea Curley, Michael Collins & Dympna O’Sullivan - 2021 - International Journal of Ethics Education 7 (1):101-129.
    Within the Computer Science community, many ethical issues have emerged as significant and critical concerns. Computer ethics is an academic field in its own right and there are unique ethical issues associated with information technology. It encompasses a range of issues and concerns including privacy and agency around personal information, Artificial Intelligence and pervasive technology, the Internet of Things and surveillance applications. As computing technology impacts society at an ever growing pace, there are growing calls for more computer ethics content (...)
  • Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
  • Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review. Robyn Clay-Williams, Elizabeth Austin & Magali Goirand - 2021 - Science and Engineering Ethics 27 (5):1-53.
    A number of Artificial Intelligence (AI) ethics frameworks have been published in the last 6 years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based Healthcare Applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. (...)
  • AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1–30.
    Important decisions that impact humans lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
  • Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - manuscript
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the (...)
  • Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective. Chun-Ping Yen & Tzu-Wei Hung - 2021 - Science and Engineering Ethics 27 (3):1-16.
    Whereas using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Whereas PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked a call to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the targeted groups they aim to protect are (...)
  • The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
  • Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance. Alexander Blanchard, Christopher Thomas & Mariarosaria Taddeo - forthcoming - AI and Society:1-14.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative (...)
  • Predictive policing and algorithmic fairness. Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be (...)
  • The limitation of ethics-based approaches to regulating artificial intelligence: regulatory gifting in the context of Russia. Gleb Papyshev & Masaru Yarime - forthcoming - AI and Society:1-16.
    The effects that artificial intelligence (AI) technologies will have on society in the short- and long-term are inherently uncertain. For this reason, many governments are avoiding strict command and control regulations for this technology and instead rely on softer ethics-based approaches. The Russian approach to regulating AI is characterized by the prevalence of unenforceable ethical principles implemented via industry self-regulation. We analyze the emergence of the regulatory regime for AI in Russia to illustrate the limitations of this approach. The article (...)
  • Microethics for healthcare data science: attention to capabilities in sociotechnical systems. Mark Graves & Emanuele Ratti - 2021 - The Future of Science and Ethics 6:64-73.
    It has been argued that ethical frameworks for data science often fail to foster ethical behavior, and they can be difficult to implement due to their vague and ambiguous nature. In order to overcome these limitations of current ethical frameworks, we propose to integrate the analysis of the connections between technical choices and sociocultural factors into the data science process, and show how these connections have consequences for what data subjects can do, accomplish, and be. Using healthcare as an example, (...)
  • What about investors? ESG analyses as tools for ethics-based AI auditing. Matti Minkkinen, Anniina Niukkanen & Matti Mäntymäki - 2024 - AI and Society 39 (1):329-343.
    Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can strive to go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices. Including nonfinancial information related to environmental, social, and (...)
  • Actionable Principles for Artificial Intelligence Policy: Three Pathways. Charlotte Stix - 2021 - Science and Engineering Ethics 27 (1):1-17.
    In the development of governmental policy for artificial intelligence that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements (...)
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • The Future of Value Sensitive Design. Batya Friedman, David Hendry, Steven Umbrello, Jeroen Van Den Hoven & Daisy Yoo - 2020 - Paradigm Shifts in ICT Ethics: Proceedings of the 18th International Conference ETHICOMP 2020.
    In this panel, we explore the future of value sensitive design (VSD). The stakes are high. Many in public and private sectors and in civil society are gradually realizing that taking our values seriously implies that we have to ensure that values effectively inform the design of technology which, in turn, shapes people’s lives. Value sensitive design offers a highly developed set of theory, tools, and methods to systematically do so.
  • Value preference profiles and ethical compliance quantification: a new approach for ethics by design in technology-assisted dementia care. Eike Buhr, Johannes Welsch & M. Salman Shaukat - forthcoming - AI and Society:1-17.
    Monitoring and assistive technologies (MATs) are being used more frequently in healthcare. A central ethical concern is the compatibility of these systems with the moral preferences of their users—an issue especially relevant to participatory approaches within the ethics-by-design debate. However, users’ incapacity to communicate preferences or to participate in design processes, e.g., due to dementia, presents a hurdle for participatory ethics-by-design approaches. In this paper, we explore the question of how the value preferences of users in the field of dementia (...)
  • Operationalizing the Ethics of Connected and Automated Vehicles. An Engineering Perspective. Fabio Fossa - 2022 - International Journal of Technoethics 13 (1):1-20.
    In response to the many social impacts of automated mobility, in September 2020 the European Commission published Ethics of Connected and Automated Vehicles, a report in which recommendations on road safety, privacy, fairness, explainability, and responsibility are drawn from a set of eight overarching principles. This paper presents the results of an interdisciplinary research project in which philosophers and engineers joined efforts to operationalize the guidelines advanced in the report. To this aim, we endorse a function-based working approach to support the implementation (...)
  • Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change. Elizabeth O’Neill - 2022 - Philosophy and Technology 35 (3):1-25.
    The fast pace of technological change necessitates new evaluative and deliberative tools. This article develops a general, functional approach to evaluating technological change, inspired by Nissenbaum’s theory of contextual integrity. Nissenbaum introduced the concept of contextual integrity to help analyze how technological changes can produce privacy problems. Reinterpreted, the concept of contextual integrity can aid our thinking about how technological changes affect the full range of human concerns and values—not only privacy. I propose a generalized concept of contextual integrity that (...)
  • The tragedy of the AI commons. Travis LaCroix & Aydin Mohseni - 2022 - Synthese 200 (4):1-33.
    Policy and guideline proposals for ethical artificial intelligence research have proliferated in recent years. These are supposed to guide the socially-responsible development of AI for a common good. However, there typically exist incentives for non-cooperation, and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In (...)
  • AI for the public. How public interest theory shifts the discourse on AI. Theresa Züger & Hadi Asghari - 2023 - AI and Society 38 (2):815-828.
    AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary in order for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public (...)
  • Investing in AI for social good: an analysis of European national strategies. Francesca Foffano, Teresa Scantamburlo & Atia Cortés - 2023 - AI and Society 38 (2):479-500.
    Artificial Intelligence (AI) has become a driving force in modern research, industry and public administration and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study focuses on how EU Member States are approaching the promise to develop and use AI for the good of (...)
  • The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2022 - AI and Society 37 (1):215-230.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence. Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)