  • Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. Javier Camacho Ibáñez & Mónica Villas Olmeda - 2022 - AI and Society 37 (4):1663-1687.
    Despite the increase in the research field of ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, but not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, (...)
  • The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. Sabine Salloch & Nils B. Heyen - 2021 - BMC Medical Ethics 22 (1):1-9.
    Background: Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care aiming at supporting clinicians’ practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians’ competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation theory (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1-30.
    Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • A unified framework of five principles for AI in society. Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  • Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - manuscript
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing is ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems. Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  • Problems with “Friendly AI”. Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Machine morality, moral progress, and the looming environmental disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  • Artificial Intelligence Regulation: a framework for governance. Patricia Gomes Rêgo de Almeida, Carlos Denner dos Santos & Josivania Silva Farias - 2021 - Ethics and Information Technology 23 (3):505-525.
    This article develops a conceptual framework for regulating Artificial Intelligence (AI) that encompasses all stages of modern public policy-making, from the basics to a sustainable governance. Based on a vast systematic review of the literature on Artificial Intelligence Regulation (AIR) published between 2010 and 2020, a dispersed body of knowledge loosely centred around the “framework” concept was organised, described, and pictured for better understanding. The resulting integrative framework encapsulates 21 prior depictions of the policy-making process, aiming to achieve gold-standard societal (...)
  • Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation. Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner & Julian Nida-Rümelin - 2021 - Philosophy and Technology 34 (4):1085-1108.
    Software systems play an ever more important role in our lives and software engineers and their companies find themselves in a position where they are held responsible for ethical issues that may arise. In this paper, we try to disentangle ethical considerations that can be performed at the level of the software engineer from those that belong in the wider domain of business ethics. The handling of ethical problems that fall into the responsibility of the engineer has traditionally been addressed (...)
  • Organisational responses to the ethical issues of artificial intelligence. Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
  • Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions. Nadisha-Marie Aliman, Leon Kester & Roman Yampolskiy - 2021 - Philosophies 6 (1):6.
    In the last years, artificial intelligence (AI) safety gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on-advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks (...)
  • Good AI for the Present of Humanity: Democratizing AI Governance. Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Language Agents and Malevolent Design. Inchul Yum - 2024 - Philosophy and Technology 37 (104):1-19.
    Language agents are AI systems capable of understanding and responding to natural language, potentially facilitating the process of encoding human goals into AI systems. However, this paper argues that if language agents can achieve easy alignment, they also increase the risk of malevolent agents building harmful AI systems aligned with destructive intentions. The paper contends that if training AI becomes sufficiently easy or is perceived as such, it enables malicious actors, including rogue states, terrorists, and criminal organizations, to create powerful (...)
  • “Democratizing AI” and the Concern of Algorithmic Injustice. Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  • A place where “You can be who you've always wanted to be…” Examining the ethics of intelligent virtual environments. Danielle Shanley & Darian Meacham - 2024 - Journal of Responsible Technology 18 (C):100085.
  • Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
  • AI as Philosophical Ideology: A Critical look back at John McCarthy’s Program. Marc M. Anderson - 2024 - Philosophy and Technology 37 (2):1-24.
    AI has become the poster child for a certain kind of thinking which holds that some technologies can become objective, independent and emergent entities which can evolve beyond the control of their creators. This thinking is not new however. It is a product of certain philosophical ideas such as materialism, a common-sense world of objective and independent objects, a correspondence theory of truth, and so forth, which are centered around the pre-eminence of science, epistemology, and logical reasoning, among others, as (...)
  • From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain. Christopher Thomas, Alexander Blanchard & Mariarosaria Taddeo - 2024 - Philosophy and Technology 37 (1):1-21.
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) define the criteria to inform purpose- and (...)
  • Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance. Alexander Blanchard, Christopher Thomas & Mariarosaria Taddeo - forthcoming - AI and Society:1-14.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative (...)
  • Challenges of responsible AI in practice: scoping review and recommended actions. Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  • Ethical and legal challenges of AI in marketing: an exploration of solutions. Dinesh Kumar & Nidhi Suthar - forthcoming - Journal of Information, Communication and Ethics in Society.
    Purpose: Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions. Design/methodology/approach: The paper synthesises information from academic (...)
  • The poverty of ethical AI: impact sourcing and AI supply chains. James Muldoon, Callum Cant, Mark Graham & Funda Ustek Spilda - forthcoming - AI and Society:1-15.
    Impact sourcing is the practice of employing socio-economically disadvantaged individuals at business process outsourcing centres to reduce poverty and create secure jobs. One of the pioneers of impact sourcing is Sama, a training-data company that focuses on annotating data for artificial intelligence (AI) systems and claims to support an ethical AI supply chain through its business operations. Drawing on fieldwork undertaken at three of Sama’s East African delivery centres in Kenya and Uganda and follow-up online interviews, this article interrogates Sama’s (...)
  • The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel. André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea & Jean Enno Charton - 2023 - Minds and Machines 33 (4):737-760.
    Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at (...)
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Ethics of Decentralized Social Technologies: Lessons from Web3, the Fediverse, and Beyond. Danielle Allen, Woojin Lim, Eli Frankel, Joshua Simons, Divya Siddarth & Glen Weyl - 2023 - Edmond and Lily Safra Center for Ethics.
    This paper argues that the plethora of experiments with decentralized social technologies (DSTs)—clusters of which are sometimes called “the Web 3.0 ecosystem” or “the Fediverse”—have brought us to a constitutional moment. These technologies enable radical innovations in social, economic, and political institutions and practices, with the potential to support transformative approaches to political economy. They demand governance innovation. The paper develops a framework of prudent vigilance for making ethical choices in this space that help to both grasp positive opportunities for (...)
  • Asociación en IA en beneficio de las personas y la sociedad, retos y perspectivas. Fabio Morandín-Ahuerma - 2023 - In Principios normativos para una ética de la Inteligencia Artificial. Puebla, México: Consejo de Ciencia y Tecnología del Estado de Puebla (Concytep). pp. 115-126.
    PAI, the “Partnership on AI to Benefit People and Society”, is a non-profit organisation based in San Francisco, California, that brings together academic and civil-society organisations, technology companies, and media organisations to address substantive questions, chiefly about the future of AI, but also other major global challenges such as climate change, food, (...)
  • Menos, es más: reconstruir una ética clásica normativa para un futuro responsable de la inteligencia artificial. Fabio Morandín-Ahuerma - 2023 - In Principios normativos para una ética de la Inteligencia Artificial. Puebla, México: Consejo de Ciencia y Tecnología del Estado de Puebla (Concytep). pp. 186-205.
    The repetition and unnecessary overlap of similar ethical principles for the development of responsible artificial intelligence not only creates conflict; this confusion and ambiguity can even become dangerous if the postulates are a mere “whitewash” and the true intentions hide behind petty interests. This applies to individuals and companies as much as to governments. The process of establishing laws, norms, standards, and best practices to ensure that AI is beneficial (...)
  • Ética de la IA desde las empresas globales: Microsoft, Google, Meta y Apple. Fabio Morandín-Ahuerma - 2023 - In Principios normativos para una ética de la Inteligencia Artificial. Puebla, México: Consejo de Ciencia y Tecnología del Estado de Puebla (Concytep). pp. 137-161.
    This chapter analyses the ethics proposals for digital and business development of four large international corporations: Microsoft, Google (Alphabet), Facebook (Meta), and Apple. Each of the commitments published on their respective platforms, or the policies shared by their executive leadership, is weighed. While each of these mega-companies, at least on paper, claims a set of values whose integrity is beyond question, it is also true that most have had to face crises over the lack (...)
  • The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition. Ludovico Giacomo Conti & Peter Seele - 2023 - Ethics and Information Technology 25 (4):1-15.
    The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have (...)
  • Artificial intelligence and conversational agent evolution – a cautionary tale of the benefits and pitfalls of advanced technology in education, academic research, and practice. Curtis C. Cain, Carlos D. Buskey & Gloria J. Washington - 2023 - Journal of Information, Communication and Ethics in Society 21 (4):394-405.
    Purpose: The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also highlighting the need for vigilant monitoring to prevent unethical applications. Design/methodology/approach: As AI becomes more prevalent in academia and research, it is crucial to explore ways to ensure ethical usage of the technology and to identify potentially unethical usage. This manuscript uses a popular AI chatbot to write the introduction and parts of the body of a (...)
  • Dual-use implications of AI text generation. Julian J. Koplin - 2023 - Ethics and Information Technology 25 (2):1-11.
    AI researchers have developed sophisticated language models capable of generating paragraphs of 'synthetic text' on topics specified by the user. While AI text generation has legitimate benefits, it could also be misused, potentially to grave effect. For example, AI text generators could be used to automate the production of convincing fake news, or to inundate social media platforms with machine-generated disinformation. This paper argues that AI text generators should be conceptualised as a dual-use technology, outlines some relevant lessons from earlier (...)
  • Anything new under the sun? Insights from a history of institutionalized AI ethics. Simone Casiraghi - 2023 - Ethics and Information Technology 25 (2):1-14.
    Scholars, policymakers and organizations in the EU, especially at the level of the European Commission, have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). However, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenon and (2) the comparison to similar episodes of “ethification” in other fields, to highlight common (unresolved) challenges. Contrary to some mainstream narratives, which stress how the increasing attention to ethical aspects of AI is (...)
  • AI ethics as subordinated innovation network. James Steinhoff - forthcoming - AI and Society:1-13.
    AI ethics is proposed, by the Big Tech companies which lead AI research and development, as the cure for diverse social problems posed by the commercialization of data-intensive technologies. It aims to reconcile capitalist AI production with ethics. However, AI ethics is itself now the subject of wide criticism; most notably, it is accused of being no more than “ethics washing”, a cynical means of dissimulation for Big Tech, while it continues its business operations unchanged. This paper aims to critically (...)
  • Technology ethics assessment: Politicising the ‘Socratic approach’. Robert Sparrow - 2023 - Business Ethics, the Environment and Responsibility (2):454-466.
    That technologies may raise ethical issues is now widely recognised. The ‘responsible innovation’ literature – as well as, to a lesser extent, the applied ethics and bioethics literature – has responded to the need for ethical reflection on technologies by developing a number of tools and approaches to facilitate such reflection. Some of these instruments consist of lists of questions that people are encouraged to ask about technologies – a methodology known as the ‘Socratic approach’. However, to date, these instruments (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults. Alex John London - forthcoming - IEEE Transactions on Technology and Society.
    This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load (...)
  • The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work. Sarah Bankins & Paul Formosa - 2023 - Journal of Business Ethics (4):1-16.
    The increasing workplace use of artificially intelligent (AI) technologies has implications for the experience of meaningful human work. Meaningful work refers to the perception that one’s work has worth, significance, or a higher purpose. The development and organisational deployment of AI is accelerating, but the ways in which this will support or diminish opportunities for meaningful work and the ethical implications of these changes remain under-explored. This conceptual paper is positioned at the intersection of the meaningful work and ethical AI (...)
  • The need for and nature of a normative, cultural psychology of weaponized AI (artificial intelligence). Qin Zhu, Ingvild Bode & Rockwell Clancy - 2023 - Ethics and Information Technology 25 (1):1-6.
    The use of AI in weapons systems raises numerous ethical issues. To date, work on weaponized AI has tended to be theoretical and normative in nature, consisting in critical policy analyses and ethical considerations, carried out by philosophers, legal scholars, and political scientists. However, adequately addressing the cultural and social dimensions of technology requires insights and methods from empirical moral and cultural psychology. To do so, this position piece describes the motivations for and sketches the nature of a normative, cultural (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract. Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland, pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • The case for a broader approach to AI assurance: addressing “hidden” harms in the development of artificial intelligence. Christopher Thomas, Huw Roberts, Jakob Mökander, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) assurance is an umbrella term describing many approaches—such as impact assessment, audit, and certification procedures—used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as (...)
  • A pluralist hybrid model for moral AIs. Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degrees A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia. Junhua Zhu - forthcoming - AI and Society:1-14.
    Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on governance of AI. This article, therefore, provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. This article has three main objectives: to identify the most discussed ethical issues of AI in Chinese academia and those being left out; (...)
  • AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)