  • Procedural fairness in algorithmic decision-making: the role of public engagement.Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite their widespread use, automated decision-making (ADM) systems are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  • Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.Salla Westerstrand - 2024 - Science and Engineering Ethics 30 (5):1-21.
    The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies (...)
  • “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  • Find the Gap: AI, Responsible Agency and Vulnerability.Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
  • When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis.Kathryn Muyskens, Yonghui Ma, Jerry Menikoff, James Hallinan & Julian Savulescu - forthcoming - Asian Bioethics Review:1-17.
    Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are indeed manifold and revolutionary, and within the realm of medical imaging and radiology (which will be the focus of this paper), significant increases in accuracy and speed, as well as significant savings in cost, stand to be gained through the adoption of this technology. Because of its novelty, a norm of keeping humans “in the loop” wherever AI mechanisms are deployed (...)
  • Machine agency and representation.Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems do not, and so cannot be agents. Properly understood, there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition.Ludovico Giacomo Conti & Peter Seele - 2023 - Ethics and Information Technology 25 (4):1-15.
    The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have (...)
  • (1 other version)Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland, pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
  • The Singular Plurality of Social Goods / La singolare pluralità dei beni sociali.Marco Emilio - 2022 - Dissertation, Université de Neuchâtel
    According to some philosophers and social scientists, mainstream economic theories currently play an unprecedented role in shaping human societies. This phenomenon can be linked to the dissemination of methodological individualism, where common goods are interpreted as reducible to aggregates of individuals' well-being. Nonetheless, some emergent difficulties of economics in coping with global institutional issues have encouraged some authors to revise that paradigm. In the last three decades, there has been a parallel growing philosophical interest in investigating social sciences' epistemological and (...)
  • Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions.Kirsten Martin & Ari Waldman - 2022 - Big Data and Society 9 (1).
    The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions (...)
  • The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
  • Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions.Kirsten Martin & Ari Waldman - 2022 - Journal of Business Ethics 183 (3):653-670.
    Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance (...)
  • Against “Democratizing AI”.Johannes Himmelreich - 2023 - AI and Society 38 (4):1333-1346.
    This paper argues against the call to democratize artificial intelligence (AI). Several authors demand to reap purported benefits that rest in direct and broad participation: In the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims (...)
  • The ethical use of artificial intelligence in human resource management: a decision-making framework.Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly inputting into various human resource management (HRM) functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of (...)
  • Ethical dilemmas are really important to potential adopters of autonomous vehicles.Tripat Gill - 2021 - Ethics and Information Technology 23 (4):657-673.
    The ethical dilemma (ED) of whether autonomous vehicles (AVs) should protect the passengers or pedestrians when harm is unavoidable has been widely researched and debated. Several behavioral scientists have sought public opinion on this issue, based on the premise that EDs are critical to resolve for AV adoption. However, many scholars and industry participants have downplayed the importance of these edge cases. Policy makers also advocate a focus on higher-level ethical principles rather than on a specific solution to EDs. But conspicuously (...)
  • Psychological consequences of legal responsibility misattribution associated with automated vehicles.Peng Liu, Manqing Du & Tingting Li - 2021 - Ethics and Information Technology 23 (4):763-776.
    A human driver and an automated driving system might share control of automated vehicles in the near future. This raises many concerns associated with the assignment of responsibility for negative outcomes caused by them; one is that the human driver might be required to bear the brunt of moral and legal responsibilities. The psychological consequences of responsibility misattribution have not yet been examined. We designed a hypothetical crash similar to Uber’s 2018 fatal crash. We incorporated five legal responsibility attributions. Participants (...)
  • Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy.Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots (...)
  • (1 other version)How to design a governable digital health ecosystem.Jessica Morley & Luciano Floridi - manuscript
    It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that (...)
  • Ethics-based auditing of automated decision-making systems: nature, scope, and limitations.Jakob Mökander, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2021 - Science and Engineering Ethics 27 (4):1–30.
    Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of (...)
  • (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
  • (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - manuscript
    As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the (...)
  • (1 other version)The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence.Niva Elkin-Koren - 2020 - Big Data and Society 7 (2).
    In recent years, artificial intelligence has been deployed by online platforms to prevent the upload of allegedly illegal content or to remove unwarranted expressions. These systems are trained to spot objectionable content and to remove it, block it, or filter it out before it is even uploaded. Artificial intelligence filters offer a robust approach to content moderation which is shaping the public sphere. This dramatic shift in norm setting and law enforcement is potentially game-changing for democracy. Artificial intelligence filters carry (...)
  • Designing for human rights in AI.Jeroen van den Hoven & Evgeni Aizenberg - 2020 - Big Data and Society 7 (2).
    In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to (...)
  • Artificial intelligence, culture and education.Sergey B. Kulikov & Anastasiya V. Shirokova - 2021 - AI and Society 36 (1):305-318.
    Sequential transformative design of research (…, 2015; Groleau et al. in J Mental Health 16:731–741, 2007; Robson and McCartan in Real World Research: A Resource for Users of Social Research Methods in Applied Settings, Wiley, Chichester, 2016) allows testing a group of theoretical assumptions about the connections of artificial intelligence with culture and education. In the course of research, semiotics ensures the description of self-organizing systems of cultural signs and symbols in terms of artificial intelligence as a special set of (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance.Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • The limits of empowerment: how to reframe the role of mHealth tools in the healthcare ecosystem.Jessica Morley & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (3):1159-1183.
    This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk (...)
  • (1 other version)Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - 2024 - Science and Engineering Ethics 30 (6):1-19.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness (...)
  • Scapegoat-in-the-Loop? Human Control over Medical AI and the (Mis)Attribution of Responsibility.Robert Ranisch - 2024 - American Journal of Bioethics 24 (9):116-117.
    The paper by Salloch and Eriksen (2024) offers an insightful contribution to the ethical debate on Machine Learning-driven Clinical Decision Support Systems (ML_CDSS) and provides much-needed conce...
  • Challenges of responsible AI in practice: scoping review and recommended actions.Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors have triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • The problem with trust: on the discursive commodification of trust in AI.Steffen Krüger & Christopher Wilson - forthcoming - AI and Society:1-9.
    This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users to harvest data at scale and leads to the discursive construction of trust as an essential good on a par with data (...)
  • The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence.Jake B. Telkamp & Marc H. Anderson - 2022 - Journal of Business Ethics 178 (4):961-976.
    Organizations are making massive investments in artificial intelligence, and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing is the ethical controversies they raise. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Expanding Nallur's Landscape of Machine Implemented Ethics.William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics” (2020). He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • Artificial intelligence and medical research databases: ethical review by data access committees.Nina Hallowell, Darren Treanor, Daljeet Bansal, Graham Prestwich, Bethany J. Williams & Francis McKay - 2023 - BMC Medical Ethics 24 (1):1-7.
    Background: It has been argued that ethics review committees—e.g., Research Ethics Committees, Institutional Review Boards, etc.—have weaknesses in reviewing big data and artificial intelligence research. For instance, they may, due to the novelty of the area, lack the relevant expertise for judging collective risks and benefits of such research, or they may exempt it from review in instances involving de-identified data. Main body: Focusing on the example of medical research databases, we highlight here ethical issues around de-identified data sharing which motivate the (...)
  • Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable.Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans & Rhianne Jones - 2021 - Journal of Responsible Technology 7-8 (C):100017.
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • ‘Toward a Global Social Contract for Trade’ - a Rawlsian approach to Blockchain Systems Design and Responsible Trade Facilitation in the New Bretton Woods era.Arnold Lim & Enrong Pan - 2021 - Journal of Responsible Technology 6 (C):100011.
  • Artificial Intelligence Regulation: a framework for governance.Patricia Gomes Rêgo de Almeida, Carlos Denner dos Santos & Josivania Silva Farias - 2021 - Ethics and Information Technology 23 (3):505-525.
    This article develops a conceptual framework for regulating Artificial Intelligence (AI) that encompasses all stages of modern public policy-making, from the basics to a sustainable governance. Based on a vast systematic review of the literature on Artificial Intelligence Regulation (AIR) published between 2010 and 2020, a dispersed body of knowledge loosely centred around the “framework” concept was organised, described, and pictured for better understanding. The resulting integrative framework encapsulates 21 prior depictions of the policy-making process, aiming to achieve gold-standard societal (...)
  • Democratizing AI from a Sociotechnical Perspective.Merel Noorman & Tsjalling Swierstra - 2023 - Minds and Machines 33 (4):563-586.
    Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether (...)
  • Engineering responsibility.Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps which artificial systems create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advance such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • (1 other version)The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2022 - AI and Society 37 (1):215-230.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • Equity Issues in Educational Data Mining from an Epistemological Perspective.Esdras L. Bispo Jr - unknown
    Educational Data Mining (EDM) has shown interesting scientific results lately. However, little has been discussed about philosophical questions regarding the type of knowledge produced in this area. Bispo Jr. (2019) presented two epistemological issues that emerged from EDM research. This paper aims to deepen this discussion by presenting the equity issues that originated from this initial work.
  • No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives.Johannes Himmelreich - 2022 - Ethics and Information Technology 24 (4):1-12.
    Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper is on the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments—those for passenger ethics settings and for mandatory ethics settings respectively—and argues that they fail. Although the arguments are not successful, they (...)
  • Knowledge and support for AI in the public sector: a deliberative poll experiment.Sveinung Arnesen, Troy Saghaug Broderstad, James S. Fishkin, Mikael Poul Johannesson & Alice Siu - forthcoming - AI and Society:1-17.
    We are on the verge of a revolution in public sector decision-making processes, where computers will take over many of the governance tasks previously assigned to human bureaucrats. Governance decisions based on algorithmic information processing are increasing in numbers and scope, contributing to decisions that impact the lives of individual citizens. While significant attention in the recent few years has been devoted to normative discussions on fairness, accountability, and transparency related to algorithmic decision-making based on artificial intelligence, less is known (...)
  • AI Assistants and the Paradox of Internal Automaticity.William A. Bauer & Veljko Dubljević - 2019 - Neuroethics 13 (3):303-310.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)