References
  • Manipulation, Algorithm Design, and the Multiple Dimensions of Autonomy.Reuben Sass - 2024 - Philosophy and Technology 37 (3):1-20.
    Much discussion of the ethics of algorithms has focused on harms to autonomy—especially harms stemming from manipulation. Nonetheless, although manipulation can often be harmful, we suggest that in certain contexts it may not impair autonomy. To fully assess the impact of algorithm design on autonomy, we argue for a need to move beyond a focus on manipulation towards a multidimensional account of autonomy itself. Drawing on the autonomy literature and recent data ethics, we propose a novel account which takes autonomy (...)
  • AI, Radical Ignorance, and the Institutional Approach to Consent.Etye Steinberg - 2024 - Philosophy and Technology 37 (3):1-26.
    More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services’ Terms and Conditions clause. Current advances introduce the ability of AI to evolve and change its own modus operandi over time in such a way that we cannot know, at the moment of consent, what it is in the future to which we are now agreeing. Therefore, informed consent is impossible regarding certain kinds of AI. Call this (...)
  • Filter Bubbles and the Unfeeling: How AI for Social Media Can Foster Extremism and Polarization.Ermelinda Rodilosso - 2024 - Philosophy and Technology 37 (2):1-21.
    Social media have undoubtedly changed our ways of living. Their presence concerns an increasing number of users (over 4.74 billion) and pervasively expands into the most diverse areas of human life. Marketing, education, news, data, and sociality are just a few of the many areas in which social media now play a central role. Recently, some attention toward the link between social media and political participation has emerged. Works in the field of artificial intelligence have already pointed out that there (...)
  • Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is undermined. However, there are alternatives to prevent the negative impact of (...)
  • AI and the need for justification (to the patient).Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Challenges of responsible AI in practice: scoping review and recommended actions.Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, (...)
  • Against the Double Standard Argument in AI Ethics.Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  • Jaz u odgovornosti u informatičkoj eri.Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We ascribe responsibility with the intention of achieving some goal. A commonplace in the philosophical literature is that a person can be attributed moral responsibility if at least two conditions are met: that the agent has control over their actions and that they are able to give reasons in support of their action. However, the fourth industrial revolution is characterized by sociotechnological phenomena that potentially confront us with the so-called responsibility gap problem. Discussions of responsibility in the context of artificial intelligence are marked by an unclear and indeterminate use of this concept. In order to (...)
  • Artificial Intelligence in the Colonial Matrix of Power.James Muldoon & Boxi A. Wu - 2023 - Philosophy and Technology 36 (4):1-24.
    Drawing on the analytic of the “colonial matrix of power” developed by Aníbal Quijano within the Latin American modernity/coloniality research program, this article theorises how a system of coloniality underpins the structuring logic of artificial intelligence (AI) systems. We develop a framework for critiquing the regimes of global labour exploitation and knowledge extraction that are rendered invisible through discourses of the purported universality and objectivity of AI. Through bringing the political economy literature on AI production into conversation with scholarly work (...)
  • Democratizing AI from a Sociotechnical Perspective.Merel Noorman & Tsjalling Swierstra - 2023 - Minds and Machines 33 (4):563-586.
    Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether (...)
  • Ethics of Artificial Intelligence.Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • The latent space of data ethics.Enrico Panai - forthcoming - AI and Society:1-19.
    In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has corresponded to an improvement in computational power and in solving complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Promoting responsible AI : A European perspective on the governance of artificial intelligence in media and journalism.Colin Porlezza - 2023 - Communications 48 (3):370-394.
    Artificial intelligence and automation have become pervasive in news media, influencing journalism from news gathering to news distribution. As algorithms are increasingly determining editorial decisions, specific concerns have been raised with regard to the responsible and accountable use of AI-driven tools by news media, encompassing new regulatory and ethical questions. This contribution aims to analyze whether and to what extent the use of AI technology in news media and journalism is currently regulated and debated within the European Union and the (...)
  • Others’ information and my privacy: an ethical discussion.Yuanye Ma - 2023 - Journal of Information, Communication and Ethics in Society 21 (3):259-270.
    Purpose: Privacy has been understood as being about one’s own information; information that is not one’s own is not typically considered with regard to an individual’s privacy. This paper aims to draw attention to this issue for conceptualizing privacy when one’s privacy is breached by others’ information. Design/methodology/approach: To illustrate the issue that others’ information can breach one’s own privacy, this paper uses real-world applications of forensic genealogy and recommender systems to motivate the discussion. Findings: In both forensic genealogy and recommender (...)
  • Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI.Rajitha Ramanayake, Philipp Wicke & Vivek Nallur - 2023 - AI and Society 38 (2):801-813.
    We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems, and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans (...)
  • Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege.Susanne Beck, Michelle Faber & Simon Gerndt - 2023 - Ethik in der Medizin 35 (2):247-263.
    The rapid developments in the fields of artificial intelligence and robotics pose new challenges not only for ethics but also for the law, particularly in medicine and care. In principle, the use of AI has the potential to facilitate, if not improve, both medical treatment and adequate practice in the context of care. Administrative tasks, the monitoring of vital signs and their parameters, and the examination of tissue samples, for example, could proceed autonomously. In diagnostics and therapy, systems can (...)
  • Engineering Trustworthiness in the Online Environment.Hugh Desmond - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Lexington Books. pp. 215-237.
    Algorithm engineering is sometimes portrayed as a new 21st century return of manipulative social engineering. Yet algorithms are necessary tools for individuals to navigate online platforms. Algorithms are like a sensory apparatus through which we perceive online platforms: this is also why individuals can be subtly but pervasively manipulated by biased algorithms. How can we better understand the nature of algorithm engineering and its proper function? In this chapter I argue that algorithm engineering can be best conceptualized as a type (...)
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence.Athina Sachoulidou - forthcoming - Artificial Intelligence and Law:1-54.
    This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the (...)
  • Reframing data ethics in research methods education: a pathway to critical data literacy.Javiera Atenas, Leo Havemann & Cristian Timmermann - 2023 - International Journal of Educational Technology in Higher Education 20:11.
    This paper presents an ethical framework designed to support the development of critical data literacy for research methods courses and data training programmes in higher education. The framework we present draws upon our reviews of literature, course syllabi and existing frameworks on data ethics. For this research we reviewed 250 research methods syllabi from across the disciplines, as well as 80 syllabi from data science programmes to understand how or if data ethics was taught. We also reviewed 12 data ethics (...)
  • The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  • Politicizing Algorithms by Other Means: Toward Inquiries for Affective Dissensions.Florian Jaton & Dominique Vinck - 2023 - Perspectives on Science 31 (1):84-118.
    In this paper, we build upon Bruno Latour’s political writings to address the current impasse regarding algorithms in public life. We assert that the increasing difficulties in governing algorithms—be they qualified as “machine learning,” “big data,” or “artificial intelligence”—can be related to their current ontological thinness: deriving from constricted views on theoretical practices, algorithms’ standard definition as problem-solving computerized methods provides poor grips for affective dissensions. We then emphasize the role that historical and ethnographic studies of algorithms can potentially play (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Politics of data reuse in machine learning systems: Theorizing reuse entanglements.Louise Amoore, Mikkel Flyverbom, Kristian Bondo Hansen & Nanna Bonde Thylstrup - 2022 - Big Data and Society 9 (2).
    Policy discussions and corporate strategies on machine learning are increasingly championing data reuse as a key element in digital transformations. These aspirations are often coupled with a focus on responsibility, ethics and transparency, as well as emergent forms of regulation that seek to set demands for corporate conduct and the protection of civic rights. Protective measures include methods of traceability and assessments of ‘good’ and ‘bad’ datasets and algorithms that are considered to be traceable, stable and contained. However, (...)
  • Toward children-centric AI: a case for a growth model in children-AI interactions.Karolina La Fors - forthcoming - AI and Society:1-13.
    This article advocates for a hermeneutic model for children-AI interactions in which the desirable purpose of children’s interaction with artificial intelligence systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias within (...)
  • Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • A Hippocratic Oath for mathematicians? Mapping the landscape of ethics in mathematics.Dennis Müller, Maurice Chiodo & James Franklin - 2022 - Science and Engineering Ethics 28 (5):1-30.
    While the consequences of mathematically-based software, algorithms and strategies have become ever wider and better appreciated, ethical reflection on mathematics has remained primitive. We review the somewhat disconnected suggestions of commentators in recent decades with a view to piecing together a coherent approach to ethics in mathematics. Calls for a Hippocratic Oath for mathematicians are examined and it is concluded that while lessons can be learned from the medical profession, the relation of mathematicians to those affected by their work is (...)
  • The Effectiveness of Embedded Values Analysis Modules in Computer Science Education: An Empirical Study.Matthew Kopec, Meica Magnani, Vance Ricks, Roben Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, Ronald Sandler, Christo Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Kevin Mills & Mark Wells - 2023 - Big Data and Society 10 (1).
    Embedding ethics modules within computer science courses has become a popular response to the growing recognition that CS programs need to better equip their students to navigate the ethical dimensions of computing technologies like AI, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern’s program that embeds values analysis modules into CS courses. The resulting data suggest (...)
  • Algorithmic Political Bias—an Entrenchment Concern.Ulrik Franke - 2022 - Philosophy and Technology 35 (3):1-6.
    This short commentary on Peters identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan. Second, following Hacking, the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick, it is argued (...)
  • Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into operational AI reality with the involvement of social science specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the (...)
  • The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
  • Exposing implicit biases and stereotypes in human and artificial intelligence: state of the art and challenges with a focus on gender.Ludovica Marinucci, Claudia Mazzuca & Aldo Gangemi - 2023 - AI and Society 38 (2):747-761.
    Biases in cognition are ubiquitous. Social psychologists suggested biases and stereotypes serve a multifarious set of cognitive goals, while at the same time stressing their potential harmfulness. Recently, biases and stereotypes became the purview of heated debates in the machine learning community too. Researchers and developers are becoming increasingly aware of the fact that some biases, like gender and race biases, are entrenched in the algorithms some AI applications rely upon. Here, taking into account several existing approaches that address the (...)
  • Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  • Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  • Promises and Pitfalls of Algorithm Use by State Authorities.Maryam Amir Haeri, Kathrin Hartmann, Jürgen Sirsch, Georg Wenzelburger & Katharina A. Zweig - 2022 - Philosophy and Technology 35 (2):1-31.
    Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird’s eye view of the different key decisions that are to be taken when state actors decide to use an (...)
  • Machine learning and power relations.Jonne Maas - forthcoming - AI and Society.
    There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi’s conceptual framework (...)
  • The Ethics of Algorithms in Healthcare.Christina Oxholm, Anne-Marie S. Christensen & Anette S. Nielsen - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):119-130.
    The amount of data available to healthcare practitioners is growing, and the rapid increase in available patient data is becoming a problem for healthcare practitioners, as they are often unable to fully survey and process the data relevant for the treatment or care of a patient. Consequently, there are currently several efforts to develop systems that can aid healthcare practitioners with reading and processing patient data and, in this way, provide them with a better foundation for decision-making about the treatment (...)
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • Epistemic injustice and data science technologies.John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • Binding the Smart City Human-Digital System with Communicative Processes.Brandt Dainow - 2021 - In Michael Nagenborg, Taylor Stone, Margoth González Woge & Pieter E. Vermaas (eds.), Technology and the City: Towards a Philosophy of Urban Technologies. Springer Verlag. pp. 389-411.
    This chapter will explore the dynamics of power underpinning ethical issues within smart cities via a new paradigm derived from Systems Theory. The smart city is an expression of technology as a socio-technical system. The vision of the smart city contains a deep fusion of many different technical systems into a single integrated “ambient intelligence” (ETICA Project, 2010, p. 102). Citizens of the smart city will not experience a succession of different technologies, but a single intelligent and responsive environment through (...)
  • (1 other version)The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2022 - AI and Society 37 (1):215-230.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda.Anna Lena Hunkenschroer & Christoph Luetge - 2022 - Journal of Business Ethics 178 (4):977-1007.
    Companies increasingly deploy artificial intelligence technologies in their personnel recruiting and selection process to streamline it, making it faster and more efficient. AI applications can be found in various stages of recruiting, such as writing job ads, screening of applicant resumes, and analyzing video interviews via face recognition software. As these new technologies significantly impact people’s lives and careers but often trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. However, given the novelty of AI (...)
  • People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency.Johanna Jauernig, Matthias Uhl & Gari Walkowitz - 2022 - Philosophy and Technology 35 (1):1-25.
    We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study what role the desire for human discretion in moral decision-making plays. This seems justified in light of evidence suggesting that people might not doubt the quality of algorithmic decisions, but still reject them. In our first study, we found that people prefer humans with decision-making discretion to (...)
  • (1 other version)The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • A Code of Digital Ethics: laying the foundation for digital ethics in a science and technology company.Sarah J. Becker, André T. Nemat, Simon Lucas, René M. Heinitz, Manfred Klevesath & Jean Enno Charton - 2023 - AI and Society 38 (6):2629-2639.
    The rapid and dynamic nature of digital transformation challenges companies that wish to develop and deploy novel digital technologies. Like other actors faced with this transformation, companies need to find robust ways to ethically guide their innovations and business decisions. Digital ethics has recently featured in a plethora of both practical corporate guidelines and compilations of high-level principles, but there remains a gap concerning the development of sound ethical guidance in specific business contexts. As a multinational science and technology company (...)
  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • Remote Assessment of Depression Using Digital Biomarkers From Cognitive Tasks.Regan L. Mandryk, Max V. Birk, Sarah Vedress, Katelyn Wiley, Elizabeth Reid, Phaedra Berger & Julian Frommel - 2021 - Frontiers in Psychology 12.
    We describe the design and evaluation of a sub-clinical digital assessment tool that integrates digital biomarkers of depression. Based on three standard cognitive tasks on which people with depression have been known to perform differently than a control group, we iteratively designed a digital assessment tool that could be deployed outside of laboratory contexts, in uncontrolled home environments on computer systems with widely varying system characteristics. We conducted two online studies, in which participants used the assessment tool in their own (...)
  • Predictive privacy: towards an applied ethics of data analytics.Rainer Mühlhoff - 2021 - Ethics and Information Technology 23 (4):675-690.
    Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics if it is used to predict sensitive information about single individuals or (...)
  • Speeding up to keep up: exploring the use of AI in the research process.Jennifer Chubb, Peter Cowling & Darren Reed - 2022 - AI and Society 37 (4):1439-1457.
    The science of intelligent machines has a long history, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, a role which is still relatively under-explored. This empirical paper explores interviews with leading scholars on the potential impact of AI on research practice and culture through deductive, thematic analysis to (...)