184 entries found (showing 1–50)
  1. Explainable AI is Indispensable in Areas Where Liability is an Issue. Nelson Brochado - manuscript
    What is explainable artificial intelligence and why is it indispensable in areas where liability is an issue?
  2. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance (ESG) investing; however, this paper argues that conventional ESG frameworks (...)
  3. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing. Benjamin Davies & Thomas Douglas - manuscript
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the (...)
  4. Algorithmic Fairness From a Non-Ideal Perspective. Sina Fazelpour & Zachary C. Lipton - manuscript
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    2 citations
  5. The Debate on the Ethics of AI in Health Care: A Reconstruction and Critical Review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
  6. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies From the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem to be addressed with the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  7. AI Alignment Problem: “Human Values” Don't Actually Exist. Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  8. Back to the Future: Curing Past Sufferings and S-Risks Via Indexical Uncertainty. Alexey Turchin - manuscript
    The long, unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  9. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  10. Harmonizing Law and Innovations in Nanomedicine, Artificial Intelligence (AI) and Biomedical Robotics: A Central Asian Perspective. Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    Recent progress in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies affect the functionality of “law in action,” which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers to harmonize AI, nanomedicine and robotics within the Central Asian legal system.
  11. The Ethics of Digital Well-Being: A Multidisciplinary Perspective. Christopher Burr & Luciano Floridi - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective. Springer.
    This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they (...)
  12. Supporting Human Autonomy in AI Systems. Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects of digital experiences on human autonomy are (...)
  13. Shortcuts to Artificial Intelligence. Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals and are by now accepted as standard. They correspond to technical shortcuts aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    1 citation
  14. Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction. Barry Francis Dainton, Will Slocombe & Attila Tanyi (eds.) - forthcoming - Springer.
    Bringing together literary scholars, computer scientists, ethicists, philosophers of mind, and scholars from affiliated disciplines, this collection of essays offers important and timely insights into the pasts, presents, and, above all, possible futures of Artificial Intelligence. This book covers topics such as ethics and morality, identity and selfhood, and broader issues about AI, addressing questions about the individual, social, and existential impacts of such technologies. Through the works of science fiction authors such as Isaac Asimov, Stanislaw Lem, Ann Leckie, Iain (...)
  15. The Ethics of Algorithmic Outsourcing in Everyday Life. John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  16. The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    8 citations
  17. Freedom in an Age of Algocracy. John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus (...)
  18. Sexuality. John Danaher - forthcoming - In Markus Dubber, Frank Pasquale & Sunit Das (eds.), Oxford Handbook of the Ethics of Artificial Intelligence. Oxford: Oxford University Press.
    Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes (...)
  19. Artificial Intelligence and Legal Disruption: A New Model for Analysis. John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
  20. Automation, Work and the Achievement Gap. John Danaher & Sven Nyholm - forthcoming - AI and Ethics.
    Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability (...)
  21. The Ethics of Biomedical Military Research: Therapy, Prevention, Enhancement, and Risk. Alexandre Erler & Vincent C. Müller - forthcoming - In Daniel Messelken & David Winkler (eds.), Health care in contexts of risk, uncertainty, and hybridity. Berlin: Springer. pp. 1-20.
    What proper role should considerations of risk, particularly to research subjects, play when it comes to conducting research on human enhancement in the military context? We introduce the currently visible military enhancement techniques (1) and the standard discussion of risk for these (2), in particular what we refer to as the ‘Assumption’, which states that the demands for risk-avoidance are higher for enhancement than for therapy. We challenge the Assumption through the introduction of three categories of enhancements (3): therapeutic, preventive, (...)
  22. Who Should Bear the Risk When Self‐Driving Vehicles Crash? Antti Kauppinen - forthcoming - Journal of Applied Philosophy.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
    3 citations
  23. Digital Well-Being and Manipulation Online. Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not been sufficiently addressed yet. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
    3 citations
  24. (Online) Manipulation: Sometimes Hidden, Always Careless. Michael Klenk - forthcoming - Review of Social Economy.
    Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct (...)
    1 citation
  25. Ethics of Artificial Intelligence. Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  26. History of Digital Ethics. Vincent C. Müller - forthcoming - In Oxford handbook of digital ethics. Oxford University Press. pp. 1-18.
    Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that led to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with the digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: ‘pre-modernity’ prior to (...)
  27. Automation, Basic Income and Merit. Katharina Nieswandt - forthcoming - In Keith Breen & Jean-Philippe Deranty (eds.), Whither Work? The Politics and Ethics of Contemporary Work.
    A recent wave of academic and popular publications say that utopia is within reach: Automation will progress to such an extent and include so many high-skill tasks that much human work will soon become superfluous. The gains from this highly automated economy, authors suggest, could be used to fund a universal basic income (UBI). Today's employees would live off the robots' products and spend their days on intrinsically valuable pursuits. I argue that this prediction is unlikely to come true. Historical (...)
  28. Automated Influence and the Challenge of Cognitive Security. Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  29. What We Informationally Owe Each Other. Alan Rubel, Clinton Castro & Adam Pham - forthcoming - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge: Cambridge University Press. pp. 21-42.
    One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral for respecting (...)
  30. How AI Can AID Bioethics. Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how (...)
  31. Is There an App for That?: Ethical Issues in the Digital Mental Health Response to COVID-19. Joshua August Skorburg & Josephine Yam - forthcoming - American Journal of Bioethics Neuroscience:1-14.
    As COVID-19 spread, clinicians warned of mental illness epidemics within the coronavirus pandemic. Funding for digital mental health is surging and researchers are calling for widespread adoption to address the mental health sequelae of COVID-19. We consider whether these technologies improve mental health outcomes and whether they exacerbate existing health inequalities laid bare by the pandemic. We argue the evidence for efficacy is weak and the likelihood of increasing inequalities is high. First, we review recent trends in digital (...)
  32. Ethical Issues in Text Mining for Mental Health. Joshua Skorburg & Phoebe Friesen - forthcoming - In M. Dehghani & R. Boyd (eds.), The Atlas of Language Analysis in Psychology.
    A recent systematic review of Machine Learning (ML) approaches to health data, containing over 100 studies, found that the most investigated problem was mental health (Yin et al., 2019). Relatedly, recent estimates suggest that between 165,000 and 325,000 health and wellness apps are now commercially available, with over 10,000 of those designed specifically for mental health (Carlo et al., 2019). In light of these trends, the present chapter has three aims: (1) provide an informative overview of some of the recent (...)
    1 citation
  33. Measuring Automated Influence: Between Empirical Evidence and Ethical Values. Daniel Susser & Vincent Grimaldi - forthcoming - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from both empirical researchers, on one hand, and critical scholars and policymakers on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research investigating the effectiveness of these (...)
  34. Iudicium Ex Machinae – The Ethical Challenges of Automated Decision-Making in Criminal Sentencing. Frej Thomsen - forthcoming - In Julian Roberts & Jesper Ryberg (eds.), Principled Sentencing and Artificial Intelligence. Oxford: Oxford University Press.
    Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. The (...)
  35. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - forthcoming - International Journal of Social Robotics:1-15.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    1 citation
  36. Moral Zombies: Why Algorithms Are Not Moral Agents. Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  37. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  38. From Human Resources to Human Rights: Impact Assessments for Hiring Algorithms. Josephine Yam & Joshua August Skorburg - forthcoming - Ethics and Information Technology.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. (...)
  39. Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are finding it difficult to answer questions such as: “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be called into question because, in the near future, scientists (arguably the most rational people) will (...)
  40. Proceed with Caution. Annette Zimmermann & Chad Lee-Stronach - forthcoming - Canadian Journal of Philosophy.
    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely (...)
  41. The Algorithm Audit: Scoring the Algorithms That Score Us. Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
  42. What's Fair About Individual Fairness? Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that (...)
  43. Bigger Isn't Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models. Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21 Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks, but a side effect of these advances has been the adoption of a "bigger is always better" paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied (...)
  44. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-Making Systems. Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  45. Osaammeko rakentaa moraalisia toimijoita? Antti Kauppinen - 2021 - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    In order to be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full-fledged moral agents, we also understand why certain acts are wrong, and we are thus able to flexibly adapt our behavior to different situations. I argue that no AI systems are on the horizon that could genuinely care about doing the right thing or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot offload responsibility for their actions onto machines. Instead, we should aim to build artificial right-doers: systems that do not (...)
  46. Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI'21 Companion). New York, NY, USA:
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
  47. Tecno-especies: la humanidad que se hace a sí misma y los desechables. Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are redefining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  48. What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296:103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these "stakeholders' desiderata") in a variety of contexts. However, the literature on XAI is vast and spread across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  49. African Reasons Why Artificial Intelligence Should Not Maximize Utility. Thaddeus Metz - 2021 - In Beatrice Okyere-Manu (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Palgrave Macmillan. pp. 55-72.
    Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is bad for (...)
  50. Ethical Aspects of Multi-Stakeholder Recommendation Systems. Silvia Milano, Mariarosaria Taddeo & Luciano Floridi - 2021 - The Information Society 37 (1):35-45.
    This article analyses the ethical aspects of multistakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in a RS? How are their interests taken into account when formulating a recommendation? And, what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare (...)