1 — 50 / 140
  1. Explainable AI is Indispensable in Areas Where Liability is an Issue.Nelson Brochado - manuscript
    What is explainable artificial intelligence and why is it indispensable in areas where liability is an issue?
  2. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
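    The scoring-and-summing mechanism described in entry 2 can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration only, not Brusseau's actual rubric: the nine indicators are taken as given numeric ratings, and the function names, rating scale, and guidance thresholds are all assumptions introduced here.

      def ai_human_impact_score(indicator_ratings: list[float]) -> float:
          """Sum the nine per-indicator ratings (e.g. each on a 0-10 scale)."""
          assert len(indicator_ratings) == 9, "the model uses nine indicators"
          return sum(indicator_ratings)

      def investment_guidance(total: float) -> str:
          """Map the summed score to a coarse signal (thresholds assumed)."""
          if total >= 70:
              return "strongly human-centering"
          if total >= 40:
              return "mixed"
          return "weakly human-centering"

      # Hypothetical ratings for one company across the nine indicators.
      ratings = [6, 7, 5, 8, 6, 4, 7, 5, 6]
      print(investment_guidance(ai_human_impact_score(ratings)))  # 54 -> "mixed"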
  3. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - manuscript
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the (...)
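    The "perfect proxy" worry discussed in entry 3 can be illustrated with a small experiment. The sketch below uses synthetic data and hypothetical feature names (it is not the tool or dataset the authors discuss); it shows that a model trained on nominally race-blind features can still recover the protected attribute from correlated proxies.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 5_000

      # Protected attribute, withheld from the feature matrix below.
      race = rng.integers(0, 2, size=n)

      # Hypothetical proxy features correlated with the protected attribute,
      # e.g. neighbourhood and prior-arrest counts shaped by past policing.
      neighbourhood = race + rng.normal(0, 0.6, size=n)
      prior_arrests = rng.poisson(1 + 2 * race)
      age = rng.normal(35, 10, size=n)

      X = np.column_stack([neighbourhood, prior_arrests, age])  # race excluded

      # A classifier trained only on the "race-blind" features still predicts
      # the protected attribute far better than chance.
      proxy_model = LogisticRegression(max_iter=1000).fit(X, race)
      print(f"race recovered from proxies with accuracy {proxy_model.score(X, race):.2f}")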
  4. Algorithmic Fairness From a Non-Ideal Perspective.Sina Fazelpour & Zachary C. Lipton - manuscript
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
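    Entry 4 refers to metrics that quantify deviations from statistical parities. As a minimal sketch (under assumptions introduced here, not the authors' own formulation), two such metrics, the demographic parity difference and the equal opportunity difference, can be computed from model predictions as follows; the arrays are hypothetical.

      import numpy as np

      def demographic_parity_diff(y_pred, group):
          """Absolute difference in positive-prediction rates between groups 0 and 1."""
          return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

      def equal_opportunity_diff(y_true, y_pred, group):
          """Absolute difference in true-positive rates between groups 0 and 1."""
          tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
          return abs(tpr(1) - tpr(0))

      # Hypothetical labels, predictions, and group membership.
      y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
      y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
      group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

      print(demographic_parity_diff(y_pred, group))            # 0.0
      print(equal_opportunity_diff(y_true, y_pred, group))     # ~0.33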
  5. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies From the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem, dealt with using the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  6. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  7. AI Alignment Problem: “Human Values” Don’t Actually Exist.Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
  8. Back to the Future: Curing Past Sufferings and S-Risks Via Indexical Uncertainty.Alexey Turchin - manuscript
    The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  9. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - manuscript
    One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-box nature of their decision-making structures and logical pathways. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper proposes the Value Sensitive Design (VSD) approach as a principled framework for incorporating these values in design. The example of autonomous vehicles is used as a (...)
  10. The Ethics of Digital Well-Being: A Multidisciplinary Perspective.Christopher Burr & Luciano Floridi - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective. Springer.
    This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they (...)
  11. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  12. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    1 citation
  13. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    4 citations
  14. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  15. Sexuality.John Danaher - forthcoming - In Markus Dubber, Frank Pasquale & Sunit Das (eds.), Oxford Handbook of the Ethics of Artificial Intelligence. Oxford: Oxford University Press.
    Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes (...)
  16. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  17. The Algorithm Audit: Scoring the Algorithms That Score Us.Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Big Data and Society.
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high-level to be put into practice without further guidance, or they focus on very specific notions of fairness or transparency that don’t consider multiple stakeholders or the (...)
  18. Osaammeko rakentaa moraalisia toimijoita? [Can We Build Moral Agents?]Antti Kauppinen - forthcoming - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    In order for us to be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full moral agents, we also understand why some acts are wrong, and we are thus able to flexibly adapt our behaviour to different situations. I argue that there are no AI systems on the horizon that could genuinely care about doing right or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot offload responsibility for their actions onto machines. Instead, we must aim to build artificial right-doers: systems that do not (...)
  19. Who Should Bear the Risk When Self‐Driving Vehicles Crash?Antti Kauppinen - forthcoming - Journal of Applied Philosophy.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
    2 citations
  20. Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, have not yet been sufficiently addressed. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
  21. Ethical Aspects of Multi-Stakeholder Recommendation Systems.Silvia Milano, Mariarosaria Taddeo & Luciano Floridi - forthcoming - The Information Society.
    This article analyses the ethical aspects of multistakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in a RS? How are their interests taken into account when formulating a recommendation? And, what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare (...)
  22. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  23. Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  24. Democratic Obligations and Technological Threats to Legitimacy: PredPol, Cambridge Analytica, and Internet Research Agency.Alan Rubel, Clinton Castro & Adam Pham - forthcoming - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge University Press:
    So far in this book, we have examined algorithmic decision systems from three autonomy-based (...)
  25. What We Informationally Owe Each Other.Alan Rubel, Clinton Castro & Adam Pham - forthcoming - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge University Press:
    One important criticism of algorithmic systems is that they lack transparency. Such systems can (...)
  26. Ethical Issues in Text Mining for Mental Health.Joshua Skorburg & Phoebe Friesen - forthcoming - In M. Dehghani & R. Boyd (ed.), The Atlas of Language Analysis in Psychology.
    A recent systematic review of Machine Learning (ML) approaches to health data, containing over 100 studies, found that the most investigated problem was mental health (Yin et al., 2019). Relatedly, recent estimates suggest that between 165,000 and 325,000 health and wellness apps are now commercially available, with over 10,000 of those designed specifically for mental health (Carlo et al., 2019). In light of these trends, the present chapter has three aims: (1) provide an informative overview of some of the recent (...)
    1 citation
  27. Iudicium Ex Machinae – The Ethical Challenges of Automated Decision-Making in Criminal Sentencing.Frej Thomsen - forthcoming - In Julian Roberts & Jesper Ryberg (eds.), Principled Sentencing and Artificial Intelligence. Oxford: Oxford University Press.
    Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and bias as unpersuasive or inapplicable. The (...)
  28. Mapping Value Sensitive Design Onto AI for Social Good Principles.Steven Umbrello & Ibo van de Poel - forthcoming - AI and Ethics.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  29. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  30. Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists face difficulty answering questions like: “Do humanoid robots have the same rights as humans? Are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be called into question because, in the near future, scientists (considered the most rational people) will (...)
  31. Empowerment or Engagement? Digital Health Technologies for Mental Healthcare.Christopher Burr & Jessica Morley - 2020 - In Christopher Burr & Silvia Milano (eds.), The 2019 Yearbook of the Digital Ethics Lab. pp. 67-88.
    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key (...)
  32. The Rise of Artificial Intelligence and the Crisis of Moral Passivity.Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, firstly, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our (...)
    1 citation
  33. Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial [Dynamic Models Applied to Value Learning in Artificial Intelligence].Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da PUCRS 65 (2):1-15.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  34. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  35. Presumptuous Aim Attribution, Conformity, and the Ethics of Artificial Social Cognition.Owen C. King - 2020 - Ethics and Information Technology 22 (1):25-37.
    Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines the ethics (...)
  36. Consequences of Unexplainable Machine Learning for the Notions of a Trusted Doctor and Patient Autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what (...)
  37. Towards a Middle-Ground Theory of Agency for Artificial Intelligence.Louis Longin - 2020 - In M. Nørskov, J. Seibt & O. Quick (eds.), Culturally Sustainable Social Robotics — Proceedings of Robophilosophy 2020. Series Frontiers of AI and Its Applications. Amsterdam, Netherlands: pp. 17-26.
    The recent rise of artificial intelligence (AI) systems has led to intense discussions on their ability to achieve higher-level mental states or the ethics of their implementation. One question, which so far has been neglected in the literature, is the question of whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I will argue for a gradual concept of agency because (...)
  38. Excavating “Excavating AI”: The Elephant in the Gallery.Michael J. Lyons - 2020 - arXiv 2009:1-15.
    Two art exhibitions, “Training Humans” and “Making Faces,” and the accompanying essay “Excavating AI: The politics of images in machine learning training sets” by Kate Crawford and Trevor Paglen are making a substantial impact on discourse in social and mass media networks, and in some scholarly circles. Critical scrutiny reveals, however, a self-contradictory stance regarding informed consent for the use of facial images, as well as serious flaws in their critique of ML training sets. Our analysis underlines the non-negotiability (...)
    1 citation
  39. Recommender Systems and Their Ethical Challenges.Silvia Milano, Mariarosaria Taddeo & Luciano Floridi - 2020 - AI and Society (4):957-967.
    This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
    2 citations
  40. BCI-Mediated Behavior, Moral Luck, and Punishment.Daniel J. Miller - 2020 - American Journal of Bioethics Neuroscience 11 (1):72-74.
    An ongoing debate in the philosophy of action concerns the prevalence of moral luck: instances in which an agent’s moral responsibility is due, at least in part, to factors beyond his control. I point to a unique problem of moral luck for agents who depend upon Brain Computer Interfaces (BCIs) for bodily movement. BCIs may misrecognize a voluntarily formed distal intention (e.g., a plan to commit some illicit act in the future) as a control command to perform some overt behavior (...)
  41. The Temptation of Data-Enabled Surveillance: Are Universities the Next Cautionary Tale?Alan Rubel - 2020 - Communications of the ACM 63 (4):22-24.
    There is increasing concern about “surveillance capitalism,” whereby for-profit companies generate value from data, while individuals are unable to resist (Zuboff 2019). Non-profits using data-enabled surveillance receive less attention. Higher education institutions (HEIs) have embraced data analytics, but the wide latitude that private, profit-oriented enterprises have to collect data is inappropriate. HEIs have a fiduciary relationship to students, not a narrowly transactional one (see Jones et al, forthcoming). They are responsible for facets of student life beyond education. In addition to (...)
  42. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in (...)
  43. AI Methods in Bioethics.Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 11 (1):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  44. Ethical Considerations for Digitally Targeted Public Health Interventions.Daniel Susser - 2020 - American Journal of Public Health 110 (S3).
    Public health scholars and public health officials increasingly worry about health-related misinformation online, and they are searching for ways to mitigate it. Some have suggested that the tools of digital influence are themselves a possible answer: we can use targeted, automated digital messaging to counter health-related misinformation and promote accurate information. In this commentary, I raise a number of ethical questions prompted by such proposals—and familiar from the ethics of influence and ethics of AI—highlighting hidden costs of targeted digital messaging (...)
  45. Empathy and Instrumentalization: Late Ancient Cultural Critique and the Challenge of Apparently Personal Robots.Jordan Joseph Wales - 2020 - In Marco Nørskov, Johanna Seibt & Oliver Santiago Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam: pp. 114-124.
    According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) or numbly (...)
  46. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
    2 citations
  47. Technological Innovation and Natural Law.Philip Woodward - 2020 - Philosophia Reformata 85 (2):138-156.
    I discuss three tiers of technological innovation: mild innovation, or the acceleration by technology of a human activity aimed at a good; moderate innovation, or the obviation by technology of an activity aimed at a good; and radical innovation, or the altering by technology of the human condition so as to change what counts as a good. I argue that it is impossible to morally assess proposed innovations within any of these three tiers unless we rehabilitate a natural-law ethical framework. (...)
  48. Права и свободы человека и гражданина в контексте развития и внедрения систем искусственного интеллекта [Human and Civil Rights and Freedoms in the Context of the Development and Implementation of Artificial Intelligence Systems].Valentin Balanovskiy - 2019 - In Конституция и общественный прогресс. Вторые Прокопьевские чтения. pp. 209-217.
    The author considers the ethical and legal aspects of the development of AI systems. He examines the ethical aspect through the prism of Kant’s philosophy and outlines the moral prospects of (quasi)intelligent robots. The author considers the legal aspect in the context of the normative regulation of risks that arise with the creation of AI systems. In conclusion, the author suggests a forthcoming transformation of the legal system because of a new type of legal act that combines the classical form of a legal act (...)
  49. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  50. Strange Loops: Apparent Versus Actual Human Involvement in Automated Decision-Making.Kiel Brennan-Marquez, Karen Levy & Daniel Susser - 2019 - Berkeley Technology Law Journal 34 (3).
    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the (...)