References
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concepts of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • Artificial Agency and the Game of Semantic Extension. Fabio Fossa - 2021 - Interdisciplinary Science Reviews 46 (4):440-457.
    Artificial agents are commonly described by using words that traditionally belong to the semantic field of organisms, particularly of animal and human life. I call this phenomenon the game of semantic extension. However, the semantic extension of words as crucial as “autonomous”, “intelligent”, “creative”, “moral”, and so on, is often perceived as unsatisfactory, which is signalled with the extensive use of inverted commas or other syntactical cues. Such practice, in turn, has provoked harsh criticism that usually refers back to the (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • Explaining Epistemic Opacity. Ramón Alvarado - unknown
    Conventional accounts of epistemic opacity, particularly those that stem from the definitive work of Paul Humphreys, typically point to limitations on the part of epistemic agents to account for the distinct ways in which systems, such as computational methods and devices, are opaque. They point, for example, to the lack of technical skill on the part of an agent, the failure to meet standards of best practice, or even the nature of an agent as reasons why epistemically relevant elements of (...)
  • How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good (AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • A philosophical perspective on visualization for digital humanities. Hein Van Den Berg, Arianna Betti, Thom Castermans, Rob Koopman, Bettina Speckmann, K. A. B. Verbeek, Titia Van der Werf, Shenghui Wang & Michel A. Westenberg - 2018 - 3rd Workshop on Visualization for the Digital Humanities.
    In this position paper, we describe a number of methodological and philosophical challenges that arose within our interdisciplinary Digital Humanities project CatVis, which is a collaboration between applied geometric algorithms and visualization researchers, data scientists working at OCLC, and philosophers who have a strong interest in the methodological foundations of visualization research. The challenges we describe concern aspects of one single epistemic need: that of methodologically securing (an increase in) trust in visualizations. We discuss the lack of ground truths in (...)
  • Accessing Online Data for Youth Mental Health Research: Meeting the Ethical Challenges. Elvira Perez Vallejos, Ansgar Koene, Christopher James Carter, Daniel Hunt, Christopher Woodard, Lachlan Urquhart, Aislinn Bergin & Ramona Statache - 2019 - Philosophy and Technology 32 (1):87-110.
    This article addresses the general ethical issues of accessing online personal data for research purposes. The authors discuss the practical aspects of online research with a specific case study that illustrates the ethical challenges encountered when accessing data from Kooth, an online youth web-counselling service. This paper firstly highlights the relevance of a process-based approach to ethics when accessing highly sensitive data and then discusses the ethical considerations and potential challenges regarding the accessing of public data from Digital Mental Health (...)
  • Designing the Health-related Internet of Things: Ethical Principles and Guidelines. Brent Mittelstadt - 2017 - Information 8 (3):77.
    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be (...)
  • Ethics of the health-related internet of things: a narrative review. Brent Mittelstadt - 2017 - Ethics and Information Technology 19 (3):1-19.
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many (...)
  • What is data ethics? Luciano Floridi & Mariarosaria Taddeo - 2016 - Philosophical Transactions of the Royal Society A 374 (2083):20160360.
    This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information (...)
  • How the EU AI Act Seeks to Establish an Epistemic Environment of Trust. Calvin Wai-Loon Ho & Karel Caals - 2024 - Asian Bioethics Review 16 (3):345-372.
    With focus on the development and use of artificial intelligence (AI) systems in the digital health context, we consider the following questions: How does the European Union (EU) seek to facilitate the development and uptake of trustworthy AI systems through the AI Act? What does trustworthiness and trust mean in the AI Act, and how are they linked to some of the ongoing discussions of these terms in bioethics, law, and philosophy? What are the normative components of trustworthiness? And how (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Reviewing the Case of Online Interpersonal Trust. Mirko Tagliaferri - 2023 - Foundations of Science 28 (1):225-254.
    The aim of this paper is to better qualify the problem of online trust, that is, the problem of evaluating whether online environments have the proper design to enable trust. The paper shows that there is no unique answer, but only conditional considerations that depend on the conception of trust assumed and the features that are included in the environments themselves. In fact, the major issue concerning traditional debates surrounding online (...)
  • Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this (...)
  • Data Philanthropy and Individual Rights. Mariarosaria Taddeo - 2017 - Minds and Machines 27 (1):1-5.
  • Information Societies, Ethical Enquiries. Mariarosaria Taddeo & Elizabeth Buchanan - 2015 - Philosophy and Technology 28 (1):5-10.
    The special issue collects a selection of papers presented during the Computer Ethics: Philosophical Enquiries (CEPE) 2013 conference. This is a series of conferences organized by the International Association for Ethics and Information Technology, a professional organization formed in 2001 that gathers experts in information and computer ethics, prompting interdisciplinary research and discussions on ethical problems related to the design and deployment of information and communication technologies. During the past two decades, CEPE conferences have been a focal point for (...)