  • The German Act on Autonomous Driving: Why Ethics Still Matters. Alexander Kriebitz, Raphael Max & Christoph Lütge - 2022 - Philosophy and Technology 35 (2):1-13.
    The German Act on Autonomous Driving constitutes the first national framework on level four autonomous vehicles and has received attention from policy makers, AI ethics scholars and legal experts in autonomous driving. Owing to Germany’s role as a global hub for car manufacturing, the following paper sheds light on the act’s position within the ethical discourse and how it reconfigures the balance between legislation and ethical frameworks. Specifically, in this paper, we highlight areas that need to be worked out further (...)
  • Biased Humans, (Un)Biased Algorithms? Florian Pethig & Julia Kroenung - 2022 - Journal of Business Ethics 183 (3):637-652.
    Previous research has shown that algorithmic decisions can reflect gender bias. The increasingly widespread utilization of algorithms in critical decision-making domains (e.g., healthcare or hiring) can thus lead to broad and structural disadvantages for women. However, women often experience bias and discrimination through human decisions and may turn to algorithms in the hope of receiving neutral and objective evaluations. Across three studies (N = 1107), we examine whether women’s receptivity to algorithms is affected by situations in which they believe that (...)
  • Fairness, Explainability and In-Between: Understanding the Impact of Different Explanation Methods on Non-Expert Users’ Perceptions of Fairness Toward an Algorithmic System. Doron Kliger, Tsvi Kuflik & Avital Shulner-Tal - 2022 - Ethics and Information Technology 24 (1).
    In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness about the need to explain their underlying decision-making process and resulting outcomes. Since oftentimes these systems are being considered as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measurements. (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philipp Schmidt & Sophie Loidolt - 2023 - Philosophy and Technology 36 (3):1-32.
    In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by humans. (...)
  • How Do People Judge the Credibility of Algorithmic Sources? Donghee Shin - 2022 - AI and Society 37 (1):81-96.
    The exponential growth of algorithms has made establishing a trusted relationship between human and artificial intelligence increasingly important. Algorithm systems such as chatbots can play an important role in how users assess the credibility of algorithms. Unless users believe the chatbot’s information is credible, they are not likely to be willing to act on the recommendation. This study examines how literacy and user trust influence perceptions of chatbot information credibility. Results confirm that algorithmic literacy and users’ trust play a pivotal role (...)
  • Algorithms Don’t Have a Future: On the Relation of Judgement and Calculation. Daniel Stader - 2024 - Philosophy and Technology 37 (1):1-29.
    This paper is about the opposition of judgement and calculation. This opposition has been a traditional anchor of critiques concerned with the rise of AI decision making over human judgement. Contrary to these approaches, it is argued that human judgement is not and cannot be replaced by calculation, but that it is human judgement that contextualises computational structures and gives them meaning and purpose. The article focuses on the epistemic structure of algorithms and artificial neural networks to find that they (...)
  • Creating Meaningful Work in the Age of AI: Explainable AI, Explainability, and Why It Matters to Organizational Designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to organizations improving customer experiences and internal processes through the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what are called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)