  • Evil and roboethics in management studies.Enrico Beltramini - 2019 - AI and Society 34 (4):921-929.
    In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reaction to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
  • A Code of Digital Ethics: laying the foundation for digital ethics in a science and technology company.Sarah J. Becker, André T. Nemat, Simon Lucas, René M. Heinitz, Manfred Klevesath & Jean Enno Charton - 2023 - AI and Society 38 (6):2629-2639.
    The rapid and dynamic nature of digital transformation challenges companies that wish to develop and deploy novel digital technologies. Like other actors faced with this transformation, companies need to find robust ways to ethically guide their innovations and business decisions. Digital ethics has recently featured in a plethora of both practical corporate guidelines and compilations of high-level principles, but there remains a gap concerning the development of sound ethical guidance in specific business contexts. As a multinational science and technology company (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Training philosopher engineers for better AI.Brian Ball & Alexandros Koliousis - 2023 - AI and Society 38 (2):861-868.
    There is a deluge of AI-assisted decision-making systems, where our data serve as a proxy for our actions suggested by AI. The more closely we investigate our data (raw input, their learned representations, or the suggested actions), the more "bugs" we begin to discover. Outside of their controlled test environments, AI systems may encounter situations investigated primarily by those in other disciplines, but experts in those fields are typically excluded from the design process and are only invited to attest to the ethical features (...)
  • Beyond mystery: Putting algorithmic accountability in context.Andrea Ballestero, Baki Cakici & Elizabeth Reddy - 2019 - Big Data and Society 6 (1).
    Critical algorithm scholarship has demonstrated the difficulties of attributing accountability for the actions and effects of algorithmic systems. In this commentary, we argue that we cannot stop at denouncing the lack of accountability for algorithms and their effects but must engage the broader systems and distributed agencies that algorithmic systems exist within; including standards, regulations, technologies, and social relations. To this end, we explore accountability in “the Generated Detective,” an algorithmically generated comic. Taking up the mantle of detectives ourselves, we (...)
  • Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations.Kristina Astromskė, Eimantas Peičius & Paulius Astromskis - 2021 - AI and Society 36 (2):509-520.
    This paper inquires into the complex issue of informed consent when applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes ethical aspects of the alleged right to explanation, privacy, and informed consent when applying artificial intelligence in medical diagnostic consultations. This analysis is followed by a legal analysis of the limits and requirements for (...)
  • Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • Politics of data reuse in machine learning systems: Theorizing reuse entanglements.Louise Amoore, Mikkel Flyverbom, Kristian Bondo Hansen & Nanna Bonde Thylstrup - 2022 - Big Data and Society 9 (2).
    Policy discussions and corporate strategies on machine learning are increasingly championing data reuse as a key element in digital transformations. These aspirations are often coupled with a focus on responsibility, ethics and transparency, as well as emergent forms of regulation that seek to set demands for corporate conduct and the protection of civic rights. Protective measures include methods of traceability and assessments of ‘good’ and ‘bad’ datasets and algorithms that are considered to be traceable, stable and contained. However, (...)
  • The Epistemology of Non-distributive Profiles.Patrick Allo - 2020 - Philosophy and Technology 33 (3):379-409.
    The distinction between distributive and non-distributive profiles figures prominently in current evaluations of the ethical and epistemological risks that are associated with automated profiling practices. The diagnosis that non-distributive profiles may coincidentally situate an individual in the wrong category is often perceived as the central shortcoming of such profiles. According to this diagnosis, most risks can be retraced to the use of non-universal generalisations and various other statistical associations. This article develops a top-down analysis of non-distributive profiles in which this (...)
  • A Constructionist Philosophy of Logic.Patrick Allo - 2017 - Minds and Machines 27 (3):545-564.
    This paper develops and refines the suggestion that logical systems are conceptual artefacts that are the outcome of a design-process by exploring how a constructionist epistemology and meta-philosophy can be integrated within the philosophy of logic.
  • Tensions in transparent urban AI: designing a smart electric vehicle charge point.Kars Alfrink, Ianus Keller, Neelke Doorn & Gerd Kortuem - 2023 - AI and Society 38 (3):1049-1065.
    The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent (...)
  • The Influence of Business Incentives and Attitudes on Ethics Discourse in the Information Technology Industry.Sanju Ahuja & Jyoti Kumar - 2021 - Philosophy and Technology 34 (4):941-966.
    As information technologies have become synonymous with progress in modern society, several ethical concerns have surfaced about their societal implications. In the past few decades, information technologies have had a value-laden impact on social evolution. However, there is limited agreement on the responsibility of businesses and innovators concerning the ethical aspects of information technologies. There is a need to understand the role of business incentives and attitudes in driving technological progress and to understand how they steer the ethics discourse on (...)
  • Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection.Jeremias Adams-Prassl, Isabelle Wildhaber & Isabel Ebert - 2021 - Big Data and Society 8 (1).
    Data-driven technologies have come to pervade almost every aspect of business life, extending to employee monitoring and algorithmic management. How can employee privacy be protected in the age of datafication? This article surveys the potential and shortcomings of a number of legal and technical solutions to show the advantages of human rights-based approaches in addressing corporate responsibility to respect privacy and strengthen human agency. Based on this notion, we develop a process-oriented model of Privacy Due Diligence to complement existing frameworks (...)
  • Engineering Trustworthiness in the Online Environment.Hugh Desmond - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Rowman and Littlefield. pp. 215-237.
    Algorithm engineering is sometimes portrayed as a new 21st century return of manipulative social engineering. Yet algorithms are necessary tools for individuals to navigate online platforms. Algorithms are like a sensory apparatus through which we perceive online platforms: this is also why individuals can be subtly but pervasively manipulated by biased algorithms. How can we better understand the nature of algorithm engineering and its proper function? In this chapter I argue that algorithm engineering can be best conceptualized as a type (...)
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study, conducted with participants from the US, Japan and Germany, based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime. We find that (i) people manifest a considerable willingness (...)
  • Reframing data ethics in research methods education: a pathway to critical data literacy.Javiera Atenas, Leo Havemann & Cristian Timmermann - 2023 - International Journal of Educational Technology in Higher Education 20:11.
    This paper presents an ethical framework designed to support the development of critical data literacy for research methods courses and data training programmes in higher education. The framework we present draws upon our reviews of literature, course syllabi and existing frameworks on data ethics. For this research we reviewed 250 research methods syllabi from across the disciplines, as well as 80 syllabi from data science programmes to understand how or if data ethics was taught. We also reviewed 12 data ethics (...)
  • Trust and Trustworthiness in AI Ethics.Karoline Reinhardt - 2022 - In AI and Ethics. Springer.
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  • Jaz u odgovornosti u informatičkoj eri [The Responsibility Gap in the Information Age].Jelena Mijić - 2023 - Društvo I Politika 4 (4):25-38.
    We attribute responsibility with the intention of achieving some goal. One of the commonplaces in the philosophical literature is that moral responsibility can be attributed to a person if at least two conditions are met: that the agent has control over their actions, and that they are able to give reasons in support of their action. However, the fourth industrial revolution is characterized by socio-technological phenomena that potentially confront us with the so-called responsibility gap problem. Discussions of responsibility in the context of artificial intelligence are characterized by an unclear and indeterminate use of this concept. In order to (...)
  • Agency Laundering and Algorithmic Decision Systems.Alan Rubel, Adam Pham & Clinton Castro - 2019 - In N. Taylor, C. Christian-Lamb, M. Martin & B. Nardi (eds.), Information in Contemporary Society (Lecture Notes in Computer Science). Springer Nature. pp. 590-598.
    This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019.
  • The Effectiveness of Embedded Values Analysis Modules in Computer Science Education: An Empirical Study.Matthew Kopec, Meica Magnani, Vance Ricks, Roben Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, Ronald Sandler, Christo Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Kevin Mills & Mark Wells - 2023 - Big Data and Society 10 (1).
    Embedding ethics modules within computer science courses has become a popular response to the growing recognition that CS programs need to better equip their students to navigate the ethical dimensions of computing technologies like AI, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern’s program that embeds values analysis modules into CS courses. The resulting data suggest (...)
  • Philosophy of technology.Maarten Franssen - 2010 - Stanford Encyclopedia of Philosophy.
  • Ethics of Artificial Intelligence.Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • Recognizing Argument Types and Adding Missing Reasons.Christoph Lumer - 2019 - In Bart J. Garssen, David Godden, Gordon Mitchell & Jean Wagemans (eds.), Proceedings of the Ninth Conference of the International Society for the Study of Argumentation (ISSA). [Amsterdam, July 3-6, 2018.]. Amsterdam (Netherlands): pp. 769-777.
    The article develops and justifies, on the basis of the epistemological argumentation theory, two central pieces of the theory of evaluative argumentation interpretation: 1. criteria for recognizing argument types and 2. rules for adding reasons to create ideal arguments. Ad 1: The criteria for identifying argument types are a selection of essential elements from the definitions of the respective argument types. Ad 2: After presenting the general principles for adding reasons (benevolence, authenticity, immanence, optimization), heuristics are proposed for finding missing (...)
  • On Social Machines for Algorithmic Regulation.Nello Cristianini & Teresa Scantamburlo - manuscript
    Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of 'social machine' and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social (...)
  • Machine Decisions and Human Consequences.Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but (...)
  • The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)