  • Editors' Overview: Moral Responsibility in Technology and Engineering.Neelke Doorn & Ibo van de Poel - 2012 - Science and Engineering Ethics 18 (1):1-11.
    DOI 10.1007/s11948-011-9285-z.
  • Philosophy of technology.Maarten Franssen - 2010 - Stanford Encyclopedia of Philosophy.
  • Anonymity.Kathleen Wallace - 1999 - Ethics and Information Technology 1 (1):21-31.
    Anonymity is a form of nonidentifiability which I define as noncoordinatability of traits in a given respect. This definition broadens the concept, freeing it from its primary association with naming. I analyze different ways anonymity can be realized. I also discuss some ethical issues, such as privacy, accountability and other values which anonymity may serve or undermine. My theory can also conceptualize anonymity in information systems where, for example, privacy and accountability are at issue.
  • Generative Artificial Intelligence and Authorship Gaps.Tamer Nawar - 2024 - American Philosophical Quarterly 61 (4):355-367.
    The ever increasing use of generative artificial intelligence raises significant questions about authorship and related issues such as credit and accountability. In this paper, I consider whether works produced by means of users inputting natural language prompts into Generative Adversarial Networks are works of authorship. I argue that they are not. This is not due to concerns about randomness or machine-assistance compromising human labor or intellectual vision, but instead due to the syntactical and compositional limitations of existing AI systems in (...)
  • Owning Decisions: AI Decision-Support and the Attributability-Gap.Jannik Zeiser - 2024 - Science and Engineering Ethics 30 (4):1-19.
    Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make (...)
  • From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap.Tianqi Kou - manuscript
    Two goals - improving replicability and accountability of Machine Learning research respectively, have accrued much attention from the AI ethics and the Machine Learning community. Despite sharing the measures of improving transparency, the two goals are discussed in different registers - replicability registers with scientific reasoning whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap - holding Machine Learning scientists accountable for Machine Learning harms due to them being far from sites of application, this paper (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Automated decision-making and the problem of evil.Andrea Berber - 2023 - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Moral distance, AI, and the ethics of care.Carolina Villegas-Galaviz & Kirsten Martin - forthcoming - AI and Society:1-12.
    This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted (...)
  • Self-fulfilling Prophecy in Practical and Automated Prediction.Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  • The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of (...)
  • The Future of Value Sensitive Design.Batya Friedman, David Hendry, Steven Umbrello, Jeroen Van Den Hoven & Daisy Yoo - 2020 - Paradigm Shifts in ICT Ethics: Proceedings of the 18th International Conference ETHICOMP 2020.
    In this panel, we explore the future of value sensitive design (VSD). The stakes are high. Many in public and private sectors and in civil society are gradually realizing that taking our values seriously implies that we have to ensure that values effectively inform the design of technology which, in turn, shapes people’s lives. Value sensitive design offers a highly developed set of theory, tools, and methods to systematically do so.
  • Primer on an ethics of AI-based decision support systems in the clinic.Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare (...)
  • Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  • From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Transformation²: Making software engineering accountable for sustainability.Christoph Schneider & Stefanie Betz - 2022 - Journal of Responsible Technology 10 (C):100027.
  • Psychological consequences of legal responsibility misattribution associated with automated vehicles.Peng Liu, Manqing Du & Tingting Li - 2021 - Ethics and Information Technology 23 (4):763-776.
    A human driver and an automated driving system might share control of automated vehicles in the near future. This raises many concerns associated with the assignment of responsibility for negative outcomes caused by them; one is that the human driver might be required to bear the brunt of moral and legal responsibilities. The psychological consequences of responsibility misattribution have not yet been examined. We designed a hypothetical crash similar to Uber’s 2018 fatal crash. We incorporated five legal responsibility attributions. Participants (...)
  • The Boeing 737 MAX: Lessons for Engineering Ethics.Joseph Herkert, Jason Borenstein & Keith Miller - 2020 - Science and Engineering Ethics 26 (6):2957-2974.
    The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing’s practices and culture. Explanations for the crashes include: design flaws within the MAX’s new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing’s chief competitor, Airbus; Boeing’s lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the (...)
  • Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation.Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching & Peter Dabrock - forthcoming - AI and Society:1-15.
    Background Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders. Methods To explore this issue in a multi-faceted manner, we conducted (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a wideningresponsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Explaining Epistemic Opacity.Ramón Alvarado - unknown
    Conventional accounts of epistemic opacity, particularly those that stem from the definitive work of Paul Humphreys, typically point to limitations on the part of epistemic agents to account for the distinct ways in which systems, such as computational methods and devices, are opaque. They point, for example, to the lack of technical skill on the part of an agent, the failure to meet standards of best practice, or even the nature of an agent as reasons why epistemically relevant elements of (...)
  • Computing and moral responsibility.Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Computing and moral responsibility.Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
  • Indigenous, feminine and technologist relational philosophies in the time of machine learning.Troy A. Richardson - 2023 - Ethics and Education 18 (1):6-22.
    Machine Learning (ML) and Artificial Intelligence (AI) are for many the defining features of the early twenty-first century. With such a provocation, this essay considers how one might understand the relational philosophies articulated by Indigenous learning scientists, Indigenous technologists and feminine philosophers of education as co-constitutive of an ensemble mediating or regulating an educative philosophy interfacing with ML/AI. In these mediations, differing vocabularies – kin, the one caring, cooperative – are recognized for their ethical commitments, yet challenging epistemic claims in (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems : the Case of the Autonomous Vehicle.Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers.Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • “Ain’t No One Here But Us Social Forces”: Constructing the Professional Responsibility of Engineers. [REVIEW]Michael Davis - 2012 - Science and Engineering Ethics 18 (1):13-34.
    There are many ways to avoid responsibility, for example, explaining what happens as the work of the gods, fate, society, or the system. For engineers, “technology” or “the organization” will serve this purpose quite well. We may distinguish at least nine (related) senses of “responsibility”, the most important of which are: (a) responsibility-as-causation (the storm is responsible for flooding), (b) responsibility-as-liability (he is the person responsible and will have to pay), (c) responsibility-as-competency (he’s a responsible person, that is, he’s rational), (...)
  • Can Retributivism and Risk Assessment Be Reconciled?Toby Napoletano & Hanna Kiri Gunn - 2024 - Criminal Justice Ethics 43 (1):37-56.
    In this paper we explore whether or not the use of risk assessment tools in criminal sentencing can be made compatible with a retributivist justification of punishment. While there has been considerable discussion of the accuracy and fairness of these tools, such discussion assumes that one’s recidivism risk is relevant to the severity of punishment that one should receive. But this assumption only holds on certain accounts of punishment, and seems to conflict with retributivist justifications of punishment. Drawing on the (...)
  • Algorithms and values in justice and security.Paul Hayes, Ibo van de Poel & Marc Steen - 2020 - AI and Society 35 (3):533-555.
    This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...)
  • How to cross boundaries in the information society: vulnerability, responsiveness, and accountability.Massimo Durante - 2013 - Acm Sigcas Computers and Society 43 (1):9-21.
    The paper examines how the current evolution and growth of ICTs enables a greater number of individuals to communicate and interact with each other on a larger scale: this phenomenon enables people to cross the conventional boundaries set up across modernity. The presence of diverse barriers does not however disappear, and we therefore still experience cultural, political, legal and moral boundaries in the globalised Information Society. The paper suggests that the issue of boundaries is to be understood, primarily, in philosophical (...)
  • Precaution as a Risk in Data Gaps and Sustainable Nanotechnology Decision Support Systems: a Case Study of Nano-Enabled Textiles Production.Irini Furxhi, Finbarr Murphy, Craig A. Poland, Martin Cunneen & Martin Mullins - 2021 - NanoEthics 15 (3):245-270.
    In light of the potential long-term societal and economic benefits of novel nano-enabled products, there is an evident need for research and development to focus on closing the gap in nano-materials safety. Concurrent reflection on the impact of decision-making tools, which may lack the capability to assist sophisticated judgements around the risks and benefits of the introduction of novel products, is essential. This paper addresses the potential for extant decision support tools to default to a precautionary principle position in the (...)
  • Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data.Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2):205395171774353.
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such (...)
  • Beyond Connectivity: The Internet of Food Architecture Between Ethics and the EU Citizenry.Luca Leone - 2017 - Journal of Agricultural and Environmental Ethics 30 (3):423-438.
    This contribution deals with the ethical challenges arising from the IoT landscape with reference to a specific context, i.e. the realm of agri-food. In this sector, innumerable web-connected tools, platforms and sensors are constantly interacting with consumers/users/citizens, by reshaping and redefining the core elements and functions of machine–human being relationships. By sketching out the main pillars which ethics of the Internet of Food is founded on, my argument posits that the civic hybridization of knowledge production mediated by IoT technologies may (...)
  • The effects of social ties on coordination: conceptual foundations for an empirical analysis. [REVIEW]Giuseppe Attanasi, Astrid Hopfensitz, Emiliano Lorini & Frédéric Moisan - 2014 - Phenomenology and the Cognitive Sciences 13 (1):47-73.
    This paper investigates the influence that social ties can have on behavior. After defining the concept of social ties that we consider, we introduce an original model of social ties. The impact of such ties on social preferences is studied in a coordination game with outside option. We provide a detailed game theoretical analysis of this game while considering various types of players, i.e., self-interest maximizing, inequity averse, and fair agents. In addition to these approaches that require strategic reasoning in (...)
  • The perfect technological storm: artificial intelligence and moral complacency.Marten H. L. Kaas - 2024 - Ethics and Information Technology 26 (3):1-12.
    Artificially intelligent machines are different in kind from all previous machines and tools. While many are used for relatively benign purposes, the types of artificially intelligent machines that we should care about, the ones that are worth focusing on, are the machines that purport to replace humans entirely and thereby engage in what Brian Cantwell Smith calls “judgment.” As impressive as artificially intelligent machines are, their abilities are still derived from humans and as such lack the sort of normative commitments (...)
  • Public procurement of artificial intelligence systems: new risks and future proofing.Merve Hickok - forthcoming - AI and Society:1-15.
    Public entities around the world are increasingly deploying artificial intelligence and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to private sector: increase efficiency and speed of transactions and lower the costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently AI systems (...)
  • Legal Issues and Risks of the Artificial Intelligence Use in Space Activity.Larysa Soroka, Anna Danylenko & Maksym Sokiran - 2022 - Philosophy and Cosmology 28:118-135.
    Ever since the use of artificial intelligence in the space sector, there have been progressive changes that brought both benefits and risks. AI technologies have started to influence human rights and freedoms, relationships with public authorities and the private sector. Therefore, the use of AI involves legal obligations caused by the influence and consequences of the dependency on the specified technologies. This indicates the need to investigate the issue of the legal problems and risks of using AI in space activity (...)
  • Implementing moral decision making faculties in computers and robots.Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be en par with human instruction in specific (...)
  • Trusting the (ro)botic other.Paul B. de Laat - 2015 - Acm Sigcas Computers and Society 45 (3):255-260.
    How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in their back corridors; as (...)
  • The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency.Florian Cech - 2021 - Journal of Responsible Technology 7:100015.
  • Infrastructural justice for responsible software engineering.Sarah Robinson, Jim Buckley, Luigina Ciolfi, Conor Linehan, Clare McInerney, Bashar Nuseibeh, John Twomey, Irum Rauf & John McCarthy - 2024 - Journal of Responsible Technology 19 (C):100087.