References
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • The other question: can and should robots have rights?David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society 36 (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • A Rawlsian algorithm for autonomous vehicles.Derek Leben - 2017 - Ethics and Information Technology 19 (2):107-115.
    Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the (...)
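    A minimal sketch of the maximin ("Rawlsian") choice rule the abstract describes, assuming the vehicle's survival-probability estimates are already given; the function and variable names are illustrative, not Leben's actual implementation (Python):

        # Pick the action whose worst-off person fares best (maximin), given the
        # vehicle's estimated survival probability for each person under each action.
        # Illustrative only; not Leben's (2017) actual code.
        def rawlsian_choice(survival_estimates):
            """survival_estimates: {action: {person: probability of survival}}."""
            def worst_off(action):
                return min(survival_estimates[action].values())
            return max(survival_estimates, key=worst_off)

        estimates = {
            "brake":  {"pedestrian": 0.4, "passenger": 0.9},
            "swerve": {"pedestrian": 0.7, "passenger": 0.6},
        }
        print(rawlsian_choice(estimates))  # "swerve": its worst case (0.6) beats braking's (0.4)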
  • Artificial moral and legal personhood.John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
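    A toy sketch of the kind of "moral reasoning engine" the abstract gestures at: a system that, given a coarse description of an action, returns arguments phrased in terms of first-order theories. The feature names and rule encodings are invented for illustration and do not reproduce Klincewicz's proposal in any detail (Python):

        # Present theory-grounded arguments about an action; crude placeholders only.
        from dataclasses import dataclass

        @dataclass
        class ActionDescription:
            name: str
            universalizable: bool        # rough Kantian universalizability test
            treats_as_mere_means: bool   # rough humanity-formula test
            expected_net_utility: float  # rough utilitarian estimate

        def arguments_for(action):
            args = []
            if action.universalizable and not action.treats_as_mere_means:
                args.append(f"Kantian: '{action.name}' passes the universalizability and "
                            "humanity tests, so it is permissible.")
            else:
                args.append(f"Kantian: '{action.name}' fails at least one of the tests, "
                            "so there is a reason against it.")
            verdict = "for" if action.expected_net_utility > 0 else "against"
            args.append(f"Utilitarian: an expected net utility of "
                        f"{action.expected_net_utility:+.1f} counts {verdict} '{action.name}'.")
            return args

        for line in arguments_for(ActionDescription("break a promise to help a stranger",
                                                    universalizable=False,
                                                    treats_as_mere_means=False,
                                                    expected_net_utility=2.0)):
            print(line)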
  • Why robots should not be treated like animals.Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  • Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots.Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Automated decision-making and the problem of evil.Andrea Berber - 2023 - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • On the problem of making autonomous vehicles conform to traffic law.Henry Prakken - 2017 - Artificial Intelligence and Law 25 (3):341-363.
    Autonomous vehicles are one of the most spectacular recent developments of Artificial Intelligence. Among the problems that still need to be solved before they can fully autonomously participate in traffic is the one of making their behaviour conform to the traffic laws. This paper discusses this problem by way of a case study of Dutch traffic law. First it is discussed to what extent Dutch traffic law exhibits features that are traditionally said to pose challenges for AI & Law models, (...)
  • How AI can AID bioethics.Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how (...)
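    A schematic sketch of what a hybrid (top-down constraints plus bottom-up learned scoring) allocation rule can look like; the constraint names, feature names, and weights are invented for illustration and are not the method the paper proposes (Python):

        # Hybrid allocation: explicit top-down constraints filter candidates,
        # then a score with weights fitted to prior expert judgments ranks them.
        def allocate_kidney(candidates, learned_weights):
            eligible = [c for c in candidates
                        if c["blood_type_compatible"] and not c["active_infection"]]
            if not eligible:
                return None
            def score(c):
                return sum(learned_weights[f] * c[f] for f in learned_weights)
            return max(eligible, key=score)

        weights = {"expected_life_years_gained": 1.0, "time_on_waitlist_years": 0.3}
        patients = [
            {"id": "A", "blood_type_compatible": True, "active_infection": False,
             "expected_life_years_gained": 12.0, "time_on_waitlist_years": 2.0},
            {"id": "B", "blood_type_compatible": True, "active_infection": True,
             "expected_life_years_gained": 20.0, "time_on_waitlist_years": 5.0},
        ]
        print(allocate_kidney(patients, weights)["id"])  # "A": B is excluded by a top-down constraint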
  • Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...)
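    A minimal sketch of the "top-down eudaimonic reward" idea in a commons setting: each agent's reward blends its private payoff with the health of the shared resource, with a virtue-like disposition controlling the blend. The dynamics and numbers are invented and do not reproduce Stenseke's actual simulation (Python):

        # Agents harvest from a shared stock; a temperance-like disposition tempers
        # harvesting and weights a eudaimonic (common-good) term in the reward.
        import random

        class VirtuousAgent:
            def __init__(self, temperance, learning_rate=0.1):
                self.temperance = temperance        # disposition in [0, 1]
                self.harvest_propensity = 0.5       # learned bottom-up
                self.lr = learning_rate

            def act(self, share_of_stock):
                return share_of_stock * self.harvest_propensity * (1.0 - self.temperance)

            def learn(self, private_payoff, commons_health):
                # Eudaimonic reward: blend of own payoff and the state of the commons.
                reward = (1 - self.temperance) * private_payoff + self.temperance * commons_health
                self.harvest_propensity += self.lr * (reward - 0.5)
                self.harvest_propensity = min(max(self.harvest_propensity, 0.0), 1.0)

        stock = 10.0
        agents = [VirtuousAgent(temperance=random.uniform(0.2, 0.8)) for _ in range(4)]
        for _ in range(20):
            harvests = [a.act(stock / len(agents)) for a in agents]
            stock = max(stock - sum(harvests), 0.0) * 1.2   # the resource regrows by 20%
            for a, h in zip(agents, harvests):
                a.learn(private_payoff=h, commons_health=stock / 10.0)
        print(f"final stock: {stock:.2f}")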
  • Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    As A.I.s and machines are applied across an ever-wider range of social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an (...)
  • The Moral Turing Test: a defense.Einar Duenger Bohn - 2024 - Philosophy and Technology 37 (3):1-13.
    In this paper, I raise the question whether an artificial intelligence can act morally. I first sketch and defend a general picture of what is at stake in this question. I then sketch and defend a behavioral test, known as the Moral Turing Test, as a good sufficiency test for an artificial intelligence acting morally. I end by discussing some general anticipated objections.
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence.Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
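    A compressed sketch of the core move the abstract describes: represent moral heterogeneity as a few latent classes with different attribute weights, and let the system abstain ("I don't know") when the classes rank the options differently. The class shares, weights, and attributes are invented for illustration (Python):

        # Latent-class moral choice with abstention on disagreement.
        def choose(options, classes):
            """options: {name: {attribute: value}}; classes: [(share, {attribute: weight})]."""
            def class_score(weights, attrs):
                return sum(weights[a] * v for a, v in attrs.items())
            per_class_best = [max(options, key=lambda o: class_score(w, options[o]))
                              for _, w in classes]
            if len(set(per_class_best)) > 1:
                return "I don't know"   # the latent classes disagree: abstain
            # Classes agree on the top option; return the share-weighted winner.
            return max(options, key=lambda o: sum(s * class_score(w, options[o])
                                                  for s, w in classes))

        options = {"swerve": {"harm": -0.2, "rule_violation": -0.8},
                   "stay":   {"harm": -0.6, "rule_violation":  0.0}}
        classes = [(0.6, {"harm": 1.0, "rule_violation": 0.2}),   # consequence-focused class
                   (0.4, {"harm": 0.3, "rule_violation": 1.0})]   # rule-focused class
        print(choose(options, classes))  # "I don't know": the classes rank the options differently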
  • From posthumanism to ethics of artificial intelligence.Rajakishore Nath & Riya Manna - 2023 - AI and Society 38 (1):185-196.
    Posthumanism is one of the most prominent and significant concepts of the present day. It has impacted numerous contemporary fields, such as philosophy, literary theory, art, and culture, over the last few decades. The movement has concentrated on present-day technological development, driven by industrial advancement and the current proliferation of technology in daily life. Posthumanism indicates a deconstruction of our radical conception of the ‘human’, and it further shifts our societal value-alignment system to a novel dimension. The majority (...)
  • Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots.Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that are meant to interact with, respect, and embody human ethical values. However, the conceptual and practical problems of building such systems have not yet been resolved, and they pose a significant challenge for computational modeling. It seems that the lack of success in constructing such robots is due, ceteris paribus, to the conceptual and algorithmic limitations of current designs for ethical robots. This paper proposes a new approach for developing ethical capacities in (...)
  • Automatisierte Ungleichheit: Ethik der Künstlichen Intelligenz in der biopolitischen Wende des Digitalen Kapitalismus [Automated Inequality: Ethics of Artificial Intelligence in the Biopolitical Turn of Digital Capitalism].Rainer Mühlhoff - 2020 - Deutsche Zeitschrift für Philosophie 68 (6):867-890.
    This paper sets out the notion of a current “biopolitical turn of digital capitalism” resulting from the increasing deployment of AI and data analytics technologies in the public sector. With applications of AI-based automated decisions currently shifting from the domain of business to customer (B2C) relations to government to citizen (G2C) relations, a new form of governance arises that operates through “algorithmic social selection”. Moreover, the paper describes how the ethics of AI is at an impasse concerning these larger societal (...)
  • A Friendly Critique of Levinasian Machine Ethics.Patrick Gamez - 2022 - Southern Journal of Philosophy 60 (1):118-149.
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2025 - Philosophical Studies 182 (1):55-85.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Do we want AI judges? The acceptance of AI judges’ judicial decision-making on moral foundations.Taenyun Kim & Wei Peng - forthcoming - AI and Society:1-14.
    This study explored the acceptance of artificial intelligence-based judicial decision-making (AI-JDM) as compared to human judges, focusing on the moral foundations of the cases involved using within-subject experiments. The study found a general aversion toward AI-JDM regarding perceived risk, permissibility, and social approval. However, when cases are rooted in the moral foundation of fairness, AI-JDM receives slightly higher social approval, though the effect size remains small. The study also found that demographic factors like racial/ethnic status and age significantly affect these (...)
  • Mind the gap: bridging the divide between computer scientists and ethicists in shaping moral machines.Pablo Muruzábal Lamberti, Gunter Bombaerts & Wijnand IJsselsteijn - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. This paper revisits concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial (...)
  • Moral sensitivity and the limits of artificial moral agents.Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  • Digitized Future of Medicine: Challenges for Bioethics.Elena G. Grebenshchikova & Pavel D. Tishchenko - 2020 - Russian Journal of Philosophical Sciences 63 (2):83-103.
    The article discusses the challenges, benefits, and risks that, from a bioethical perspective, arise because of the development of eHealth projects. The conceptual framework of the research is based on H. Jonas’ principles of the ethics of responsibility and B.G. Yudin’s anthropological ideas on human beings as agents who constantly change their own boundaries in the “zone of phase transitions.” The article focuses on the events taking place in the zone of phase transitions between humans and machines in eHealth. (...)
  • Apprehending AI moral purpose in practical wisdom.Mark Graves - 2024 - AI and Society 39 (3):1335-1348.
    Practical wisdom enables moral decision-making and action by aligning one’s apprehension of proximate goods with a distal, socially embedded interpretation of a more ultimate Good. A focus on purpose within the overall process mutually informs human moral psychology and moral AI development in their examinations of practical wisdom. AI practical wisdom could ground an AI system’s apprehension of reality in a sociotechnical moral process committed to orienting AI development and action in light of a pluralistic, diverse interpretation of that Good. (...)
  • Formalizing preference utilitarianism in physical world models.Caspar Oesterheld - 2016 - Synthese 193 (9).
    Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a formal (...)
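    One way to render the general shape of such a formalization in symbols; the notation here is chosen for illustration and is not Oesterheld's own. The value of an action a is the posterior-expected degree to which the resulting history of the world model satisfies the preferences of the agents it contains,

        V(a) = \sum_{m \in \mathcal{M}} P(m \mid e) \sum_{i \in \mathrm{Agents}(\tau(m, a))} s_i(\tau(m, a)),

    where \mathcal{M} is a set of candidate world models (for instance, cellular-automaton rules and initial configurations), P(m \mid e) is a Bayesian posterior over those models given evidence e, \tau(m, a) is the history produced by running model m with action a, and s_i measures how well that history satisfies agent i's preferences.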
  • Encoding Ethics to Compute Value-Aligned Norms.Marc Serramia, Manel Rodriguez-Soto, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar, Filippo Bistaffa, Paula Boddington, Michael Wooldridge & Carlos Ansotegui - 2023 - Minds and Machines 33 (4):761-790.
    Norms have been widely enacted in human and agent societies to regulate individuals’ actions. However, although legislators may have ethics in mind when establishing norms, moral values are only sometimes explicitly considered. This paper advances the state of the art by providing a method for selecting the norms to enact within a society that best aligns with the moral values of such a society. Our approach to aligning norms and values is grounded in the ethics literature. Specifically, from the literature’s (...)
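    A tiny sketch of norm selection as constrained optimisation, in the spirit the abstract describes: pick the feasible set of norms that maximises a value-weighted alignment score. The norms, values, scores, and the incompatibility relation are all invented for illustration and are not the paper's encoding (Python):

        # Exhaustive search over norm sets; alignment and compatibility data are toy values.
        from itertools import combinations

        norms = ["mask_mandate", "data_audit", "curfew"]
        alignment = {                      # how much each norm promotes each value
            "mask_mandate": {"safety": 0.8, "liberty": -0.3},
            "data_audit":   {"safety": 0.4, "liberty":  0.1},
            "curfew":       {"safety": 0.6, "liberty": -0.7},
        }
        value_weights = {"safety": 0.6, "liberty": 0.4}          # the society's value priorities
        incompatible = {frozenset({"mask_mandate", "curfew"})}   # norm pairs that cannot coexist

        def score(norm_set):
            return sum(value_weights[v] * alignment[n][v] for n in norm_set for v in value_weights)

        def feasible(norm_set):
            return not any(frozenset(p) in incompatible for p in combinations(norm_set, 2))

        candidates = (s for r in range(len(norms) + 1) for s in combinations(norms, r) if feasible(s))
        print(max(candidates, key=score))  # ('mask_mandate', 'data_audit')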
  • Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News.Galit Wellner & Dmytro Mykhailov - 2023 - Science and Engineering Ethics 29 (4):1-16.
    This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news still remains a wicked socio-technical problem that begs not only engineering but also ethical considerations. We suggest employing insights from the ethics of care, while maintaining its speculative stance, to ask how algorithms and design processes would differ if they generated care and fought fake news. After reviewing the (...)
  • The machine’s role in human’s service automation and knowledge sharing.Mihály Héder - 2014 - AI and Society 29 (2):185-192.
    The possibility of interacting with remote services in natural language opens up new opportunities for sharing knowledge and for automating services. Easy-to-use, text-based interfaces might provide more democratic access to legal information, government services, and everyday knowledge as well. However, the methodology of engineering robust natural language interfaces is very diverse, and widely deployed solutions are still yet to come. The main contribution is a detailed problem analysis on the theoretical level, which reveals that a text-based interface is best understood (...)
  • From a variety of ethics to the integrity and congruence of research on biodiversity conservation.Claire Lajaunie - 2018 - Asian Bioethics Review 10 (4):313-332.
    This article aims to find the elements that are required for a common ethical approach that is suitable for the different perspectives adopted in integrative biodiversity conservation research. A general reflection on the integrity of research is a priority worldwide, with a common aim to promote good research practice. Beyond the relationship between researcher and research subject, the integrity of research is considered in a broader perspective which entails scientific integrity towards society. In research involving a variety of disciplines and (...)
  • Review of Artificial Intelligence: Reflections in Philosophy, Theology and the Social Sciences by Benedikt P. Göcke and Astrid Rosenthal-von der Pütten. [REVIEW]John-Stewart Gordon - 2021 - AI and Society 36 (2):655-659.
  • A Comprehensive Definition of Technology from an Ethological Perspective.La Shun L. Carroll - 2017 - Social Sciences.
    Definitions, uses, and understanding of technology have varied tremendously since Jacob Bigelow’s Elements of Technology in 1829. In addition to providing a frame of reference for understanding technology, the purpose of this study was to define or describe it conceptually. A determination of dimensions comprising technology was made by critiquing historical and contemporary examples of definition by Bigelow and Volti. An analytic-synthetic method was employed to deconstruct both definitions spanning two centuries to derive aspects of technology. Definitions relying on an (...)