References
  • Information ethics: an environmental approach to the digital divide. Luciano Floridi - 2002 - Philosophy in the Contemporary World 9 (1):39-45.
    As a full expression of techne, the information society has already posed fundamental ethical problems, whose complexity and global dimensions are rapidly evolving. What is the best strategy to construct an information society that is ethically sound? This is the question I discuss in this paper. The task is to formulate an information ethics that can treat the world of data, information, knowledge and communication as a new environment, the infosphere. This information ethics must be able to address and solve the ethical (...)
  • Ethics needs principles—four can encompass the rest—and respect for autonomy should be “first among equals”. R. Gillon - 2003 - Journal of Medical Ethics 29 (5):307-312.
    It is hypothesised and argued that “the four principles of medical ethics” can explain and justify, alone or in combination, all the substantive and universalisable claims of medical ethics and probably of ethics more generally. A request is renewed for falsification of this hypothesis showing reason to reject any one of the principles or to require any additional principle(s) that can’t be explained by one or some combination of the four principles. This approach is argued to be compatible with a (...)
  • Common morality versus specified principlism: Reply to Richardson. Bernard Gert, Charles M. Culver & K. Danner Clouser - 2000 - Journal of Medicine and Philosophy 25 (3):308-322.
    In his article 'Specifying, balancing and interpreting bioethical principles' (Richardson, 2000), Henry Richardson claims that the two dominant theories in bioethics - principlism, put forward by Beauchamp and Childress in Principles of Biomedical Ethics, and common morality, put forward by Gert, Culver and Clouser in Bioethics: A Return to Fundamentals - are deficient because they employ balancing rather than specification to resolve disputes between principles or rules. We show that, contrary to Richardson's claim, the major problem with principlism, either the (...)
  • The global landscape of AI ethics guidelines. A. Jobin, M. Ienca & E. Vayena - 2019 - Nature Machine Intelligence 1.
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  • A unified framework of five principles for AI in society. Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour. Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
  • Algorithmic and human decision making: for a double standard of transparency. Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132 (3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Principles of Biomedical Ethics. Ezekiel J. Emanuel, Tom L. Beauchamp & James F. Childress - 1995 - Hastings Center Report 25 (4):37.
    Book reviewed in this article: Principles of Biomedical Ethics. By Tom L. Beauchamp and James F. Childress.
  • Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
    The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to (...)
  • Principlism and Its Alleged Competitors. Tom L. Beauchamp - 1995 - Kennedy Institute of Ethics Journal 5 (3):181-198.
    Principles that provide general normative frameworks in bioethics have been criticized since the late 1980s, when several different methods and types of moral philosophy began to be proposed as alternatives or substitutes. Several accounts have emerged in recent years, including: (1) Impartial Rule Theory (supported in this issue by K. Danner Clouser), (2) Casuistry (supported in this issue by Albert Jonsen), and (3) Virtue Ethics (supported in this issue by Edmund D. Pellegrino). Although often presented as rival methods or theories, (...)
  • Principles alone cannot guarantee ethical AI. Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Common Morality as an Alternative to Principlism. K. Danner Clouser - 1995 - Kennedy Institute of Ethics Journal 5 (3):219-236.
    Unlike the principles of Kant, Mill, and Rawls, those of principlism are not action guides that stem from an underlying, integrated moral theory. Hence problems arise in reconciling the principles with each other and, indeed, in interpreting them as action guides at all, since they have no content in and of themselves. Another approach to "theory and method in bioethics" is presented as an alternative to principlism, though actually the "alternative" predates principlism by about 10 years. The alternative's account of (...)
  • Principlism and communitarianism. D. Callahan - 2003 - Journal of Medical Ethics 29 (5):287-291.
    The decline in the interest in ethical theory is first outlined, as a background to the author’s discussion of principlism. The author’s own stance, that of a communitarian philosopher, is then described, before the subject of principlism itself is addressed. Two problems stand in the way of the author’s embracing principlism: its individualistic bias and its capacity to block substantive ethical inquiry. The more serious problem the author finds to be its blocking function. Discussing the four scenarios the author finds (...)
  • Principlism and moral dilemmas: a new principle. J. P. DeMarco - 2005 - Journal of Medical Ethics 31 (2):101-105.
    Moral conflicts occur in theories that involve more than one principle. I examine basic ways of dealing with moral dilemmas in medical ethics and in ethics generally, and propose a different approach based on a principle I call the "mutuality principle". It is offered as an addition to Tom Beauchamp and James Childress' principlism. The principle calls for the mutual enhancement of basic moral values. After explaining the principle and its strengths, I test it by way of an examination of (...)
  • Applying a principle of explicability to AI research in Africa: should we do it? Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  • Artificial intelligence and the doctor–patient relationship expanding the paradigm of shared decision making. Giorgia Lorenzini, Laura Arbelaez Ossa, David Martin Shaw & Bernice Simone Elger - 2023 - Bioethics 37 (5):424-429.
    Artificial intelligence (AI) based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. For this reason, AI‐based CDSS has an impact on the doctor–patient relationship, shaping their decisions with its suggestions. We may be on the verge of a paradigm shift, where the doctor–patient relationship is no longer a dual relationship, but a triad. This paper analyses the role of AI‐based CDSS for shared decision‐making to better (...)
  • In Defence of Principlism in AI Ethics and Governance. Elizabeth Seger - 2022 - Philosophy and Technology 35 (2):1-7.
    It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to if and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and governance. (...)
  • Thinking through Technology: The Path between Engineering and Philosophy. Carl Mitcham - 1996 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 27 (2):359-360.
  • Expert responsibility in AI development. Maria Hedlund & Erik Persson - 2022 - AI and Society:1-12.
    The purpose of this paper is to discuss the responsibility of AI experts for guiding the development of AI in a desirable direction. More specifically, the aim is to answer the following research question: To what extent are AI experts responsible in a forward-looking way for effects of AI technology that go beyond the immediate concerns of the programmer or designer? AI experts, in this paper conceptualised as experts regarding the technological aspects of AI, have knowledge and control of AI (...)