Citations
  • The Human Use of Human Beings. Norbert Wiener - 1950 - Boston: Houghton Mifflin.
    As this book reveals, his vision was much more complex and interesting. He hoped that machines would release people from relentless and repetitive drudgery in order to achieve more creative pursuits.
  • Rebooting AI: Building Artificial Intelligence We Can Trust. Gary Marcus & Ernest Davis - 2019 - Vintage.
    Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the (...)
  • Group privacy. Bart van der Sloot, Luciano Floridi & Linnet Taylor (eds.) - 2016 - Springer Verlag.
    The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques and big data, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
  • It would be pretty immoral to choose a random algorithm. Helena Webb, Menisha Patel, Michael Rovatsos, Alan Davoust, Sofia Ceppi, Ansgar Koene, Liz Dowthwaite, Virginia Portillo, Marina Jirotka & Monica Cano - 2019 - Journal of Information, Communication and Ethics in Society 17 (2):210-228.
    Purpose The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • The ethics of information transparency. Matteo Turilli & Luciano Floridi - 2009 - Ethics and Information Technology 11 (2):105-112.
    The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what (...)
  • Detecting racial bias in algorithms and machine learning. Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
    Purpose The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and (...)
  • Trusting artificial intelligence in cybersecurity is a double-edged sword. Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. (...)
  • Regulate artificial intelligence to avert cyber arms race. Mariarosaria Taddeo & Luciano Floridi - 2018 - Nature 556 (7701):296-298.
    This paper argues that there is an urgent need for an international doctrine for cyberspace skirmishes before they escalate into conventional warfare.
  • Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
  • The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi - 2021 - AI and Society 36 (1):59-77.
    In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds and Machines 28:689–707, 2018). There (...)
  • Society-in-the-loop: programming the algorithmic social contract. Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
    Recent rapid advances in Artificial Intelligence and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Recommender systems and their ethical challenges. Silvia Milano, Mariarosaria Taddeo & Luciano Floridi - 2020 - AI and Society (4):957-967.
    This article presents the first, systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
  • Ethical Implications and Accountability of Algorithms. Kirsten Martin - 2018 - Journal of Business Ethics 160 (4):835-850.
    Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing as well as determine which political ads and news articles consumers see. Yet, the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms (...)
  • Ethics and Epistemology in Big Data Research. Wendy Lipworth, Paul H. Mason, Ian Kerridge & John P. A. Ioannidis - 2017 - Journal of Bioethical Inquiry 14 (4):489-500.
    Biomedical innovation and translation are increasingly emphasizing research using “big data.” The hope is that big data methods will both speed up research and make its results more applicable to “real-world” patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Min Kyung Lee - 2018 - Big Data and Society 5 (1).
    Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker, and measured perceived fairness, trust, and emotional response. With the mechanical (...)
  • What an Algorithm Is. Robin K. Hill - 2016 - Philosophy and Technology 29 (1):35-59.
    The algorithm, a building block of computer science, is defined from an intuitive and pragmatic point of view, through a methodological lens of philosophy rather than that of formal computation. The treatment extracts properties of abstraction, control, structure, finiteness, effective mechanism, and imperativity, and intentional aspects of goal and preconditions. The focus on the algorithm as a robust conceptual object obviates issues of correctness and minimality. Neither the articulation of an algorithm nor the dynamic process constitutes the algorithm itself. Analysis (...)
  • On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • What the near future of artificial intelligence could be. Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather (...)
  • Translating principles into practices of digital ethics: five risks of being unethical. Luciano Floridi - 2019 - Philosophy and Technology 32 (2):185-193.
    Modern digital technologies—from web-based services to Artificial Intelligence (AI) solutions—increasingly affect the daily lives of billions of people. Such innovation brings huge opportunities, but also concerns about design, development, and deployment of digital technologies. This article identifies and discusses five clusters of risk in the international debate about digital ethics: ethics shopping; ethics bluewashing; ethics lobbying; ethics dumping; and ethics shirking.
  • Infraethics—on the conditions of possibility of morality. Luciano Floridi - 2017 - Philosophy and Technology 30 (4):391-394.
    Information and communication technologies (ICTs) place a crucial emphasis on accountability, intellectual property rights, neutrality, openness, privacy, transparency, and trust; they provide a platform or infrastructure of social norms and expectations. Developing the concept of infraethics, this paper argues that all societies need rules for effective co-ordination and collaboration of their infrastructures, and that their design and maintenance is one of the crucial challenges for our own world today.
  • How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • Distributed morality in an information society. Luciano Floridi - 2013 - Science and Engineering Ethics 19 (3):727-743.
    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Appraising Black-Boxed Technology: the Positive Prospects. E. S. Dahl - 2018 - Philosophy and Technology 31 (4):571-591.
    One staple of living in our information society is having access to the web. Web-connected devices interpret our queries and retrieve information from the web in response. Today’s web devices even purport to answer our queries directly without requiring us to comb through search results in order to find the information we want. How do we know whether a web device is trustworthy? One way to know is to learn why the device is trustworthy by inspecting its inner workings, 156–170 (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1).
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse. Alexander Buhmann, Johannes Paßmann & Christian Fieseler - 2020 - Journal of Business Ethics 163 (2):265-280.
    While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; that the way in which organizations practically engage with emergent (...)
  • Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such (...)
  • Toward human-centered algorithm design. Eric P. S. Baumer - 2017 - Big Data and Society 4 (2).
    As algorithms pervade numerous facets of daily life, they are incorporated into systems for increasingly diverse purposes. These systems’ results are often interpreted differently by the designers who created them than by the lay persons who interact with them. This paper offers a proposal for human-centered algorithm design, which incorporates human and social interpretations into the design process for algorithmically based systems. It articulates three specific strategies for doing so: theoretical, participatory, and speculative. Drawing on the author’s work designing and (...)
  • AI Assistants and the Paradox of Internal Automaticity. William A. Bauer & Veljko Dubljević - 2019 - Neuroethics 13 (3):303-310.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)
  • Beyond mystery: Putting algorithmic accountability in context. Andrea Ballestero, Baki Cakici & Elizabeth Reddy - 2019 - Big Data and Society 6 (1).
    Critical algorithm scholarship has demonstrated the difficulties of attributing accountability for the actions and effects of algorithmic systems. In this commentary, we argue that we cannot stop at denouncing the lack of accountability for algorithms and their effects but must engage the broader systems and distributed agencies that algorithmic systems exist within; including standards, regulations, technologies, and social relations. To this end, we explore accountability in “the Generated Detective,” an algorithmically generated comic. Taking up the mantle of detectives ourselves, we (...)
  • The Value of Privacy. Beate Roessler - 2004 - Polity.
    This new book by Beate Rossler is a work of real quality and originality on an extremely topical issue: the issue of privacy and the relations between the private and the public. Rossler investigates the reasons why we value privacy and why we ought to value it. In the context of modern, liberal societies, Rossler develops a theory of the private which links privacy and autonomy in a constitutive way: privacy is a necessary condition to lead an autonomous life. The (...)
  • Unpopular Privacy: What Must We Hide? Anita Allen - 2011 - New York, US: OUP USA.
    Can the government stick us with privacy we don't want? It can, it does, and according to this author, may need to do more of it. Privacy is a foundational good, she argues, a necessary tool in the liberty-lover's kit for a successful life. A nation committed to personal freedom must be prepared to mandate inalienable, liberty-promoting privacies for its people, whether they eagerly embrace them or not. The eight chapters of this book are reflections on public regulation of privacy (...)
  • Do artifacts have politics? Langdon Winner - 1980 - Daedalus 109 (1):121-136.
    In controversies about technology and society, there is no idea more provocative than the notion that technical things have political qualities. At issue is the claim that the machines, structures, and systems of modern material culture can be accurately judged not only for their contributions of efficiency and productivity, not merely for their positive and negative environmental side effects, but also for the ways in which they can embody specific forms of power and authority. Since ideas of this kind (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Luciano Floridi - 2016 - Philosophical Transactions of the Royal Society A 374:20160112.
    The concept of distributed moral responsibility (DMR) has a long history. When it is understood as being entirely reducible to the sum of (some) human, individual and already morally loaded actions, then the allocation of DMR, and hence of praise and reward or blame and punishment, may be pragmatically difficult, but not conceptually problematic. However, in distributed environments, it is increasingly possible that a network of agents, some human, some artificial (e.g. a program) and some hybrid (e.g. a group of (...)
  • What is data ethics? Luciano Floridi & Mariarosaria Taddeo - 2016 - Philosophical Transactions of the Royal Society A 374 (2083).
    This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information (...)
  • A definition, benchmark and database of AI for social good initiatives. Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Nature Machine Intelligence 3:111-115.
    Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. (...)
  • How AI can be a force for good. Mariarosaria Taddeo & Luciano Floridi - 2018 - Science Magazine 361 (6404):751-752.
    This article argues that an ethical framework will help to harness the potential of AI while keeping humans in control.
  • Introduction to the Philosophy of Science: Cutting Nature at Its Seams. Robert Klee - 1997 - Behavior and Philosophy 25 (1):77-80.
  • Why privacy is important. James Rachels - 1975 - Philosophy and Public Affairs 4 (4):323-333.
  • The Human Use of Human Beings. Norbert Wiener - 1952 - British Journal for the Philosophy of Science 3 (9):91-92.
  • Introduction to The Philosophy of Science: Cutting Nature at Its Seams. Robert Klee - 2001 - Mind 110 (437):215-218.