References

  • Digital hyperconnectivity and the self.Rogers Brubaker - 2020 - Theory and Society 49 (5):771-801.
    Digital hyperconnectivity is a defining fact of our time. In addition to recasting social interaction, culture, economics, and politics, it has profoundly transformed the self. It has created new ways of being and constructing a self, but also new ways of being constructed as a self from the outside, new ways of being configured, represented, and governed as a self by sociotechnical systems. Rather than analyze theories of the self, I focus on practices of the self, using this expression in (...)
  • Artificial Intelligence and Medical Humanities.Kirsten Ostherr - 2022 - Journal of Medical Humanities 43 (2):211-232.
    The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have particular significance (...)
  • A taxonomy of human–machine collaboration: capturing automation and technical autonomy.Monika Simmler & Ruth Frischknecht - 2021 - AI and Society 36 (1):239-250.
    Due to the ongoing advancements in technology, socio-technical collaboration has become increasingly prevalent. This poses challenges in terms of governance and accountability, as well as issues in various other fields. Therefore, it is crucial to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that enables identification of the very nature of human–machine interaction. A literature review has revealed that automation and technical autonomy are main parameters for describing and understanding such interaction. Both aspects (...)
  • Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Ethical concerns with the use of intelligent assistive technology: findings from a qualitative study with professional stakeholders.Tenzin Wangmo, Mirjam Lipps, Reto W. Kressig & Marcello Ienca - 2019 - BMC Medical Ethics 20 (1):1-11.
    Background: Advances in artificial intelligence, robotics and wearable computing are creating novel technological opportunities for mitigating the global burden of population ageing and improving the quality of care for older adults with dementia and/or age-related disability. Intelligent assistive technology is the umbrella term defining this ever-evolving spectrum of intelligent applications for the older and disabled population. However, the implementation of IATs has been observed to be sub-optimal due to a number of barriers in the translation of novel applications from the (...)
  • Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work?Jilles Smids, Sven Nyholm & Hannah Berkers - 2020 - Philosophy and Technology 33 (3):503-522.
    The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Agency Laundering and Information Technologies.Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance.Pascal D. König - 2020 - Philosophy and Technology 33 (3):467-485.
    A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, (...)
  • Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques and big data, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
  • 15 challenges for AI: or what AI (currently) can’t do.Thilo Hagendorff & Katharina Wezel - 2020 - AI and Society 35 (2):355-365.
    The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of applications (...)
  • Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good (...)
  • Algorithmic paranoia: the temporal governmentality of predictive policing.Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...)
  • Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  • Ethical Implications and Accountability of Algorithms.Kirsten Martin - 2018 - Journal of Business Ethics 160 (4):835-850.
    Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing as well as determine which political ads and news articles consumers see. Yet, the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms (...)
  • Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  • An Analysis of the Impact of Brain-Computer Interfaces on Autonomy.Orsolya Friedrich, Eric Racine, Steffen Steinert, Johannes Pömsl & Ralf J. Jox - 2018 - Neuroethics 14 (1):17-29.
    Research conducted on Brain-Computer Interfaces has grown considerably during the last decades. With the help of BCIs, users can gain a wide range of functions. Our aim in this paper is to analyze the impact of BCIs on autonomy. To this end, we introduce three abilities that most accounts of autonomy take to be essential: the ability to use information and knowledge to produce reasons; the ability to ensure that intended actions are effectively realized; and the ability to enact (...)
  • Big Data for Biomedical Research and Personalised Medicine: an Epistemological and Ethical Cross-Analysis.Thierry Magnin & Mathieu Guillermin - 2017 - Human and Social Studies. Research and Practice 6 (3):13-36.
    Big data techniques, data-driven science and their technological applications raise many serious ethical questions, notably about privacy protection. In this paper, we highlight an entanglement between epistemology and ethics of big data. Discussing the mobilisation of big data in the fields of biomedical research and health care, we show how an overestimation of big data epistemic power – of their objectivity or rationality understood through the lens of neutrality – can become ethically threatening. Highlighting the irreducible non-neutrality at play in (...)
  • Ethics of the health-related internet of things: a narrative review.Brent Mittelstadt - 2017 - Ethics and Information Technology 19 (3):1-19.
    The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patient’s functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many (...)
  • Philosophy of technology.Maarten Franssen - 2010 - Stanford Encyclopedia of Philosophy.
  • Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is being undermined. However, there are alternatives to prevent the negative impact of (...)
  • On the Philosophy of Unsupervised Learning.David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms.Sábëlo Mhlambi & Simona Tiribelli - 2023 - Topoi 42 (3):867-880.
    Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary in order to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from Western traditional philosophy. In particular, we (...)
  • Toward children-centric AI: a case for a growth model in children-AI interactions.Karolina La Fors - forthcoming - AI and Society:1-13.
    This article advocates for a hermeneutic model for children-AI interactions in which the desirable purpose of children’s interaction with artificial intelligence systems is children's growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children. They can learn from an encounter with children and incorporate data from interaction, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for distinguishing the interpretation of bias within (...)
  • AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies.Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into operational AI reality with the involvement of social science specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the (...)
  • Promises and Pitfalls of Algorithm Use by State Authorities.Maryam Amir Haeri, Kathrin Hartmann, Jürgen Sirsch, Georg Wenzelburger & Katharina A. Zweig - 2022 - Philosophy and Technology 35 (2):1-31.
    Algorithmic systems are increasingly used by state agencies to inform decisions about humans. They produce scores on risks of recidivism in criminal justice, indicate the probability for a job seeker to find a job in the labor market, or calculate whether an applicant should get access to a certain university program. In this contribution, we take an interdisciplinary perspective, provide a bird’s eye view of the different key decisions that are to be taken when state actors decide to use an (...)
  • A Neo-Republican Critique of AI ethics.Jonne Maas - 2022 - Journal of Responsible Technology 9 (C):100022.
  • Challenges in enabling user control over algorithm-based services.Pascal D. König - 2024 - AI and Society 39 (1):195-205.
    Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over an algorithmic (...)
  • Epistemic injustice and data science technologies.John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Modeling Ethics: Approaches to Data Creep in Higher Education.Madisson Whitman - 2021 - Science and Engineering Ethics 27 (6):1-18.
    Though rapid collection of big data is ubiquitous across domains, from industry settings to academic contexts, the ethics of big data collection and research are contested. A nexus of data ethics issues is the concept of creep, or repurposing of data for other applications or research beyond the conditions of original collection. Data creep has proven controversial and has prompted concerns about the scope of ethical oversight. Institutional review boards offer little guidance regarding big data, and problematic research can still (...)
  • Evaluating the prospects for university-based ethical governance in artificial intelligence and data-driven innovation.Christine Hine - 2021 - Research Ethics 17 (4):464-479.
    There has been considerable debate around the ethical issues raised by data-driven technologies such as artificial intelligence. Ethical principles for the field have focused on the need to ensure...
  • Unprepared humanities: A pedagogy (forced) online.Houman Harouni - 2021 - Journal of Philosophy of Education 55 (4-5):633-648.
  • The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems.Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  • Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that it mimics aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI, with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Care ethics and the responsible management of power and privacy in digitally enhanced disaster response.Paul Hayes & Damian Jackson - 2020 - Journal of Information, Communication and Ethics in Society 18 (1):157-174.
    Purpose: This paper aims to argue that traditional ethical theories used in disaster response may be inadequate and particularly strained by the emergence of new technologies and social media, particularly with regard to privacy. The paper suggests incorporation of care ethics into the disaster ethics nexus to better include the perspectives of disaster affected communities. Design/methodology/approach: This paper presents a theoretical examination of privacy and care ethics in the context of social media/digitally enhanced disaster response. Findings: The paper proposes an ethics of care can fruitfully (...)
  • Examination and diagnosis of electronic patient records and their associated ethics: a scoping literature review.Tim Jacquemard, Colin P. Doherty & Mary B. Fitzsimons - 2020 - BMC Medical Ethics 21 (1):1-13.
    Background: Electronic patient record (EPR) technology is a key enabler for improvements to healthcare service and management. To ensure these improvements and the means to achieve them are socially and ethically desirable, careful consideration of the ethical implications of EPRs is indicated. The purpose of this scoping review was to map the literature related to the ethics of EPR technology. The literature review was conducted to catalogue the prevalent ethical terms, to describe the associated ethical challenges and opportunities, and to identify (...)
  • Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  • Recognizing Argument Types and Adding Missing Reasons.Christoph Lumer - 2019 - In Bart J. Garssen, David Godden, Gordon Mitchell & Jean Wagemans (eds.), Proceedings of the Ninth Conference of the International Society for the Study of Argumentation (ISSA). [Amsterdam, July 3-6, 2018.]. Sic Sat. pp. 769-777.
    The article develops and justifies, on the basis of the epistemological argumentation theory, two central pieces of the theory of evaluative argumentation interpretation: 1. criteria for recognizing argument types and 2. rules for adding reasons to create ideal arguments. Ad 1: The criteria for identifying argument types are a selection of essential elements from the definitions of the respective argument types. Ad 2: After presenting the general principles for adding reasons (benevolence, authenticity, immanence, optimization), heuristics are proposed for finding missing (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning.Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • The Epistemology of Non-distributive Profiles.Patrick Allo - 2020 - Philosophy and Technology 33 (3):379-409.
    The distinction between distributive and non-distributive profiles figures prominently in current evaluations of the ethical and epistemological risks that are associated with automated profiling practices. The diagnosis that non-distributive profiles may coincidentally situate an individual in the wrong category is often perceived as the central shortcoming of such profiles. According to this diagnosis, most risks can be retraced to the use of non-universal generalisations and various other statistical associations. This article develops a top-down analysis of non-distributive profiles in which this (...)
  • Machine Decisions and Human Consequences.Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but (...)
  • Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse.Alexander Buhmann, Johannes Paßmann & Christian Fieseler - 2020 - Journal of Business Ethics 163 (2):265-280.
    While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; that the way in which organizations practically engage with emergent (...)
  • Agency Laundering and Algorithmic Decision Systems.Alan Rubel, Adam Pham & Clinton Castro - 2019 - In N. Taylor, C. Christian-Lamb, M. Martin & B. Nardi (eds.), Information in Contemporary Society (Lecture Notes in Computer Science). Springer Nature. pp. 590-598.
    This paper has two aims. The first is to explain a type of wrong that arises when agents obscure responsibility for their actions. Call it “agency laundering.” The second is to use the concept of agency laundering to understand the underlying moral issues in a number of recent cases involving algorithmic decision systems. From the Proceedings of the 14th International Conference, iConference 2019, Washington D.C., March 31-April 3, 2019.
  • What has the Trolley Dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars.Andreas Wolkenstein - 2018 - Ethics and Information Technology 20 (3):163-173.
    Self-driving cars currently face a lot of technological problems that need to be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the (...)