Results for 'Algorithmic Decision-Making'

997 found
  1. Algorithmic decision-making: the right to explanation and the significance of stakes. Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - forthcoming - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what (...)
  2. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    6 citations
  3. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report (...)
    26 citations
  4. The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are (...)
    1 citation
  5. Authenticity in algorithm-aided decision-making. Brett Karlan - forthcoming - Synthese.
    I identify an undertheorized problem with decisions we make with the aid of algorithms: the problem of inauthenticity. When we make decisions with the aid of algorithms, we can make ones that go against our commitments and values in a normatively important way. In this paper, I present a framework for algorithm-aided decision-making that can lead to inauthenticity. I then construct a taxonomy of the features of the decision environment that make such outcomes likely, and I discuss (...)
  6. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters. Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
  7. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.
    4 citations
  8. Iudicium ex Machinae – The Ethical Challenges of Automated Decision-Making in Criminal Sentencing. Frej Thomsen - 2022 - In Julian Roberts & Jesper Ryberg (eds.), Principled Sentencing and Artificial Intelligence. Oxford University Press.
    Automated decision making for sentencing is the use of a software algorithm to analyse a convicted offender’s case and deliver a sentence. This chapter reviews the moral arguments for and against employing automated decision making for sentencing and finds that its use is in principle morally permissible. Specifically, it argues that well-designed automated decision making for sentencing will better approximate the just sentence than human sentencers. Moreover, it dismisses common concerns about transparency, privacy and (...)
    1 citation
  9. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions (...)
    3 citations
  10. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - International Data Privacy Law 1 (2):76-99.
    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In (...)
    63 citations
  11. From the Eyeball Test to the Algorithm — Quality of Life, Disability Status, and Clinical Decision Making in Surgery. Charles Binkley, Joel Michael Reynolds & Andrew Shuman - 2022 - New England Journal of Medicine 387 (14):1325-1328.
    Qualitative evidence concerning the relationship between QoL and a wide range of disabilities suggests that subjective judgments regarding other people’s QoL are wrong more often than not and that such judgments by medical practitioners in particular can be biased. Guided by their desire to do good and avoid harm, surgeons often rely on "the eyeball test" to decide whether a patient will or will not benefit from surgery. But the eyeball test can easily harbor a range of implicit judgments and (...)
  12. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems. Atoosa Kasirzadeh & Colin Klein - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not (...)
  13. Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations; with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. (...)
    27 citations
  14. Decision Time: Normative Dimensions of Algorithmic Speed. Daniel Susser - forthcoming - ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).
    Existing discussions about automated decision-making focus primarily on its inputs and outputs, raising questions about data collection and privacy on one hand and accuracy and fairness on the other. Less attention has been devoted to critically examining the temporality of decision-making processes—the speed at which automated decisions are reached. In this paper, I identify four dimensions of algorithmic speed that merit closer analysis. Duration (how much time it takes to reach a judgment), timing (when automated (...)
  15. The philosophical basis of algorithmic recourse. Suresh Venkatasubramanian & Mark Alfano - forthcoming - Fairness, Accountability, and Transparency Conference 2020.
    Philosophers have established that certain ethically important values are modally robust in the sense that they systematically deliver correlative benefits across a range of counterfactual scenarios. In this paper, we contend that recourse – the systematic process of reversing unfavorable decisions by algorithms and bureaucracies across a range of counterfactual scenarios – is such a modally robust good. In particular, we argue that two essential components of a good life – temporally extended agency and trust – are under- (...)
    6 citations
  16. Inscrutable Processes: Algorithms, Agency, and Divisions of Deliberative Labour. Marinus Ferreira - 2021 - Journal of Applied Philosophy 38 (4):646-661.
    As the use of algorithmic decisionmaking becomes more commonplace, so too does the worry that these algorithms are often inscrutable and our use of them is a threat to our agency. Since we do not understand why an inscrutable process recommends one option over another, we lose our ability to judge whether the guidance is appropriate and are vulnerable to being led astray. In response, I claim that a process being inscrutable does not automatically make its guidance (...)
    2 citations
  17. Algorithms, Agency, and Respect for Persons. Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key (...)
    7 citations
  18. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    10 citations
  19. Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs. Michelle Seng Ah Lee & Luciano Floridi - 2020 - Minds and Machines 31 (1):165-191.
    To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness—not as a binary, absolute mathematical condition—but rather, as a relational notion in comparison to alternative decisionmaking processes. Using US mortgage lending as (...)
    7 citations
  20. Are Algorithms Value-Free? Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
    As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from (...)
    2 citations
  21. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic (...)
    2 citations
  22. The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
    209 citations
  23. Algorithmic Colonization of Love. Hao Wang - 2023 - Techné: Research in Philosophy and Technology 27 (2):260-280.
    Love is often seen as the most intimate aspect of our lives, but it is increasingly engineered by a few programmers with Artificial Intelligence (AI). Nowadays, numerous dating platforms are deploying so-called smart algorithms to identify a greater number of potential matches for a user. These AI-enabled matchmaking systems, driven by a rich trove of data, can not only predict what a user might prefer but also deeply shape how people choose their partners. This paper draws on Jürgen Habermas’s “colonization (...)
  24. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted (...)
  25. Algorithms and Posthuman Governance. James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, the justice system, (...)
    1 citation
  26. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    13 citations
  27. Neutrosophic Association Rule Mining Algorithm for Big Data Analysis. Mohamed Abdel-Basset, Mai Mohamed, Florentin Smarandache & Victor Chang - 2018 - Symmetry 10 (4):1-19.
    Big Data is a large-sized and complex dataset, which cannot be managed using traditional data processing tools. The mining process of big data is the ability to extract valuable information from these large datasets. Association rule mining is a type of data mining process, which is intended to determine interesting associations between items and to establish a set of association rules whose support is greater than a specific threshold. The classical association rules can only be extracted from binary data where an (...)
    5 citations
  28. Exploring moral algorithm preferences in autonomous vehicle dilemmas: an empirical study. Tingting Sui - 2023 - Frontiers in Psychology 14:1-12.
    Introduction: This study delves into the ethical dimensions surrounding autonomous vehicles (AVs), with a specific focus on decision-making algorithms. Termed the “Trolley problem,” an ethical quandary arises, necessitating the formulation of moral algorithms grounded in ethical principles. To address this issue, an online survey was conducted with 460 participants in China, comprising 237 females and 223 males, spanning ages 18 to 70. Methods: Adapted from Joshua Greene’s trolley dilemma survey, our study employed Yes/No options to probe participants’ (...)
  29. On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
  30. Algorithmic Indirect Discrimination, Fairness, and Harm. Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It (...)
  31. Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to (...)
  32. Neutrosophic Treatment of the Modified Simplex Algorithm to find the Optimal Solution for Linear Models. Maissam Jdid & Florentin Smarandache - 2023 - International Journal of Neutrosophic Science 23.
    Science is the basis for managing the affairs of life and human activities, and living without knowledge is a form of wandering and a kind of loss. Using scientific methods helps us understand the foundations of choice, decision-making, and adopting the right solutions when solutions abound and options are numerous. Operational research is considered the best that scientific development has provided because its methods depend on the application of scientific methods in solving complex issues and the optimal use (...)
  33. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
  34. Surrogate Perspectives on a Patient Preference Predictor: Good Idea, But I Should Decide How It Is Used. Dana Howard - 2022 - AJOB Empirical Bioethics 13 (2):125-135.
    Background: Current practice frequently fails to provide care consistent with the preferences of decisionally-incapacitated patients. It also imposes significant emotional burden on their surrogates. Algorithmic-based patient preference predictors (PPPs) have been proposed as a possible way to address these two concerns. While previous research found that patients strongly support the use of PPPs, the views of surrogates are unknown. The present study thus assessed the views of experienced surrogates regarding the possible use of PPPs as a means to help (...)
    2 citations
  35. Neutrosophic speech recognition Algorithm for speech under stress by Machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought on by stress. Real-world complexity and ambiguity make (...)
  36. AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive (...)
    3 citations
  37. Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4):1017-1041.
    When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in (...)
    13 citations
  38. How virtue signalling makes us better: moral preferences with respect to autonomous vehicle type choices. Robin Kopecky, Michaela Jirout Košová, Daniel D. Novotný, Jaroslav Flegr & David Černý - 2023 - AI and Society 38 (2):937-946.
    One of the moral questions concerning autonomous vehicles (henceforth AVs) is the choice between types that differ in their built-in algorithms for dealing with rare situations of unavoidable lethal collision. It does not appear to be possible to avoid questions about how these algorithms should be designed. We present the results of our study of moral preferences (N = 2769) with respect to three types of AVs: (1) selfish, which protects the lives of passenger(s) over any number of bystanders; (2) (...)
  39. The Threat of Algocracy: Reality, Resistance and Accommodation. John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic (...)
    53 citations
  40. Principles of Information Processing and Natural Learning in Biological Systems. Predrag Slijepcevic - 2021 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 52 (2):227-245.
    The key assumption behind evolutionary epistemology is that animals are active learners or ‘knowers’. In the present study, I updated the concept of natural learning, developed by Henry Plotkin and John Odling-Smee, by expanding it from the animal-only territory to the biosphere-as-a-whole territory. In the new interpretation of natural learning the concept of biological information, guided by Peter Corning’s concept of “control information”, becomes the ‘glue’ holding the organism–environment interactions together. The control information guides biological systems, from bacteria to ecosystems, (...)
    5 citations
  41. Dio, l'evento e l'algoritmo: il tradimento di Leibniz nell'ontologia digitale e l'etica dell'istante [God, the Event, and the Algorithm: The Betrayal of Leibniz in Digital Ontology and the Ethics of the Instant]. Giuseppe De Ruvo - 2022 - Segni e Comprensione 36 (103):81-112.
    This article shows how the so-called digital ontology betrays the metaphysical-theological thought of Leibniz (of which it claims to be heir), giving rise to an apparent “algorithmic providence” which, however, confines subjects within algorithmic types, making the occurrence of the event and of the new impossible. If digital ontology sees in Leibniz a thinker from whom to interpret being on the basis of algorithms, this article – by reconstructing Leibniz’s thought – aims to show not only how (...)
  42. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the (...)
    6 citations
  43. Classification of Real and Fake Human Faces Using Deep Learning. Fatima Maher Salman & Samy S. Abu-Naser - 2022 - International Journal of Academic Engineering Research (IJAER) 6 (3):1-14.
    Artificial intelligence (AI), deep learning, machine learning and neural networks represent extremely exciting and powerful machine learning-based techniques used to solve many real-world problems. Artificial intelligence is the branch of computer sciences that emphasizes the development of intelligent machines, thinking and working like humans. For example, recognition, problem-solving, learning, visual perception, decision-making and planning. Deep learning is a subset of machine learning in artificial intelligence that has networks capable of learning unsupervised from data that is unstructured or unlabeled. (...)
    26 citations
  44. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the (...)
    7 citations
  45. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
    10 citations
  46. Why Decision-making Capacity Matters. Ben Schwan - 2021 - Journal of Moral Philosophy 19 (5):447-473.
    Decision-making Capacity matters to whether a patient’s decision should determine her treatment. But why it matters in this way isn’t clear. The standard story is that DMC matters because autonomy matters. And this is thought to justify DMC as a gatekeeper for autonomy – whereby autonomy concerns arise if but only if a patient has DMC. But appeals to autonomy invoke two distinct concerns: concern for authenticity – concern that a choice is consistent with an individual’s commitments; (...)
    6 citations
  47. Diagnosis of Pneumonia Using Deep Learning. Alaa M. A. Barhoom & Samy S. Abu-Naser - 2022 - International Journal of Academic Engineering Research (IJAER) 6 (2):48-68.
    Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines or software that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem solving. Deep learning is a collection of algorithms used in machine learning. It is part of a broad family of methods used for machine learning that are based on learning representations of data. Deep learning is a technique used (...)
    6 citations
  48. Algebraic structures of neutrosophic triplets, neutrosophic duplets, or neutrosophic multisets. Volume II. Florentin Smarandache, Xiaohong Zhang & Mumtaz Ali - 2019 - Basel, Switzerland: MDPI.
    The topics approached in this collection of papers are: neutrosophic sets; neutrosophic logic; generalized neutrosophic set; neutrosophic rough set; multigranulation neutrosophic rough set (MNRS); neutrosophic cubic sets; triangular fuzzy neutrosophic sets (TFNSs); probabilistic single-valued (interval) neutrosophic hesitant fuzzy set; neutro-homomorphism; neutrosophic computation; quantum computation; neutrosophic association rule; data mining; big data; oracle Turing machines; recursive enumerability; oracle computation; interval number; dependent degree; possibility degree; power aggregation operators; multi-criteria group decision-making (MCGDM); expert set; soft sets; LA-semihypergroups; single valued trapezoidal (...)
  49. Ditching Decision-Making Capacity. Daniel Fogal & Ben Schwan - forthcoming - Journal of Medical Ethics.
    Decision-making capacity (DMC) plays an important role in clinical practice—determining, on the basis of a patient’s decisional abilities, whether they are entitled to make their own medical decisions or whether a surrogate must be secured to participate in decisions on their behalf. As a result, it’s critical that we get things right—that our conceptual framework be well-suited to the task of helping practitioners systematically sort through the relevant ethical considerations in a way that reliably and transparently delivers correct (...)
    1 citation
  50. A Beginner’s Guide to Crossing the Road: Towards an Epistemology of Successful Action in Complex Systems. Ragnar van Der Merwe & Alex Broadbent - forthcoming - Interdisciplinary Science Reviews.
    Crossing the road within the traffic system is an example of an action human agents perform successfully day-to-day in complex systems. How do they perform such successful actions given that the behaviour of complex systems is often difficult to predict? The contemporary literature contains two contrasting approaches to the epistemology of complex systems: an analytic and a post-modern approach. We argue that neither approach adequately accounts for how successful action is possible in complex systems. Agents regularly perform successful actions without (...)
1 — 50 / 997
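
Several of the fairness entries above (for example 18, 19, and 26) refer to metrics that quantify deviations from statistical parity without spelling one out. The sketch below illustrates one such metric, the statistical (demographic) parity difference; the decision vector, group labels, and function name are invented for illustration and are not taken from any of the listed papers.

    # Minimal sketch of the statistical (demographic) parity difference.
    def statistical_parity_difference(decisions, groups, group_a, group_b):
        """Difference in favourable-outcome rates between two groups (1 = favourable, 0 = unfavourable)."""
        def rate(group):
            members = [d for d, g in zip(decisions, groups) if g == group]
            return sum(members) / len(members)
        return rate(group_a) - rate(group_b)

    decisions = [1, 0, 1, 1, 0, 1, 0, 0]                # hypothetical loan decisions (1 = approved)
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # hypothetical group membership

    print(statistical_parity_difference(decisions, groups, "a", "b"))  # 0.75 - 0.25 = 0.50

A gap of zero would indicate equal approval rates across the two groups; the papers listed above discuss why this condition can conflict with other fairness notions.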
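
Entry 27 summarises association rule mining as retaining rules whose support exceeds a chosen threshold. The following minimal sketch illustrates that idea; the toy transactions, the MIN_SUPPORT value, and the helper functions are illustrative assumptions, not drawn from the paper itself.

    from itertools import combinations

    # Toy binary transaction data (invented for illustration).
    transactions = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
        {"milk"},
    ]

    def support(itemset, data):
        # Fraction of transactions that contain every item in the itemset.
        return sum(itemset <= t for t in data) / len(data)

    def confidence(antecedent, consequent, data):
        # Support of the whole rule divided by support of its antecedent.
        return support(antecedent | consequent, data) / support(antecedent, data)

    MIN_SUPPORT = 0.5  # assumed threshold for this sketch

    items = sorted(set().union(*transactions))
    for a, b in combinations(items, 2):
        s = support({a, b}, transactions)
        if s >= MIN_SUPPORT:  # keep only rules whose support clears the threshold
            c = confidence({a}, {b}, transactions)
            print(f"{a} -> {b}: support={s:.2f}, confidence={c:.2f}")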