Results for 'Machine Trustworthiness'

1000+ found
  1. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  2. Two challenges for CI trustworthiness and how to address them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  3. Establishing the rules for building trustworthy AI.Luciano Floridi - 2019 - Nature Machine Intelligence 1:261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    20 citations
  4. Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    18 citations
  5. Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    16 citations
  6. The nonhuman condition: Radical democracy through new materialist lenses.Hans Asenbaum, Amanda Machin, Jean-Paul Gagnon, Diana Leong, Melissa Orlie & James Louis Smith - 2023 - Contemporary Political Theory (Online first):584-615.
    Radical democratic thinking is becoming intrigued by the material situatedness of its political agents and by the role of nonhuman participants in political interaction. At stake here is the displacement of narrow anthropocentrism that currently guides democratic theory and practice, and its repositioning into what we call ‘the nonhuman condition’. This Critical Exchange explores the nonhuman condition. It asks: What are the implications of decentering the human subject via a new materialist reading of radical democracy? Does this reading dilute political (...)
    1 citation
  7. Trusting the (ro)botic other: By assumption?Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; (...)
  8. Hedonistic Act Utilitarianism: Action Guidance and Moral intuitions.Simon Rosenqvist - 2020 - Dissertation, Uppsala University
    According to hedonistic act utilitarianism, an act is morally right if and only if, and because, it produces at least as much pleasure minus pain as any alternative act available to the agent. This dissertation gives a partial defense of utilitarianism against two types of objections: action guidance objections and intuitive objections. In Chapter 1, the main themes of the dissertation are introduced. The chapter also examines questions of how to understand utilitarianism, including (a) how to best formulate the moral (...)
    1 citation
  9. Evaluation and Design of Generalist Systems (EDGeS).John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with (...)
  10. Mad Speculation and Absolute Inhumanism: Lovecraft, Ligotti, and the Weirding of Philosophy.Ben Woodard - 2011 - Continent 1 (1):3-13.
    continent. 1.1 : 3-13. – Introduction I want to propose, as a trajectory into the philosophically weird, an absurd theoretical claim and pursue it, or perhaps more accurately, construct it as I point to it, collecting the ground work behind me like the Perpetual Train from China Miéville's Iron Council which puts down track as it moves, reclaiming it along the way. The strange trajectory is the following: Kant's critical philosophy and much of continental philosophy which has followed, (...)
    4 citations
  11. “Who Should I Trust with My Data?” Ethical and Legal Challenges for Innovation in New Decentralized Data Management Technologies.Haleh Asgarinia, Andrés Chomczyk Penedo, Beatriz Esteves & Dave Lewis - 2023 - Information (Switzerland) 14 (7):1-17.
    News about personal data breaches or abusive data practices, such as Cambridge Analytica, has called into question the trustworthiness of certain actors in the control of personal data. Innovations in the field of personal information management systems to address this issue have regained traction in recent years, also coinciding with the emergence of new decentralized technologies. However, only with ethically and legally responsible developments will the mistakes of the past be avoided. This contribution explores how current data management schemes are insufficient (...)
    1 citation
  12. Gründe geben. Maschinelles Lernen als Problem der Moralfähigkeit von Entscheidungen.Andreas Kaminski, Michael Nerurkar, Christian Wadephul & Klaus Wiegerling - 2020 - In Klaus Wiegerling, Michael Nerurkar & Christian Wadephul (eds.), Ethische Herausforderungen von Big-Data. Bielefeld: Transcript. pp. 151-174.
    Decisions refer, in a conceptual sense, to reasons. Decision systems offer probabilistic reliability as the justification for their recommendations. Yet reasons of reliability may not be appropriate reasons in every situation. This opens up the idea of distinguishing the quality of reasons from their appropriateness. Using the example of an AI lie detector, the essay considers whether a high degree of reliability (which is, at least at present, not actually achieved) could justify deployment. Would such a system not resemble a judge who passed verdicts on the basis of a statistic?
  13. Trustworthiness and truth: The epistemic pitfalls of internet accountability.Karen Frost-Arnold - 2014 - Episteme 11 (1):63-81.
    Since anonymous agents can spread misinformation with impunity, many people advocate for greater accountability for internet speech. This paper provides a veritistic argument that accountability mechanisms can cause significant epistemic problems for internet encyclopedias and social media communities. I show that accountability mechanisms can undermine both the dissemination of true beliefs and the detection of error. Drawing on social psychology and behavioral economics, I suggest alternative mechanisms for increasing the trustworthiness of internet communication.
    12 citations
  14. Speaker trustworthiness: Shall confidence match evidence?Mélinda Pozzi & Diana Mazzarella - 2024 - Philosophical Psychology 37 (1):102-125.
    Overconfidence is typically damaging to one’s reputation as a trustworthy source of information. Previous research shows that the reputational cost associated with conveying a piece of false information is higher for confident than unconfident speakers. When judging speaker trustworthiness, individuals do not exclusively rely on past accuracy but consider the extent to which speakers expressed a degree of confidence that matched the accuracy of their claims (their “confidence-accuracy calibration”). The present study experimentally examines the interplay between confidence, accuracy and (...)
    1 citation
  15. Restoring trustworthiness in the financial system: Norms, behaviour and governance.Aisling Crean, Natalie Gold, David Vines & Annie Williamson - 2018 - Journal of the British Academy 6 (S1):131-155.
    We examine how trustworthy behaviour can be achieved in the financial sector. The task is to ensure that firms are motivated to pursue the long-term interests of customers rather than pursuing short-term profits. Firms’ self-interested pursuit of reputation, combined with regulation, is often not sufficient to ensure that this happens. We argue that trustworthy behaviour requires that at least some actors show a concern for the wellbeing of clients, or a respect for imposed standards, and that the behaviour of these (...)
  16. Trust, Trustworthiness, and the Moral Consequence of Consistency.Jason D'Cruz - 2015 - Journal of the American Philosophical Association 1 (3):467-484.
    Situationists such as John Doris, Gilbert Harman, and Maria Merritt suppose that appeal to reliable behavioral dispositions can be dispensed with without radical revision to morality as we know it. This paper challenges this supposition, arguing that abandoning hope in reliable dispositions rules out genuine trust and forces us to suspend core reactive attitudes of gratitude and resentment, esteem and indignation. By examining situationism through the lens of trust we learn something about situationism (in particular, the radically revisionary moral implications (...)
    3 citations
  17. Trust, trustworthiness, and obligation.Mona Simion & Christopher Willard-Kyle - 2024 - Philosophical Psychology 37 (1):87-101.
    Where does entitlement to trust come from? When we trust someone to φ, do we need to have reason to trust them to φ or do we start out entitled to trust them to φ by default? Reductivists think that entitlement to trust always “reduces to” or is explained by the reasons that agents have to trust others. In contrast, anti-reductivists think that, in a broad range of circumstances, we just have entitlement to trust, even if we don’t have positive (...)
  18. The trustworthiness of AI: Comments on Simion and Kelp’s account.Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  19. Trust and Trustworthiness.J. Adam Carter - 2022 - Philosophy and Phenomenological Research (2):377-394.
    A widespread assumption in debates about trust and trustworthiness is that the evaluative norms of principal interest on the trustor’s side of a cooperative exchange regulate trusting attitudes and performances whereas those on the trustee’s side regulate dispositions to respond to trust. The aim here will be to highlight some unnoticed problems with this asymmetrical picture – and in particular, how it elides certain key evaluative norms on both the trustor’s and trustee’s side, the satisfaction of which is critical (...)
    6 citations
  20. Trustworthy Science Advice: The Case of Policy Recommendations.Torbjørn Gundersen - 2023 - Res Publica 30 (Online):1-19.
    This paper examines how science advice can provide policy recommendations in a trustworthy manner. Despite their major political importance, expert recommendations are understudied in the philosophy of science and social epistemology. Matthew Bennett has recently developed a notion of what he calls recommendation trust, according to which well-placed trust in experts’ policy recommendations requires that recommendations are aligned with the interests of the trust-giver. While interest alignment might be central to some cases of public trust, this paper argues against the (...)
  21. Organisms ≠ Machines.Daniel J. Nicholson - 2013 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44 (4):669-678.
    The machine conception of the organism (MCO) is one of the most pervasive notions in modern biology. However, it has not yet received much attention by philosophers of biology. The MCO has its origins in Cartesian natural philosophy, and it is based on the metaphorical redescription of the organism as a machine. In this paper I argue that although organisms and machines resemble each other in some basic respects, they are actually very different kinds of systems. I submit (...)
    47 citations
  22. Just Machines.Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to (...)
    5 citations
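    The abstract above contrasts two candidate fairness measures: classification parity (parity in accuracy across groups defined by protected attributes) and calibration (similar predictions having similar meanings across those groups). A minimal sketch of how each can be computed is given below; it is not code from the paper, and the array names, binning choice, and toy data are assumptions.

        import numpy as np

        def classification_parity(y_true, y_pred, group):
            """Accuracy computed separately for each protected group."""
            return {g: float(np.mean(y_pred[group == g] == y_true[group == g]))
                    for g in np.unique(group)}

        def calibration_by_group(y_true, scores, group, bins=5):
            """Observed positive rate per score bin, per group. Calibration holds
            when similar scores correspond to similar outcome rates across groups."""
            edges = np.linspace(0.0, 1.0, bins + 1)
            result = {}
            for g in np.unique(group):
                mask = group == g
                bin_idx = np.digitize(scores[mask], edges[1:-1])  # bin index 0..bins-1
                result[g] = {int(b): float(np.mean(y_true[mask][bin_idx == b]))
                             for b in np.unique(bin_idx)}
            return result

        # Toy usage with synthetic data (assumed, for illustration only)
        rng = np.random.default_rng(0)
        scores = rng.random(1000)
        y_true = (rng.random(1000) < scores).astype(int)  # outcomes roughly track the scores
        y_pred = (scores >= 0.5).astype(int)
        group = rng.choice(["A", "B"], size=1000)
        print(classification_parity(y_true, y_pred, group))
        print(calibration_by_group(y_true, scores, group))

    The impossibility results the abstract alludes to concern how hard it is to satisfy measures of this kind simultaneously once base rates differ across groups.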
  23. Can Machines Read our Minds?Christopher Burr & Nello Cristianini - 2019 - Minds and Machines 29 (3):461-494.
    We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in (...)
    16 citations
  24. Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
    11 citations
  25. Trustworthiness and Motivations.Natalie Gold - 2014 - In N. Morris & D. Vines (eds.), Capital Failure: Rebuilding trust in financial services. Oxford University Press.
    Trust can be thought of as a three-place relation: A trusts B to do X. Trustworthiness has two components: competence (does the trustee have the relevant skills, knowledge and abilities to do X?) and willingness (is the trustee intending or aiming to do X?). This chapter is about the willingness component, and the different motivations that a trustee may have for fulfilling trust. The standard assumption in economics is that agents are self-regarding, maximizing their own consumption of goods (...)
    1 citation
  26. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian (...)
    2 citations
  27. Xin: Being Trustworthy.Winnie Sung - 2020 - International Philosophical Quarterly 60 (3):271-286.
    This essay analyses the Confucian conception of xin, an attribute that broadly resembles what we would ordinarily call trustworthiness. More specifically, it provides an analysis of the psychology of someone who is xin and highlights a feature of the Confucian conception of trustworthiness: the trustworthy person has to ensure that there is a match between her self-presentation and the way she is. My goal is not to argue against any of the existing accounts of trustworthiness but to (...)
    3 citations
  28. The Machine Conception of the Organism in Development and Evolution: A Critical Analysis.Daniel J. Nicholson - 2014 - Studies in History and Philosophy of Biological and Biomedical Sciences 48:162-174.
    This article critically examines one of the most prevalent metaphors in modern biology, namely the machine conception of the organism (MCO). Although the fundamental differences between organisms and machines make the MCO an inadequate metaphor for conceptualizing living systems, many biologists and philosophers continue to draw upon the MCO or tacitly accept it as the standard model of the organism. This paper analyses the specific difficulties that arise when the MCO is invoked in the study of development and evolution. (...)
    23 citations
  29. Machine Learning-Based Diabetes Prediction: Feature Analysis and Model Assessment.Fares Wael Al-Gharabawi & Samy S. Abu-Naser - 2023 - International Journal of Academic Engineering Research (IJAER) 7 (9):10-17.
    This study employs machine learning to predict diabetes using a Kaggle dataset with 13 features. Our three-layer model achieves an accuracy of 98.73% and an average error of 0.01%. Feature analysis identifies Age, Gender, Polyuria, Polydipsia, Visual blurring, sudden weight loss, partial paresis, delayed healing, irritability, Muscle stiffness, Alopecia, Genital thrush, Weakness, and Obesity as influential predictors. These findings have clinical significance for early diabetes risk assessment. While our research addresses gaps in the field, further work is needed to (...)
    1 citation
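    The abstract above reports headline metrics (98.73% accuracy from a three-layer model) but no implementation details. As a generic point of reference only, a baseline workflow for a tabular diabetes-risk dataset of this kind might look like the sketch below; the file name, column names, hidden-layer sizes, and train/test split are assumptions, not the authors' configuration.

        # Illustrative baseline only; not the model described in the paper.
        # Assumes a CSV with Yes/No symptom columns and a binary "class" label,
        # as in common Kaggle diabetes-risk datasets (file name is hypothetical).
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score, classification_report

        df = pd.read_csv("diabetes_data.csv")
        X = pd.get_dummies(df.drop(columns=["class"]))   # one-hot encode categorical features
        y = (df["class"] == "Positive").astype(int)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=42, stratify=y)

        # A small feed-forward network; the paper says "three-layer" without further
        # detail, so the hidden-layer sizes here are guesses.
        model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=42)
        model.fit(X_train, y_train)

        predictions = model.predict(X_test)
        print("accuracy:", accuracy_score(y_test, predictions))
        print(classification_report(y_test, predictions))

    Accuracy from a single train/test split can be optimistic, so cross-validation on a held-out portion of the data is the usual further check when assessing a model of this sort.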
  30. A Machine That Knows Its Own Code.Samuel A. Alexander - 2014 - Studia Logica 102 (3):567-576.
    We construct a machine that knows its own code, at the price of not knowing its own factivity.
    3 citations
  31. Machines as Moral Patients We Shouldn’t Care About: The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
    16 citations
  32. Engineering Trustworthiness in the Online Environment.Hugh Desmond - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Rowman and Littlefield. pp. 215-237.
    Algorithm engineering is sometimes portrayed as a new 21st century return of manipulative social engineering. Yet algorithms are necessary tools for individuals to navigate online platforms. Algorithms are like a sensory apparatus through which we perceive online platforms: this is also why individuals can be subtly but pervasively manipulated by biased algorithms. How can we better understand the nature of algorithm engineering and its proper function? In this chapter I argue that algorithm engineering can be best conceptualized as a type (...)
  33. Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems.Paul Smart, Wendy Hall & Michael Boniface - 2022 - Data and Policy 4 (e21):1-16.
    Trustworthiness is typically regarded as a desirable feature of national identification systems (NISs); but the variegated nature of the trustor communities associated with such systems makes it difficult to see how a single system could be equally trustworthy to all actual and potential trustors. This worry is accentuated by common theoretical accounts of trustworthiness. According to such accounts, trustworthiness is relativized to particular individuals and particular areas of activity, such that one can be trustworthy with regard to (...)
  34. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which (...)
    2 citations
  35. Machine Learning, Misinformation, and Citizen Science.Adrian K. Yee - 2023 - European Journal for Philosophy of Science 13 (56):1-24.
    Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens' and social scientists' concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.
  36. Can machines think? The controversy that led to the Turing test.Bernardo Gonçalves - 2023 - AI and Society 38 (6):2499-2509.
    Turing’s much debated test has turned 70 and is still fairly controversial. His 1950 paper is seen as a complex and multilayered text, and key questions about it remain largely unanswered. Why did Turing select learning from experience as the best approach to achieve machine intelligence? Why did he spend several years working with chess playing as a task to illustrate and test for machine intelligence only to trade it out for conversational question-answering in 1950? Why did Turing (...)
    3 citations
  37. Consciousness, Machines, and Moral Status.Henry Shevlin - manuscript
    In light of the recent breakneck pace in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the paper (...)
  38. Cognitive Projects and the Trustworthiness of Positive Truth.Matteo Zicchetti - 2022 - Erkenntnis (8).
    The aim of this paper is twofold: first, I provide a cluster of theories of truth in classical logic that is (internally) consistent with global reflection principles: the theories of positive truth (and falsity). After that, I analyse the epistemic value of such theories. I do so employing the framework of cognitive projects introduced by Wright (Proc Aristot Soc 78:167–245, 2004), and employed—in the context of theories of truth—by Fischer et al. (Noûs 2019, https://doi.org/10.1111/nous.12292). In particular, I will argue (...)
    1 citation
  39. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to augment (...)
  40. Cybersecurity, Trustworthiness and Resilient Systems: Guiding Values for Policy.Adam Henschke & Shannon Ford - 2017 - Journal of Cyber Policy 1 (2).
    Cyberspace relies on information technologies to mediate relations between different people across different communication networks, and it is reliant on the supporting technology. These interactions typically occur without physical proximity, and those whose work depends on cybersystems must be able to trust the overall human–technical systems that support cyberspace. As such, detailed discussion of cybersecurity policy would be improved by including trust as a key value to help guide policy discussions. Moreover, effective cybersystems must have resilience designed into them. This paper argues (...)
    1 citation
  41. Machine Intentionality, the Moral Status of Machines, and the Composition Problem.David Leech Anderson - 2012 - In Vincent C. Müller (ed.), Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to (...)
    3 citations
  42. Machine learning, justification, and computational reliabilism.Juan Manuel Duran - 2023
    This article asks the question, “what is reliable machine learning?” As I intend to answer it, this is a question about epistemic justification. Reliable machine learning gives justification for believing its output. Current approaches to reliability (e.g., transparency) involve showing the inner workings of an algorithm (functions, variables, etc.) and how they render outputs. We then have justification for believing the output because we know how it was computed. Thus, justification is contingent on what can be shown about (...)
  43. Why Machine-Information Metaphors are Bad for Science and Science Education.Massimo Pigliucci & Maarten Boudry - 2011 - Science & Education 20 (5-6):471.
    Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, (...)
    19 citations
  44. Making life more interesting: Trust, trustworthiness, and testimonial injustice.Aidan McGlynn - 2024 - Philosophical Psychology 37 (1):126-147.
    A theme running through Katherine Hawley’s recent works on trust and trustworthiness is that thinking about the relations between these and Miranda Fricker’s notion of testimonial injustice offers a perspective from which we can see several limitations of Fricker’s own account of testimonial injustice. This paper clarifies the aspects of Fricker’s account that Hawley’s criticisms target, focusing on her objections to Fricker’s proposal that its primary harm involves a kind of epistemic objectification and her characterization of testimonial injustice in (...)
    2 citations
  45. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    16 citations
  46. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice.Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower than claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
    1 citation
  47. Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding (...)
    47 citations
  48. Getting Machines to Do Your Dirty Work.Tomi Francis & Todd Karhu - forthcoming - Philosophical Studies:1-15.
    Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine (...)
  49. Building machines that learn and think about morality.Christopher Burr & Geoff Keeling - 2018 - In Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also (...)
    2 citations
  50. Manufacturing the Illusion of Epistemic Trustworthiness.Tyler Porter - forthcoming - Episteme.
    There are epistemic manipulators in the world. These people are actively attempting to sacrifice epistemic goods for personal gain. In doing so, manipulators have led many competent epistemic agents into believing contrarian theories that go against well-established knowledge. In this paper, I explore one mechanism by which manipulators get epistemic agents to believe contrarian theories. I do so by looking at a prominent empirical model of trustworthiness. This model identifies three major factors that epistemic agents look for when (...)
1 — 50 / 1000