References
  • Building Epistemically Healthier Platforms. Dallas Amico-Korby, Maralee Harrell & David Danks - forthcoming - Episteme.
    When thinking about designing social media platforms, we often focus on factors such as usability, functionality, aesthetics, ethics, and so forth. Epistemic considerations have rarely been given the same level of attention in design discussions. This paper aims to rectify this neglect. We begin by arguing that there are epistemic norms that govern environments, including social media environments. Next, we provide a framework for applying these norms to the question of platform design. We then apply this framework to the real-world (...)
  • Environmental Epistemology. Dallas Amico-Korby, Maralee Harrell & David Danks - 2024 - Synthese 203 (81):1-24.
    We argue that there is a large class of questions—specifically, questions about how to epistemically evaluate environments—that currently available epistemic theories are not well-suited for answering, precisely because these questions are not about the epistemic state of particular agents or groups. For example, if we critique Facebook for being conducive to the spread of misinformation, then we are not thereby critiquing Facebook for being irrational, or lacking knowledge, or failing to testify truthfully. Instead, we are saying something about the (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • WP:NOT, WP:NPOV, and Other Stories Wikipedia Tells Us: A Feminist Critique of Wikipedia’s Epistemology. Jon Rosenberg & Amanda Menking - 2021 - Science, Technology, and Human Values 46 (3):455-479.
    Wikipedia has become increasingly prominent in online search results, serving as an initial path for the public to access “facts,” and lending plausibility to its autobiographical claim to be “the sum of all human knowledge.” However, this self-conception elides Wikipedia’s role as the world’s largest online site of encyclopedic knowledge production. A repository for established facts, Wikipedia is also a social space in which the facts themselves are decided. As a community, Wikipedia is guided by the five pillars—principles that inform (...)
  • Towards trustworthy blockchains: normative reflections on blockchain-enabled virtual institutions. Yan Teng - 2021 - Ethics and Information Technology 23 (3):385-397.
    This paper proposes a novel way to understand trust in blockchain technology by analogy with trust placed in institutions. In support of the analysis, a detailed investigation of institutional trust is provided, which is then used as the basis for understanding the nature and ethical limits of blockchain trust. Two interrelated arguments are presented. First, given blockchains’ capacity for being institution-like entities by inviting expectations similar to those invited by traditional institutions, blockchain trust is argued to be best conceptualized as (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Social Epistemology as a New Paradigm for Journalism and Media Studies. Yigal Godler, Zvi Reich & Boaz Miller - forthcoming - New Media and Society.
    Journalism and media studies lack robust theoretical concepts for studying journalistic knowledge generation. More specifically, conceptual challenges attend the emergence of big data and algorithmic sources of journalistic knowledge. A family of frameworks apt to this challenge is provided by “social epistemology”: a young philosophical field which regards society’s participation in knowledge generation as inevitable. Social epistemology offers the best of both worlds for journalists and media scholars: a thorough familiarity with biases and failures of obtaining knowledge, and a strong (...)
  • Appraising Black-Boxed Technology: The Positive Prospects. E. S. Dahl - 2018 - Philosophy and Technology 31 (4):571-591.
    One staple of living in our information society is having access to the web. Web-connected devices interpret our queries and retrieve information from the web in response. Today’s web devices even purport to answer our queries directly without requiring us to comb through search results in order to find the information we want. How do we know whether a web device is trustworthy? One way to know is to learn why the device is trustworthy by inspecting its inner workings (...)
  • The internet, cognitive enhancement, and the values of cognition. Richard Heersmink - 2016 - Minds and Machines 26 (4):389-407.
    This paper has two distinct but related goals: (1) to identify some of the potential consequences of the Internet for our cognitive abilities and (2) to suggest an approach to evaluate these consequences. I begin by outlining the Google effect, which (allegedly) shows that when we know information is available online, we put less effort into storing that information in the brain. Some argue that this strategy is adaptive because it frees up internal resources which can then be used for (...)
  • Knowledge, Democracy, and the Internet. Nicola Mößner & Philip Kitcher - 2017 - Minerva 55 (1):1-24.
    The internet has considerably changed epistemic practices in science as well as in everyday life. Apparently, this technology allows more and more people to get access to a huge amount of information. Some people even claim that the internet leads to a democratization of knowledge. In the following text, we will analyze this statement. In particular, we will focus on a potential change in epistemic structure. Does the internet change our common epistemic practice to rely on expert opinions? Does it (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative (...)
  • Responsible Epistemic Technologies: A Social-Epistemological Analysis of Autocompleted Web Search. Boaz Miller & Isaac Record - 2017 - New Media and Society 19 (12):1945-1963.
    Information providing and gathering increasingly involve technologies like search engines, which actively shape their epistemic surroundings. Yet, a satisfying account of the epistemic responsibilities associated with them does not exist. We analyze automatically generated search suggestions from the perspective of social epistemology to illustrate how epistemic responsibilities associated with a technology can be derived and assigned. Drawing on our previously developed theoretical framework that connects responsible epistemic behavior to practicability, we address two questions: first, given the different technological possibilities available (...)
  • Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust. Ori Freiman - 2014 - International Review of Information Ethics 22:6-22.
    This paper discusses the epistemology of the Internet of Things [IoT] by focusing on the topic of trust. It presents various frameworks of trust, and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of (...)
  • Trustworthiness and truth: The epistemic pitfalls of internet accountability. Karen Frost-Arnold - 2014 - Episteme 11 (1):63-81.
    Since anonymous agents can spread misinformation with impunity, many people advocate for greater accountability for internet speech. This paper provides a veritistic argument that accountability mechanisms can cause significant epistemic problems for internet encyclopedias and social media communities. I show that accountability mechanisms can undermine both the dissemination of true beliefs and the detection of error. Drawing on social psychology and behavioral economics, I suggest alternative mechanisms for increasing the trustworthiness of internet communication.
  • Navigating Between Chaos and Bureaucracy: Backgrounding Trust in Open-Content Communities. Paul B. de Laat - 2012 - In Karl Aberer, Andreas Flache, Wander Jager, Ling Liu, Jie Tang & Christophe Guéret (eds.), 4th International Conference, SocInfo 2012, Lausanne, Switzerland, December 5-7, 2012. Proceedings. Springer.
    Many virtual communities that rely on user-generated content (such as social news sites, citizen journals, and encyclopedias in particular) offer unrestricted and immediate ‘write access’ to every contributor. It is argued that these communities do not just assume that the trust granted by that policy is well-placed; they have developed extensive mechanisms that underpin the trust involved (‘backgrounding’). These target contributors (stipulating legal terms of use and developing etiquette, both underscored by sanctions) as well as the contents contributed by them (...)
  • Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies. Boaz Miller & Isaac Record - 2013 - Episteme 10 (2):117-134.
    People increasingly form beliefs based on information gained from automatically filtered Internet sources such as search engines. However, the workings of such sources are often opaque, preventing subjects from knowing whether the information provided is biased or incomplete. Users’ reliance on Internet technologies whose modes of operation are concealed from them raises serious concerns about the justificatory status of the beliefs they end up forming. Yet it is unclear how to address these concerns within standard theories of knowledge and justification. (...)
  • Open Source Production of Encyclopedias: Editorial Policies at the Intersection of Organizational and Epistemological Trust. Paul B. de Laat - 2012 - Social Epistemology 26 (1):71-103.
    The ideas behind open source software are currently applied to the production of encyclopedias. A sample of six English text-based, neutral-point-of-view, online encyclopedias of this kind is identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion; that is, from granting unlimited trust to limited (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • What does it mean to trust blockchain technology? Yan Teng - 2022 - Metaphilosophy 54 (1):145-160.
    This paper argues that the widespread belief that interactions between blockchains and their users are trust-free is inaccurate and misleading, since this belief not only overlooks the vital role played by trust in the lack of knowledge and control but also conceals the moral and normative relevance of relying on blockchain applications. The paper reaches this argument by providing a close philosophical examination of the concept referred to as trust in blockchain technology, clarifying the trustor group, the structure, and the (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020) set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • A Socio-epistemological Framework for Scientific Publishing. Judith Simon - 2010 - Social Epistemology 24 (3):201-218.
    In this paper I propose a new theoretical framework to analyse socio‐technical epistemic practices and systems on the Web and beyond, and apply it to the topic of web‐based scientific publishing. This framework is informed by social epistemology, science and technology studies (STS) and feminist epistemology. Its core consists of a tripartite classification of socio‐technical epistemic systems based on the mechanisms of closure they employ to terminate socio‐epistemic processes in which multiple agents are involved. In particular I distinguish three mechanisms (...)
  • Developing Automated Deceptions and the Impact on Trust. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2015 - Philosophy and Technology 28 (1):91-105.
    As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to (...)
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents (AMAs) based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
  • There’s something in your eye: ethical implications of augmented visual field devices. Marty J. Wolf, Frances S. Grodzinsky & Keith W. Miller - 2016 - Journal of Information, Communication and Ethics in Society 14 (3):214-230.
    Purpose This paper aims to explore the ethical and social impact of augmented visual field devices (AVFDs), identifying issues that AVFDs share with existing devices and suggesting new ethical and social issues that arise with the adoption of AVFDs. Design/methodology/approach This essay incorporates both a philosophical and an ethical analysis approach. It is based on Plato’s Allegory of the Cave, philosophical notions of transparency and presence and human values including psychological well-being, physical well-being, privacy, deception, informed consent, ownership and property and (...)
  • ‘What are these researchers doing in my Wikipedia?’: ethical premises and practical judgment in internet-based ethnography. Christian Pentzold - 2017 - Ethics and Information Technology 19 (2):143-155.
    The article ties together codified ethical premises, proceedings of ethical reasoning, and field-specific ethical reflections so as to inform the ethnography of an Internet-based collaborative project. It argues that instead of only obeying formal statutes, practical judgment has to account for multiple understandings of ethical issues in the research field as well as for the self-determination of reflexive participants. The article reflects on the heuristics that guided the decisions of a 4-year participant observation in the English-language and German-language editions of Wikipedia. (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Semantic Web Regulatory Models: Why Ethics Matter. Pompeu Casanovas - 2015 - Philosophy and Technology 28 (1):33-55.
    The notion of validity fulfils a crucial role in legal theory. In the emerging Web 3.0, Semantic Web languages, legal ontologies, and normative multi-agent systems are designed to cover new regulatory needs. Conceptual models for complex regulatory systems shape the characteristic features of rules, norms, and principles in different ways. This article outlines one such multilayered governance model, designed for the CAPER platform, and offers a definition of Semantic Web Regulatory Models (SWRM). It distinguishes between normative-SWRM and institutional-SWRM. It (...)