Jennifer Lackey’s case “Creationist Teacher,” in which students acquire knowledge of evolutionary theory from a teacher who does not herself believe the theory, has been discussed widely as a counterexample to so-called transmission theories of testimonial knowledge and justification. The case purports to show that a speaker need not herself have knowledge or justification in order to enable listeners to acquire knowledge or justification from her assertion. The original case has been criticized on the ground that it does not really refute the transmission theory, because there is still somebody in a chain of testifiers—the person from whom the creationist teacher acquired what she testifies—who knows the truth of the testified statements. In this paper, we provide a kind of pattern for generating counterexample cases, one that avoids objections discussed by Peter Graham and others in relation to such cases.
A person presented with adequate but not conclusive evidence for a proposition is in a position voluntarily to acquire a belief in that proposition, or to suspend judgment about it. The availability of doxastic options in such cases grounds a moderate form of doxastic voluntarism not based on practical motives, and therefore distinct from pragmatism. In such cases, belief-acquisition or suspension of judgment meets standard conditions on willing: it can express stable character traits of the agent, it can be responsive to reasons, and it is compatible with a subjective awareness of the available options.
Technology is a practically indispensable means for satisfying one’s basic interests in all central areas of human life, including nutrition, habitation, health care, entertainment, transportation, and social interaction. It is impossible for any one person, even a well-trained scientist or engineer, to know enough about how technology works in these different areas to make a calculated choice about whether to rely on the vast majority of the technologies she/he in fact relies upon. Yet there are substantial risks, uncertainties, and unforeseen practical consequences associated with the use of technological artifacts and systems. The salience of technological failure (both catastrophic and mundane), as well as technology’s sometimes unforeseeable influence on our behavior, makes it relevant to wonder whether we are really justified as individuals in our practical reliance on technology. Of course, even if we are not justified, we might nonetheless continue in our technological reliance, since the alternatives might not be attractive or feasible. In this chapter I argue that a conception of trust in technological artifacts and systems is plausible and helps us understand what is at stake philosophically in our reliance on technology. Such an account also helps us understand the relationship between trust and technological risk and the ethical obligations of those who design, manufacture, and deploy technological artifacts.
In this chapter, we consider ethical and philosophical aspects of trust in the practice of medicine. We focus on trust within the patient-physician relationship, trust and professionalism, and trust in Western (allopathic) institutions of medicine and medical research. Philosophical approaches to trust contain important insights into medicine as an ethical and social practice. In what follows we explain several philosophical approaches and discuss their strengths and weaknesses in this context. We also highlight some relevant empirical work in the section on trust in the institutions of medicine. It is hoped that the approaches discussed here can be extended to nursing and other topics in the philosophy of medicine.
The adoption of web-based telecare services has raised multifarious ethical concerns, but a traditional principle-based approach provides limited insight into how these concerns might be addressed and what, if anything, makes them problematic. We take an alternative approach, diagnosing some of the main concerns as arising from a core phenomenon of shifting trust relations that come about when the physician plays a less central role in the delivery of care, and new actors and entities are introduced. Correspondingly, we propose an applied ethics of trust based on the idea that patients should be provided with good reasons to trust telecare services, which we call sound trust. On the basis of this approach, we propose several concrete strategies for safeguarding sound trust in telecare.
According to assurance views of testimonial justification, in virtue of the act of testifying a speaker provides an assurance of the truth of what she asserts to the addressee. This assurance provides a special justificatory force and a distinctive normative status to the addressee. It is thought to explain certain asymmetries between addressees and other unintended hearers (bystanders and eavesdroppers), such as the phenomenon that the addressee has a right to blame the speaker for conveying a falsehood but unintended hearers do not, and the phenomenon that the addressee may deflect challenges to his testimonial belief to the speaker but unintended hearers may not. Here I argue that we can do a better job explaining the normative statuses associated with testimony by reference to epistemic norms of assertion and privacy norms. Following Sanford Goldberg, I argue that epistemic norms of assertion, according to which sincere assertion is appropriate only when the asserter possesses certain epistemic goods, can be ‘put to work’ to explain the normative statuses associated with testimony. When these norms are violated, they give hearers the right to blame the speaker, and they also explain why the speaker takes responsibility for the justification of the statement asserted. Norms of privacy, on the other hand, directly exclude eavesdroppers and bystanders from an informational exchange, implying that they have no standing to do many of the things, such as issue challenges or questions to the speaker, that would be normal for conversational participants. This explains asymmetries of normative status associated with testimony in a way logically independent of speaker assurance.
Trust is a kind of risky reliance on another person. Social scientists have offered two basic accounts of trust: predictive expectation accounts and staking accounts. Predictive expectation accounts identify trust with a judgment that performance is likely. Staking accounts identify trust with a judgment that reliance on the person's performance is worthwhile. I argue that these two views of trust are different; that the staking account is preferable to the predictive expectation account on grounds of intuitive adequacy and coherence with plausible explanations of action; and that there are counterexamples to both accounts. I then set forward an additional necessary condition on trust, according to which trust implies a moral expectation. When A trusts B to do x, A ascribes to B an obligation to do x, and holds B to this obligation. This Moral Expectation view throws new light on some of the consequences of misplaced trust. I use the example of physicians’ defensive behavior (“defensive medicine”) to illustrate this final point.
Some recent accounts of testimonial warrant base it on trust, and claim that doing so helps explain asymmetries between the intended recipient of testimony and other non-intended hearers, e.g. differences in their entitlement to challenge the speaker or to rebuke the speaker for lying. In this explanation ‘dependence-responsiveness’ is invoked as an essential feature of trust: the trustor believes the trustee to be motivationally responsive to the fact that the trustor is relying on the trustee. I argue that dependence-responsiveness is not essential to trust and that the asymmetries, where genuine, can be better explained without reference to trust.
Some of the systems used in natural language generation (NLG), a branch of applied computational linguistics, have the capacity to create or assemble somewhat original messages adapted to new contexts. In this paper, taking Bernard Williams’ account of assertion by machines as a starting point, I argue that NLG systems meet the criteria for being speech actants to a substantial degree. They are capable of authoring original messages, and can even simulate illocutionary force and speaker meaning. Background intelligence embedded in their datasets enhances these speech capacities. Although there is an open question about who is ultimately responsible for their speech, if anybody, we can settle this question by using the notion of proxy speech, in which responsibility for artificial speech acts is assigned legally or conventionally to an entity separate from the speech actant.
Assurance theories of testimony attempt to explain what is distinctive about testimony as a form of epistemic warrant or justification. The most characteristic assurance theories hold that a distinctive subclass of assertion (acts of “telling”) involves a real commitment given by the speaker to the listener, somewhat like a promise to the effect that what is asserted is true. This chapter sympathetically explains what is attractive about such theories: instead of treating testimony as essentially similar to any other kind of evidence, they instead make testimonial warrant depend on essential features of the speech act of testimony as a social practice. One such feature is “buck-passing,” the phenomenon that when I am challenged to defend a belief I acquired through testimony, I may respond by referring to the source of my testimony (and thereby “passing the buck”) rather than providing direct evidence for the truth of the content of the belief. The chapter concludes by posing a serious challenge to assurance theories, namely that the social practice of assurance insufficiently ensures the truth of beliefs formed on the basis of testimony, and thereby fails a crucial epistemological test as a legitimate source of beliefs.
Trust should be able to explain cooperation, and its failure should help explain the emergence of cooperation-enabling institutions. This proposed methodological constraint on theorizing about trust, when satisfied, can then be used to differentiate theories of trust with some being able to explain cooperation more generally and effectively than others. Unrestricted views of trust, which take trust to be no more than the disposition to rely on others, fare well compared to restrictive views, which require the trusting person to have some further attitude in addition to this disposition. The same methodological constraint also favours some restrictive views over others.
This paper develops a philosophical account of moral disruption. According to Robert Baker (2013), moral disruption is a process in which technological innovations undermine established moral norms without clearly leading to a new set of norms. Here I analyze this process in terms of moral uncertainty, formulating a philosophical account with two variants. On the Harm Account, such uncertainty is always harmful because it blocks our knowledge of our own and others’ moral obligations. On the Qualified Harm Account, there is no harm in cases where moral uncertainty is related to innovation that is “for the best” in historical perspective, or where uncertainty is the expression of a deliberative virtue. The two accounts are compared by applying them to Baker’s historical case of the introduction of mechanical ventilation and organ transplantation technologies, as well as the present-day case of mass data practices in the health domain.
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call “differential disruption” and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
The arrival of synthetic organs may mean we need to reconsider principles of ownership of such items. One possible ownership criterion is the boundary between the organ’s being outside or inside the body. What is outside of my body, even if it is a natural organ made of my cells, may belong to a company or research institution. Yet when it is placed in me, it belongs to me. In the future, we should also keep an eye on how the availability of synthetic organs may change our attitudes toward our own bodies.
Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between people, and the aim of creating automated systems that take over human tasks and are meant to invite the trust of those who rely on and interact with them.