In this paper, new evidence is presented for the assumption that the reason-relation reading of indicative conditionals ('if A, then C') reflects a conventional implicature. In four experiments, it is investigated whether relevance effects found for the probability assessment of indicative conditionals (Skovgaard-Olsen, Singmann, and Klauer, 2016a) can be classified as being produced by a) a conversational implicature, b) a (probabilistic) presupposition failure, or c) a conventional implicature. After considering several alternative hypotheses and the accumulating evidence from other studies as well, we conclude that the evidence is most consistent with the Relevance Effect being the outcome of a conventional implicature. This finding indicates that the reason-relation reading is part of the semantic content of indicative conditionals, albeit not part of their primary truth-conditional content.
It is widely held that there are important differences between indicative conditionals (e.g. “If the authors are linguists, they have written a linguistics paper”) and subjunctive conditionals (e.g. “If the authors had been linguists, they would have written a linguistics paper”). A central difference is that indicatives and subjunctives convey different stances towards the truth of their antecedents. Indicatives (often) convey neutrality: for example, about whether the authors in question are linguists. Subjunctives (often) convey the falsity of the antecedent: for example, that the authors in question are not linguists. This paper tests prominent accounts of how these different stances are conveyed: whether by presupposition or conversational implicature. Experiment 1 tests the presupposition account by investigating whether the stances project – remain constant – when embedded under operators like negations, possibility modals, and interrogatives, a key characteristic of presuppositions. Experiment 2 tests the conversational-implicature account by investigating whether the stances can be cancelled without producing a contradiction, a key characteristic of implicatures. The results provide evidence that both stances – neutrality about the antecedent in indicatives and the falsity of the antecedent in subjunctives – are conveyed by conversational implicatures.
In The Hobbit, J. R. R. Tolkien constructs middle-aged Bilbo Baggins as a sheltered and emotionally immature ‘child’ during the opening chapters before tracing his development into an autonomous, self-aware adult as the tale progresses. This article examines Tolkien’s novel qua bildungsroman through both a literary lens—considering setting, dialogue, and symbolism, among other techniques—and via a psychological framework, emphasizing an Eriksonian conception of development. Additionally, Peter Jackson’s three-part film adaptation of The Hobbit is discussed throughout, noting the ways in which Jackson succeeds and fails at portraying Bilbo’s childlike attributes. I argue that Tolkien presents a sophisticated account of Bilbo’s childish persona growing into a mature adult worldview, and that Jackson appropriately reflects much, though not all, of this development in his films.
Historically, many Christians have understood God’s transcendence to imply that God’s properties categorically differ from any created properties. For multiple historical figures, a problem arose for religious language: how can one talk of God at all if none of our predicates apply to God? What are we to make of creeds and Biblical passages that seem to predicate creaturely properties, such as goodness and wisdom, of God? Thomas Aquinas offered a solution: God is to be spoken of only through analogy (the doctrine of analogy). Gavin Hyman argues Aquinas’s doctrine of analogy was neglected prior to the early-modern period and the neglect of analogy produced the conception of a god vulnerable to atheistic arguments. Contra Hyman, in this paper, I show early-modern atheism arose in a theological context in which there was an active debate concerning analogy. Peter Browne (1665–1735) and William King (1650–1729) offered two competing conceptions of analogical predication that were debated through the 19th century, with interlocutors such as the freethinker Anthony Collins (1676–1729), theologian/philosopher George Berkeley (1685–1753), and skeptic David Hume (1711–1776). Lastly, I discuss the 18th-century debate over theological analogy as part of the background relevant to understanding Hume’s Dialogues Concerning Natural Religion.
In May 2010, philosophers, family and friends gathered at the University of Notre Dame to celebrate the career and retirement of Alvin Plantinga, widely recognized as one of the world's leading figures in metaphysics, epistemology, and the philosophy of religion. Plantinga has earned particular respect within the community of Christian philosophers for the pivotal role that he played in the recent renewal and development of philosophy of religion and philosophical theology. Each of the essays in this volume engages with some particular aspect of Plantinga's views on metaphysics, epistemology, or philosophy of religion. Contributors include Michael Bergmann, Ernest Sosa, Trenton Merricks, Richard Otte, Peter van Inwagen, Thomas P. Flint, Eleonore Stump, Dean Zimmerman and Nicholas Wolterstorff. The volume also includes responses to each essay by Bas van Fraassen, Stephen Wykstra, David VanderLaan, Robin Collins, Raymond VanArragon, E. J. Coffman, Thomas Crisp, and Donald Smith.
Chapter 1 Introduction

This chapter briefly explains what care ethics is, what care ethics is not, and how much work there still is to be done in establishing care ethics’ scope. The chapter elaborates on care ethics’ relationship to political philosophy, ethics, feminism, and the history of philosophy. The upshot of these discussions is the suggestion that we need a unified, precise statement of care ethics’ normative core. The chapter concludes by giving an overview of the chapters to come: Chapters 2 to 5 will each develop a concise statement of one of four key care ethical claims, while Chapters 6 to 8 will unify, specify, and justify those four claims under a new principle: the dependency principle.

Chapter 2 Scepticism about Principles

Care ethicists tend to be sceptical that there is any useful role for general, abstract principles or rules in moral theory and practice. This chapter assesses this scepticism. It argues for the importance of maintaining a distinction between, on the one hand, scepticism about principles as a tool in deliberation, and, on the other hand, scepticism about principles as a ground of moral rightness. It surveys and assesses the statements made by care ethicists against principles. The conclusion is that care ethicists are correct to be somewhat sceptical about the use of principles in deliberation, but that this scepticism should not extend to principles as the source of moral rightness.

Chapter 3 The Value of Relationships

Within care ethics, a special place is often made for personal relationships. This chapter delimits and justifies this. First, it distinguishes three kinds of importance personal relationships are attributed by care ethicists – as moral paradigms, as goods to be preserved, and as sources of weighty duties. Next, it suggests such ‘relationship importance’ is not justified by the nature of personal relationships or the value of their relata.
It concludes that any personal relationship has importance in proportion to the value the relationship has to its participants. Crucially, this source of importance – a relationship’s value to participants – holds also for non-personal relationships. This allows us to understand how care ethics extends relationship importance to our relations with distant others.

Chapter 4 Caring Attitudes

Care ethics calls upon agents to care about and for others. This chapter focuses on the “about” aspect of caring: on caring attitudes. Caring attitudes are defined as pro-attitudes to the fulfilment of some entity’s interests. The moral value of these attitudes—particularly in emotions like love—is elaborated upon. However, attitudes do not seem to be under our voluntary control, so do not seem to be something we can be morally instructed to bear. This objection is responded to, with the explanation that we have long-term control over our attitudes and that moral theories can legitimately call upon agents to do things they cannot immediately control. Ultimately, then, care ethics’ injunction that agents hold caring attitudes is both defined and vindicated.

Chapter 5 Caring Actions

This chapter starts by comparing and assessing the numerous definitions of care found in the care ethics literature—distinguishing care, good care, bad care, and non-care. Caring actions are defined as having the intention to fulfil something’s perceived interests. The moral value of such actions is interrogated and found to be a combination of the intention’s value and the action’s consequences’ value. The chapter considers whether acknowledgment of care by the care recipient adds value to caring actions. It is suggested that such ‘care receiving’ often, but not always, adds value to caring actions, and should not be part of the definition of care. Thus, care ethics’ imploration of agents to perform caring actions is defined.
Chapter 6 The Dependency Principle

This chapter develops the dependency principle. This principle asserts that a moral agent, A, has a responsibility when: (1) moral person B has an important interest that is unfulfilled; (2) A is sufficiently capable of fulfilling that interest; and (3) A’s most efficacious measure for fulfilling the interest will be not too costly. A incurs a weighty responsibility if (1) to (3) are true and A’s most efficacious measure for fulfilling the interest will be the least costly of anyone’s most efficacious measure for fulfilling B’s interest. Each of these components is elaborated on in turn. We arrive at a precise, comprehensive statement of the principle that will be used to unify, specify, and justify care ethics.

Chapter 7 Collective Dependency Duties

It is impossible to do justice to care ethics without discussing the duties of groups—especially groups such as families and nation-states. This chapter defends the claim that the responsibilities produced by the dependency principle are, in many cases, responsibilities of groups. It develops permissive conditions that a group must meet in order to be a prospective responsibility-bearer, and explains how it is that groups’ responsibilities distribute to their individual members. This model of group responsibility is applied to states. This allows us to make sense of how the dependency principle can unify a care ethics that is greatly concerned with social and political outcomes.

Chapter 8 Unifying, Specifying, and Justifying Care Ethics

Can the abstract, formalised ‘dependency principle’—developed in Chapter 6—serve as a compelling justification of the heterogeneous theory of care ethics? This chapter argues that it can. Each of the four claims of care ethics—developed in Chapters 2 to 5—is assessed in turn. Three questions are asked with regard to each. First, does the dependency principle generate some responsibilities of the relevant kind?
Second, does the dependency principle generate enough responsibilities of the relevant kind? And third, does the dependency principle give the right explanation of these responsibilities? The answer each time, it is argued, is “yes”. Along the way, this answer produces some new results regarding both care ethics and the dependency principle.
You don't say much about who you are teaching, or what subject you teach, but you do seem to see a need to justify what you are doing. Perhaps you're teaching underprivileged children, opening their minds to possibilities that might otherwise never have occurred to them. Or maybe you're teaching the children of affluent families and opening their eyes to the big moral issues they will face in life — like global poverty, and climate change. If you're doing something like this, then stick with it. Giving money isn't the only way to make a difference.
A plausible thought about vagueness is that it involves semantic incompleteness. To say that a predicate is vague is to say (at the very least) that its extension is incompletely specified. Where there is incomplete specification of extension there is indeterminacy, an indeterminacy between various ways in which the specification of the predicate might be completed or sharpened. In this paper we show that this idea is bound to founder by presenting an argument to the effect that there are vague predicates which cannot be sharpened in such a way as to meet certain basic constraints (of penumbral connection and public accessibility) that must be imposed on the very notion of a sharpening.
Peter Ludlow shows how word meanings are much more dynamic than we might have supposed, and explores how they are modulated even during everyday conversation. The resulting view is radical, and has far-reaching consequences for our political and legal discourse, and for enduring puzzles in the foundations of semantics, epistemology, and logic.
Book Symposium on The Territories of Science and Religion (University of Chicago Press, 2015). The author responds to review essays by John Heilbron, Stephen Gaukroger, and Yiftach Fehige.
Gender is both indeterminate and multifaceted: many individuals do not fit neatly into accepted gender categories, and a vast number of characteristics are relevant to determining a person's gender. This article demonstrates how these two features, taken together, enable gender to be modeled as a multidimensional sorites paradox. After discussing the diverse terminology used to describe gender, I extend Helen Daly's research into sex classifications in the Olympics and show how varying testosterone levels can be represented using a sorites argument. The most appropriate way of addressing the paradox that results, I propose, is to employ fuzzy logic. I then move beyond physiological characteristics and consider how gender portrayals in reality television shows align with Judith Butler's notion of performativity, thereby revealing gender to be composed of numerous criteria. Following this, I explore how various elements of gender can each be modeled as individual sorites paradoxes such that the overall concept forms a multidimensional paradox. Resolving this dilemma through fuzzy logic provides a novel framework for interpreting gender membership.
A recurring theme dominates recent philosophical debates about the nature of conscious perception: naïve realism’s opponents claim that the view is directly contradicted by empirical science. I argue that, despite their current popularity, empirical arguments against naïve realism are fundamentally flawed. The non-empirical premises needed to get from empirical scientific findings to substantive philosophical conclusions are ones the naïve realist is known to reject. Even granting the contentious premises, the empirical findings do not undermine the theory, given its overall philosophical commitments. Thus, contemporary empirical research fails to supply any new argumentative force against naïve realism. I conclude that, as philosophers of mind, we would be better served spending a bit less time trying to wield empirical science as a cudgel against our opponents, and a bit more time working through the implications of each other’s views – something we can accomplish perfectly well from the comfort of our armchairs.
Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Although the relationship of part to whole is one of the most fundamental there is, this is the first full-length study of this key concept. Showing that mereology, or the formal theory of part and whole, is essential to ontology, Simons surveys and critiques previous theories--especially the standard extensional view--and proposes a new account that encompasses both temporal and modal considerations. Simons's revised theory not only allows him to offer fresh solutions to long-standing problems, but also has far-reaching consequences for our understanding of a host of classical philosophical concepts.
This paper looks at the critical reception of two central claims of Peter Auriol’s theory of cognition: the claim that the objects of cognition have an apparent or objective being that resists reduction to the real being of objects, and the claim that there may be natural intuitive cognitions of nonexistent objects. These claims earned Auriol the criticism of his fellow Franciscans, Walter Chatton and Adam Wodeham. According to them, the theory of apparent being was what had led Auriol to allow for intuitive cognitions of nonexistents, but the intuitive cognition of nonexistents, in turn, led to scepticism. Modern commentators have offered similar readings of Auriol, but this paper argues, first, that the apparent being provides no special reason to think there could be intuitions of nonexistent objects, and second, that despite his idiosyncratic account of intuition, Auriol was no more vulnerable to scepticism than his critics.
Peter Ludlow presents the first book on the philosophy of generative linguistics, including both Chomsky's government and binding theory and his minimalist ...
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious and since we prefer to restrict the term “experience” to consciousness we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
Trust not only disposes us to feel betrayed; trust can be betrayed. Understanding what a betrayal of trust is requires understanding how trust can ground an obligation on the part of the trusted person to act specifically as trusted. This essay argues that, since trust cannot ground an appropriate obligation where there is no prior obligation, a betrayal of trust should instead be conceived as the violation of a trust-based obligation to respect an already existing obligation. Two forms of trust are evaluated for their potential to ground such a second-order obligation. One form counts as a gift to the trusted person only because it does not involve an expectation of trustworthiness; the other form cannot count as a gift but confers an honor because it does include an expectation of trustworthiness. Only trust that confers an honor generates a second-order obligation whose violation would be a betrayal of trust.
Accounts of the concepts of function and dysfunction have not adequately explained what factors determine the line between low‐normal function and dysfunction. I call the challenge of doing so the line‐drawing problem. Previous approaches emphasize facts involving the action of natural selection (Wakefield 1992a, 1999a, 1999b) or the statistical distribution of levels of functioning in the current population (Boorse 1977, 1997). I point out limitations of these two approaches and present a solution to the line‐drawing problem that builds on the second one.
I defend the following version of the ought-implies-can principle: (OIC) by virtue of conceptual necessity, an agent at a given time has an (objective, pro tanto) obligation to do only what the agent at that time has the ability and opportunity to do. In short, obligations correspond to ability plus opportunity. My argument has three premises: (1) obligations correspond to reasons for action; (2) reasons for action correspond to potential actions; (3) potential actions correspond to ability plus opportunity. In the bulk of the paper I address six objections to OIC: three objections based on putative counterexamples, and three objections based on arguments to the effect that OIC conflicts with the is/ought thesis, the possibility of hard determinism, and the denial of the Principle of Alternate Possibilities.
This paper presents a passage from Peter Singer on the pond analogy and comments on its content and use in the classroom, especially with respect to the development of the learners' argumentative skills.
In the mid-seventeenth century a movement of self-styled experimental philosophers emerged in Britain. Originating in the discipline of natural philosophy amongst Fellows of the fledgling Royal Society of London, it soon spread to medicine and by the eighteenth century had impacted moral and political philosophy and even aesthetics. Early modern experimental philosophers gave epistemic priority to observation and experiment over theorising and speculation. They decried the use of hypotheses and system-building without recourse to experiment and, in some quarters, developed a philosophy of experiment. The movement spread to the Netherlands and France in the early eighteenth century and later impacted Germany. Its important role in early modern philosophy was subsequently eclipsed by the widespread adoption of the Kantian historiography of modern philosophy, which emphasised the distinction between rationalism and empiricism and had no place for the historical phenomenon of early modern experimental philosophy. The re-emergence of interest in early modern experimental philosophy roughly coincided with the development of contemporary x-phi and there are some important similarities between the two.
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
Within contemporary philosophy of mind, it is taken for granted that externalist accounts of meaning and mental content are, in principle, orthogonal to the matter of whether cognition itself is bound within the biological brain or whether it can constitutively include parts of the world. Accordingly, Clark and Chalmers (1998) distinguish these varieties of externalism as ‘passive’ and ‘active’ respectively. The aim here is to suggest that we should resist the received way of thinking about these dividing lines. With reference to Brandom’s (1994; 2000; 2008) broad semantic inferentialism, we show that a theory of meaning can be at the same time a variety of active externalism. While we grant that supporters of other varieties of content externalism (e.g., Putnam 1975 and Burge 1986) can deny active externalism, this is not an option for semantic inferentialists: On this latter view, the role of the environment (both in its social and natural form) is not ‘passive’ in the sense assumed by the alternative approaches to content externalism.
We often speak as if there are merely possible people—for example, when we make such claims as that most possible people are never going to be born. Yet most metaphysicians deny that anything is both possibly a person and never born. Since our unreflective talk of merely possible people serves to draw non-trivial distinctions, these metaphysicians owe us some paraphrase by which we can draw those distinctions without committing ourselves to there being merely possible people. We show that such paraphrases are unavailable if we limit ourselves to the expressive resources of even highly infinitary first-order modal languages. We then argue that such paraphrases are available in higher-order modal languages only given certain strong assumptions concerning the metaphysics of properties. We then consider alternative paraphrase strategies, and argue that none of them are tenable. If talk of merely possible people cannot be paraphrased, then it must be taken at face value, in which case it is necessary what individuals there are. Therefore, if it is contingent what individuals there are, then the demands of paraphrase place tight constraints on the metaphysics of properties: either (i) it is necessary what properties there are, or (ii) necessarily equivalent properties are identical, and having properties does not entail even possibly being anything at all.
This paper argues that early modern experimental philosophy emerged as the dominant member of a pair of methods in natural philosophy, the speculative versus the experimental, and that this pairing derives from an overarching distinction between speculative and operative philosophy that can be ultimately traced back to Aristotle. The paper examines the traditional classification of natural philosophy as a speculative discipline from the Stagirite to the seventeenth century; medieval and early modern attempts to articulate a scientia experimentalis; and the tensions in the classification of natural magic and mechanics that led to the introduction of an operative part of natural philosophy in the writings of Francis Bacon and John Johnston. The paper concludes with a summary of the salient discontinuities between the experimental/speculative distinction of the mid-seventeenth century and its predecessors and a statement of the developments that led to the ascendance of experimental philosophy from the 1660s.
We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form K_AB (‘B is knowable on the basis of information A’) as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
Our topic is the theory of topics. My goal is to clarify and evaluate three competing traditions: what I call the way-based approach, the atom-based approach, and the subject-predicate approach. I develop criteria for adequacy using robust linguistic intuitions that feature prominently in the literature. Then I evaluate the extent to which various existing theories satisfy these constraints. I conclude that recent theories due to Parry, Perry, Lewis, and Yablo do not meet the constraints in total. I then introduce the issue-based theory—a novel and natural entry in the atom-based tradition that meets our constraints. In a coda, I categorize a recent theory from Fine as atom-based, and contrast it to the issue-based theory, concluding that they are evenly matched, relative to our main criteria of adequacy. I offer tentative reasons to nevertheless favour the issue-based theory.
This paper is a study of higher-order contingentism – the view, roughly, that it is contingent what properties and propositions there are. We explore the motivations for this view and various ways in which it might be developed, synthesizing and expanding on work by Kit Fine, Robert Stalnaker, and Timothy Williamson. Special attention is paid to the question of whether the view makes sense by its own lights, or whether articulating the view requires drawing distinctions among possibilities that, according to the view itself, do not exist to be drawn. The paper begins with a non-technical exposition of the main ideas and technical results, which can be read on its own. This exposition is followed by a formal investigation of higher-order contingentism, in which the tools of variable-domain intensional model theory are used to articulate various versions of the view, understood as theories formulated in a higher-order modal language. Our overall assessment is mixed: higher-order contingentism can be fleshed out into an elegant systematic theory, but perhaps only at the cost of abandoning some of its original motivations.
According to the structured theory of propositions, if two sentences express the same proposition, then they have the same syntactic structure, with corresponding syntactic constituents expressing the same entities. A number of philosophers have recently focused attention on a powerful argument against this theory, based on a result by Bertrand Russell, which shows that the theory of structured propositions is inconsistent in higher-order logic. This paper explores a response to this argument, which involves restricting the scope of the claim that propositions are structured, so that it does not hold for all propositions whatsoever, but only for those which are expressible using closed sentences of a given formal language. We call this restricted principle Closed Structure, and show that it is consistent in classical higher-order logic. As a schematic principle, the strength of Closed Structure is dependent on the chosen language. For its consistency to be philosophically significant, it also needs to be consistent in every extension of the language which the theorist of structured propositions is apt to accept. But, we go on to show, Closed Structure is in fact inconsistent in a very natural extension of the standard language of higher-order logic, which adds resources for plural talk of propositions. We conclude that this particular strategy of restricting the scope of the claim that propositions are structured is not a compelling response to the argument based on Russell’s result, though we note that for some applications, for instance to propositional attitudes, a restricted thesis in the vicinity may hold some promise.
Jennifer Lackey ('Testimonial Knowledge and Transmission', The Philosophical Quarterly 1999) and Peter Graham ('Conveying Information', Synthese 2000; 'Transferring Knowledge', Noûs 2000) offered counterexamples to show that a hearer can acquire knowledge that P from a speaker who asserts that P but does not know that P. These examples suggest testimony can generate knowledge. The showpiece of Lackey's examples is the Schoolteacher case. This paper shows that Lackey's case does not undermine the orthodox view that testimony cannot generate knowledge, and explains why Lackey's arguments to the contrary are ineffective: they misunderstand the intuitive rationale for the view that testimony cannot generate knowledge. The paper then elaborates on a version of the case from Graham's 'Conveying Information' (the Fossil case) that effectively shows that testimony can generate knowledge. Finally, it provides a deeper, informative explanation of how testimony transfers knowledge, and of why there should be cases where testimony generates knowledge.
We argue that recent empirical findings and theoretical models shed new light on the nature of attention. According to the resulting amplification view, attentional phenomena can be unified at the neural level as the consequence of the amplification of certain input signals of attention-independent perceptual computations. This way of identifying the core realizer of attention evades standard criticisms often raised against sub-personal accounts of attention. Moreover, this approach also reframes our thinking about the function of attention by shifting the focus from the function of selection to the function of amplification.
This reissue of his collection of early essays, Logico-Linguistic Papers, is published with a brand new introduction by Professor Strawson but, apart from minor ...
Martine Nida-Rümelin (1996) argues that color science indicates that behaviorally undetectable spectrum inversion is possible, and raises this possibility as an objection to functionalist accounts of visual states of color. I show that her argument does not rest solely on color science, but also on a philosophically controversial assumption, namely, that visual states of color supervene on physiological states. However, this assumption, on the part of philosophers or vision scientists, has the effect of simply ruling out certain versions of functionalism. While Nida-Rümelin is quite right to search for empirical tests for claims about the nature of visual states, philosophical issues remain pivotal in determining the correctness of these claims.
The systems studied in the special sciences are often said to be causally autonomous, in the sense that their higher-level properties have causal powers that are independent of the causal powers of their more basic physical properties. This view was espoused by the British emergentists, who claimed that systems achieving a certain level of organizational complexity have distinctive causal powers that emerge from their constituent elements but do not derive from them. More recently, non-reductive physicalists have espoused a similar view about the causal autonomy of special-science properties. They argue that since these properties can typically have multiple physical realizations, they are not identical to physical properties, and further they possess causal powers that differ from those of their physical realizers. Despite the orthodoxy of this view, it is hard to find a clear exposition of its meaning or a defence of it in terms of a well-motivated account of causation. In this paper, we aim to address this gap in the literature by clarifying what is implied by the doctrine of the causal autonomy of special-science properties and by defending the doctrine using a prominent theory of causation from the philosophy of science.
Plausibly, only moral agents can bear action-demanding duties. This places constraints on which groups can bear action-demanding duties: only groups with sufficient structure—call them ‘collectives’—have the necessary agency. Moreover, if duties imply ability then moral agents (of both the individual and collective varieties) can bear duties only over actions they are able to perform. It is thus doubtful that individual agents can bear duties to perform actions that only a collective could perform. This appears to leave us at a loss when assigning duties in circumstances where only a collective could perform some morally desirable action and no collective exists. But, I argue, we are not at a loss. This article outlines a new way of assigning duties over collective acts when there is no collective. Specifically, we should assign collectivisation duties to individuals. These are individual duties to take steps towards forming a collective, which then incurs a duty over the action. I give criteria for when individuals have collectivisation duties and discuss the demands these duties place on their bearers.
The term “Gettier Case” is a technical term frequently applied to a wide array of thought experiments in contemporary epistemology. What do these cases have in common? It is said that they all involve a justified true belief which, intuitively, is not knowledge, due to a form of luck called “Gettiering.” While this very broad characterization suffices for some purposes, it masks radical diversity. We argue that the extent of this diversity merits abandoning the notion of a “Gettier case” in favour of more finely grained terminology. We propose such terminology, and use it to effectively sort the myriad Gettier cases from the theoretical literature in a way that charts deep fault lines in ordinary judgments about knowledge.
This paper argues for the general proper functionalist view that epistemic warrant consists in the normal functioning of the belief-forming process when the process has forming true beliefs reliably as an etiological function. Such a process is reliable in normal conditions when functioning normally. This paper applies this view to so-called testimony-based beliefs. It argues that when a hearer forms a comprehension-based belief that P (a belief based on taking another to have asserted that P) through the exercise of a reliable competence to comprehend and filter assertive speech acts, then the hearer's belief is prima facie warranted. The paper discusses the psychology of comprehension, the function of assertion, and the evolution of filtering mechanisms, especially coherence checking.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there is a set of important arguments within Bayesian confirmation theory such that no confirmation measure renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points to pluralism about notions of confirmation.
Similarly to other accounts of disease, Christopher Boorse’s Biostatistical Theory (BST) is generally presented and considered as conceptual analysis, that is, as making claims about the meaning of currently used concepts. But conceptual analysis has been convincingly critiqued as relying on problematic assumptions about the existence, meaning, and use of concepts. Because of these problems, accounts of disease and health should be evaluated not as claims about current meaning, I argue, but instead as proposals about how to define and use these terms in the future, a methodology suggested by Quine and Carnap. I begin this article by describing problems with conceptual analysis and advantages of “philosophical explication,” my favored approach. I then describe two attacks on the BST that also question the entire project of defining “disease.” Finally, I defend the BST as a philosophical explication by showing how it could define useful terms for medical science and ethics.