The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and greater privacy. Search engines also play a major role in spreading low-quality health information, such as that found on anti-vaccine websites. This study investigates the relationship between search engines’ approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching “vaccines autism” in English, Spanish, Italian, and French. The results show that not only “alternative” search engines but also other commercial engines often return more anti-vaccine pages (10–53%) than Google (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information for health-related decisions also bears on the ethical right to informed consent. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter bubbles that can result from user tracking is necessary but insufficient; mechanisms should also be developed to test search engines for information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
You don't say much about who you are teaching, or what subject you teach, but you do seem to see a need to justify what you are doing. Perhaps you're teaching underprivileged children, opening their minds to possibilities that might otherwise never have occurred to them. Or maybe you're teaching the children of affluent families and opening their eyes to the big moral issues they will face in life — like global poverty, and climate change. If you're doing something like this, then stick with it. Giving money isn't the only way to make a difference.
Book Symposium on The Territories of Science and Religion (University of Chicago Press, 2015). The author responds to review essays by John Heilbron, Stephen Gaukroger, and Yiftach Fehige.
Peter Ludlow shows how word meanings are much more dynamic than we might have supposed, and explores how they are modulated even during everyday conversation. The resulting view is radical, and has far-reaching consequences for our political and legal discourse, and for enduring puzzles in the foundations of semantics, epistemology, and logic.
Peter Ludlow presents the first book on the philosophy of generative linguistics, including both Chomsky's government and binding theory and his minimalist ...
Although the relationship of part to whole is one of the most fundamental there is, this is the first full-length study of this key concept. Showing that mereology, or the formal theory of part and whole, is essential to ontology, Simons surveys and critiques previous theories--especially the standard extensional view--and proposes a new account that encompasses both temporal and modal considerations. Simons's revised theory not only allows him to offer fresh solutions to long-standing problems, but also has far-reaching consequences for our understanding of a host of classical philosophical concepts.
Our topic is the theory of topics. My goal is to clarify and evaluate three competing traditions: what I call the way-based approach, the atom-based approach, and the subject-predicate approach. I develop criteria for adequacy using robust linguistic intuitions that feature prominently in the literature. Then I evaluate the extent to which various existing theories satisfy these constraints. I conclude that recent theories due to Parry, Perry, Lewis, and Yablo do not meet the constraints in total. I then introduce the issue-based theory—a novel and natural entry in the atom-based tradition that meets our constraints. In a coda, I categorize a recent theory from Fine as atom-based, and contrast it to the issue-based theory, concluding that they are evenly matched, relative to our main criteria of adequacy. I offer tentative reasons to nevertheless favour the issue-based theory.
I defend the following version of the ought-implies-can principle: (OIC) by virtue of conceptual necessity, an agent at a given time has an (objective, pro tanto) obligation to do only what the agent at that time has the ability and opportunity to do. In short, obligations correspond to ability plus opportunity. My argument has three premises: (1) obligations correspond to reasons for action; (2) reasons for action correspond to potential actions; (3) potential actions correspond to ability plus opportunity. In the bulk of the paper I address six objections to OIC: three objections based on putative counterexamples, and three objections based on arguments to the effect that OIC conflicts with the is/ought thesis, the possibility of hard determinism, and the denial of the Principle of Alternate Possibilities.
Expressivists about epistemic modals deny that ‘Jane might be late’ canonically serves to express the speaker’s acceptance of a certain propositional content. Instead, they hold that it expresses a lack of acceptance. Prominent expressivists embrace pragmatic expressivism: the doxastic property expressed by a declarative is not helpfully identified with that sentence’s compositional semantic value. Against this, we defend semantic expressivism about epistemic modals: the semantic value of a declarative from this domain is the property of doxastic attitudes it canonically serves to express. In support, we synthesize data from the critical literature on expressivism—largely reflecting interactions between modals and disjunctions—and present a semantic expressivism that readily predicts the data. This contrasts with salient competitors, including: pragmatic expressivism based on domain semantics or dynamic semantics; semantic expressivism à la Moss [2015]; and the bounded relational semantics of Mandelkern [2019].
This paper looks at the critical reception of two central claims of Peter Auriol’s theory of cognition: the claim that the objects of cognition have an apparent or objective being that resists reduction to the real being of objects, and the claim that there may be natural intuitive cognitions of nonexistent objects. These claims earned Auriol the criticism of his fellow Franciscans, Walter Chatton and Adam Wodeham. According to them, the theory of apparent being was what had led Auriol to allow for intuitive cognitions of nonexistents, but the intuitive cognition of nonexistents, in turn, led to scepticism. Modern commentators have offered similar readings of Auriol, but this paper argues, first, that the apparent being provides no special reason to think there could be intuitions of nonexistent objects, and second, that despite his idiosyncratic account of intuition, Auriol was no more vulnerable to scepticism than his critics.
This paper presents a passage from Peter Singer on the pond analogy and comments on its content and use in the classroom, especially with respect to the development of the learners' argumentative skills.
Berkeley’s likeness principle is the claim that “an idea can be like nothing but an idea”. The likeness principle is intended to undermine representationalism: the view (that Berkeley attributes to thinkers like Descartes and Locke) that all human knowledge is mediated by ideas in the mind which represent material objects. Yet, Berkeley appears to leave the likeness principle unargued for. This has led to several attempts to explain why Berkeley accepts it. In contrast to ‘metaphysical’ and ‘epistemological’ interpretations available in the literature, in this paper I defend a ‘conceptual’ interpretation. I argue that Berkeley accepts the likeness principle on the basis of (i) his commitment to the transparency of ideas, and (ii) his account of resemblance which he sets out in his works on vision. Thus, I provide an explanation for Berkeley’s reasons for accepting the likeness principle which, appropriately, focuses on his views concerning ideas and likeness.
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
Accounts of the concepts of function and dysfunction have not adequately explained what factors determine the line between low‐normal function and dysfunction. I call the challenge of doing so the line‐drawing problem. Previous approaches emphasize facts involving the action of natural selection (Wakefield 1992a, 1999a, 1999b) or the statistical distribution of levels of functioning in the current population (Boorse 1977, 1997). I point out limitations of these two approaches and present a solution to the line‐drawing problem that builds on the second one.
Martine Nida-Rümelin (1996) argues that color science indicates behaviorally undetectable spectrum inversion is possible and raises this possibility as an objection to functionalist accounts of visual states of color. I show that her argument does not rest solely on color science, but also on a philosophically controversial assumption, namely, that visual states of color supervene on physiological states. However, this assumption, on the part of philosophers or vision scientists, has the effect of simply ruling out certain versions of functionalism. While Nida-Rümelin is quite right to search for empirical tests for claims about the nature of visual states, philosophical issues remain pivotal in determining the correctness of these claims.
We often speak as if there are merely possible people—for example, when we make such claims as that most possible people are never going to be born. Yet most metaphysicians deny that anything is both possibly a person and never born. Since our unreflective talk of merely possible people serves to draw non-trivial distinctions, these metaphysicians owe us some paraphrase by which we can draw those distinctions without committing ourselves to there being merely possible people. We show that such paraphrases are unavailable if we limit ourselves to the expressive resources of even highly infinitary first-order modal languages. We then argue that such paraphrases are available in higher-order modal languages only given certain strong assumptions concerning the metaphysics of properties. We then consider alternative paraphrase strategies, and argue that none of them are tenable. If talk of merely possible people cannot be paraphrased, then it must be taken at face value, in which case it is necessary what individuals there are. Therefore, if it is contingent what individuals there are, then the demands of paraphrase place tight constraints on the metaphysics of properties: either (i) it is necessary what properties there are, or (ii) necessarily equivalent properties are identical, and having properties does not entail even possibly being anything at all.
We argue that recent empirical findings and theoretical models shed new light on the nature of attention. According to the resulting amplification view, attentional phenomena can be unified at the neural level as the consequence of the amplification of certain input signals of attention-independent perceptual computations. This way of identifying the core realizer of attention evades standard criticisms often raised against sub-personal accounts of attention. Moreover, this approach also reframes our thinking about the function of attention by shifting the focus from the function of selection to the function of amplification.
Selective scientific realists disagree on which theoretical posits should be regarded as essential to the empirical success of a scientific theory. A satisfactory account of essentialness will show that the (approximate) truth of the selected posits adequately explains the success of the theory. Therefore, (a) the essential elements must be discernible prospectively; (b) there cannot be a priori criteria regarding which type of posit is essential; and (c) the overall success of a theory, or ‘cluster’ of propositions, not only individual derivations, should be explicable. Given these desiderata, I propose a “unification criterion” for identifying essential elements.
In the mid-seventeenth century a movement of self-styled experimental philosophers emerged in Britain. Originating in the discipline of natural philosophy amongst Fellows of the fledgling Royal Society of London, it soon spread to medicine and by the eighteenth century had impacted moral and political philosophy and even aesthetics. Early modern experimental philosophers gave epistemic priority to observation and experiment over theorising and speculation. They decried the use of hypotheses and system-building without recourse to experiment and, in some quarters, developed a philosophy of experiment. The movement spread to the Netherlands and France in the early eighteenth century and later impacted Germany. Its important role in early modern philosophy was subsequently eclipsed by the widespread adoption of the Kantian historiography of modern philosophy, which emphasised the distinction between rationalism and empiricism and had no place for the historical phenomenon of early modern experimental philosophy. The re-emergence of interest in early modern experimental philosophy roughly coincided with the development of contemporary x-phi and there are some important similarities between the two.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory such that no single confirmation measure renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points to pluralism about notions of confirmation.
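For concreteness, three confirmation measures that figure prominently in this literature are the difference measure, the log-ratio measure, and the log-likelihood-ratio measure (the notation d, r, l follows common usage rather than any single source):

\[
d(H,E) = \Pr(H \mid E) - \Pr(H), \qquad
r(H,E) = \log\frac{\Pr(H \mid E)}{\Pr(H)}, \qquad
l(H,E) = \log\frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}.
\]

Assuming non-extreme priors, each measure is positive exactly when E raises the probability of H, so they agree on whether E confirms H; they can disagree, however, on comparative judgments of how strongly E confirms H, which is what drives measure sensitivity.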
The systems studied in the special sciences are often said to be causally autonomous, in the sense that their higher-level properties have causal powers that are independent of the causal powers of their more basic physical properties. This view was espoused by the British emergentists, who claimed that systems achieving a certain level of organizational complexity have distinctive causal powers that emerge from their constituent elements but do not derive from them. More recently, non-reductive physicalists have espoused a similar view about the causal autonomy of special-science properties. They argue that since these properties can typically have multiple physical realizations, they are not identical to physical properties, and further they possess causal powers that differ from those of their physical realisers. Despite the orthodoxy of this view, it is hard to find a clear exposition of its meaning or a defence of it in terms of a well-motivated account of causation. In this paper, we aim to address this gap in the literature by clarifying what is implied by the doctrine of the causal autonomy of special-science properties and by defending the doctrine using a prominent theory of causation from the philosophy of science.
This reissue of his collection of early essays, Logico-Linguistic Papers, is published with a brand new introduction by Professor Strawson but, apart from minor ...
Why do we engage in folk psychology, that is, why do we think about and ascribe propositional attitudes such as beliefs, desires, intentions etc. to people? On the standard view, folk psychology is primarily for mindreading, for detecting mental states and explaining and/or predicting people’s behaviour in terms of them. In contrast, McGeer (1996, 2007, 2015) and Zawidzki (2008, 2013) maintain that folk psychology is not primarily for mindreading but for mindshaping, that is, for moulding people’s behavior and minds (e.g., via the imposition of social norms) so that coordination becomes easier. Mindreading is derived from and only as effective as it is because of mindshaping, not vice versa. I critically assess McGeer’s and Zawidzki’s proposal and contend that three common motivations for the mindshaping view do not provide sufficient support for their particular version of it. I argue furthermore that their proposal underestimates the role that epistemic processing plays for mindshaping. And I provide reasons for favouring an alternative according to which, in social cognition involving ascriptions of propositional attitudes, neither mindshaping nor mindreading is primary but both are complementary in that effective mindshaping depends as much on mindreading as effective mindreading depends on mindshaping.
Members of the field of philosophy, like other people, have political convictions or, as psychologists call them, ideologies. How are different ideologies distributed and perceived in the field? Using the familiar distinction between the political left and right, we surveyed an international sample of 794 subjects in philosophy. We found that survey participants clearly leaned left (75%), while right-leaning individuals (14%) and moderates (11%) were underrepresented. Moreover, and strikingly, across the political spectrum, from very left-leaning individuals and moderates to very right-leaning individuals, participants reported experiencing ideological hostility in the field, occasionally even from those on their own side of the political spectrum. Finally, while about half of the subjects believed that discrimination against left- or right-leaning individuals in the field is not justified, a significant minority displayed an explicit willingness to discriminate against colleagues with the opposite ideology. Our findings are both surprising and important, because a commitment to tolerance and equality is widespread in philosophy, and there is reason to think that ideological similarity, hostility, and discrimination undermine reliable belief formation in many areas of the discipline.
We propose a solution to the problem of logical omniscience in what we take to be its fundamental version: as concerning arbitrary agents and the knowledge attitude per se. Our logic of knowledge is a spin-off from a general theory of thick content, whereby the content of a sentence has two components: an intension, taking care of truth conditions; and a topic, taking care of subject matter. We present a list of plausible logical validities and invalidities for the logic of knowledge per se for arbitrary agents, and isolate three explanatory factors for them: the topic-sensitivity of content; the fragmentation of knowledge states; the defeasibility of knowledge acquisition. We then present a novel dynamic epistemic logic that yields precisely the desired validities and invalidities, for which we provide expressivity and completeness results. We contrast this with related systems and address possible objections.
We examine a prominent naturalistic line on the method of cases, exemplified by Timothy Williamson and Edouard Machery: MoC is given a fallibilist and non-exceptionalist treatment, accommodating moderate modal skepticism. But Gettier cases are in dispute: Williamson takes them to induce substantive philosophical knowledge; Machery claims that the ambitious use of MoC should be abandoned entirely. We defend an intermediate position. We offer an internal critique of Macherian pessimism about Gettier cases. Most crucially, we argue that Gettier cases needn’t exhibit the ‘disturbing characteristics’ that Machery posits to explain why philosophical cases induce dubious judgments. It follows, we show, that Machery’s central argument for the effective abandonment of MoC is undermined. Nevertheless, we engineer a restricted variant of the argument (in harmony with Williamsonian ideology) that survives our critique, potentially limiting philosophy’s scope for establishing especially ambitious modal theses, despite traditional MoC’s utility being partially preserved.
We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form K_AB (‘B is knowable on the basis of information A’) as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
According to the structured theory of propositions, if two sentences express the same proposition, then they have the same syntactic structure, with corresponding syntactic constituents expressing the same entities. A number of philosophers have recently focused attention on a powerful argument against this theory, based on a result by Bertrand Russell, which shows that the theory of structured propositions is inconsistent in higher-order logic. This paper explores a response to this argument, which involves restricting the scope of the claim that propositions are structured, so that it does not hold for all propositions whatsoever, but only for those which are expressible using closed sentences of a given formal language. We call this restricted principle Closed Structure, and show that it is consistent in classical higher-order logic. As a schematic principle, the strength of Closed Structure is dependent on the chosen language. For its consistency to be philosophically significant, it also needs to be consistent in every extension of the language which the theorist of structured propositions is apt to accept. But, we go on to show, Closed Structure is in fact inconsistent in a very natural extension of the standard language of higher-order logic, which adds resources for plural talk of propositions. We conclude that this particular strategy of restricting the scope of the claim that propositions are structured is not a compelling response to the argument based on Russell’s result, though we note that for some applications, for instance to propositional attitudes, a restricted thesis in the vicinity may hold some promise.
An action-oriented perspective changes the role of an individual from a passive observer to an actively engaged agent interacting in a closed loop with the world as well as with others. Cognition exists to serve action within a landscape that contains both. This chapter surveys this landscape and addresses the status of the pragmatic turn. Its potential influence on science and the study of cognition is considered (including perception, social cognition, social interaction, sensorimotor entrainment, and language acquisition) and its impact on how neuroscience is studied is also investigated (with the notion that brains do not passively build models, but instead support the guidance of action). A review of its implications in robotics and engineering includes a discussion of the application of enactive control principles to couple action and perception in robotics as well as the conceptualization of system design in a more holistic, less modular manner. Practical applications that can impact the human condition are reviewed (e.g., educational applications, treatment possibilities for developmental and psychopathological disorders, the development of neural prostheses). All of this foreshadows the potential societal implications of the pragmatic turn. The chapter concludes that an action-oriented approach emphasizes a continuum of interaction between technical aspects of cognitive systems and robotics, biology, psychology, the social sciences, and the humanities, where the individual is part of a grounded cultural system.
Jennifer Lackey ('Testimonial Knowledge and Transmission', The Philosophical Quarterly 1999) and Peter Graham ('Conveying Information', Synthese 2000; 'Transferring Knowledge', Nous 2000) offered counterexamples to show that a hearer can acquire knowledge that P from a speaker who asserts that P, even though the speaker does not know that P. These examples suggest testimony can generate knowledge. The showpiece of Lackey's examples is the Schoolteacher case. This paper shows that Lackey's case does not undermine the orthodox view that testimony cannot generate knowledge. It explains why Lackey's arguments to the contrary are ineffective: they misunderstand the intuitive rationale for the view that testimony cannot generate knowledge. The paper then elaborates on a version of the case from Graham's paper 'Conveying Information' (the Fossil case) that effectively shows that testimony can generate knowledge. It then provides a deeper, informative explanation for how it is that testimony transfers knowledge, and why there should be cases where testimony generates knowledge.
This paper is a study of higher-order contingentism – the view, roughly, that it is contingent what properties and propositions there are. We explore the motivations for this view and various ways in which it might be developed, synthesizing and expanding on work by Kit Fine, Robert Stalnaker, and Timothy Williamson. Special attention is paid to the question of whether the view makes sense by its own lights, or whether articulating the view requires drawing distinctions among possibilities that, according to the view itself, do not exist to be drawn. The paper begins with a non-technical exposition of the main ideas and technical results, which can be read on its own. This exposition is followed by a formal investigation of higher-order contingentism, in which the tools of variable-domain intensional model theory are used to articulate various versions of the view, understood as theories formulated in a higher-order modal language. Our overall assessment is mixed: higher-order contingentism can be fleshed out into an elegant systematic theory, but perhaps only at the cost of abandoning some of its original motivations.
You may not know me well enough to evaluate me in terms of my moral character, but I take it you believe I can be evaluated: it sounds strange to say that I am indeterminate, neither good nor bad nor intermediate. Yet I argue that the claim that most people are indeterminate is the conclusion of a sound argument—the indeterminacy paradox—with two premises: (1) most people are fragmented (they would behave deplorably in many and admirably in many other situations); (2) fragmentation entails indeterminacy. I support (1) by examining psychological experiments in which most participants behave deplorably (e.g., by maltreating “prisoners” in a simulated prison) or admirably (e.g., by intervening in a simulated theft). I support (2) by arguing that, according to certain plausible conceptions, character evaluations presuppose behavioral consistency (lack of fragmentation). Possible reactions to the paradox include: (a) denying that the experiments are relevant to character; (b) upholding conceptions according to which character evaluations do not presuppose consistency; (c) granting that most people are indeterminate and explaining why it appears otherwise. I defend (c) against (a) and (b).
Like other accounts of disease, Christopher Boorse’s Biostatistical Theory (BST) is generally presented and considered as conceptual analysis, that is, as making claims about the meaning of currently used concepts. But conceptual analysis has been convincingly critiqued as relying on problematic assumptions about the existence, meaning, and use of concepts. Because of these problems, I argue, accounts of disease and health should be evaluated not as claims about current meaning but instead as proposals about how to define and use these terms in the future, a methodology suggested by Quine and Carnap. I begin this article by describing problems with conceptual analysis and advantages of “philosophical explication,” my favored approach. I then describe two attacks on the BST that also question the entire project of defining “disease.” Finally, I defend the BST as a philosophical explication by showing how it could define useful terms for medical science and ethics.
The term “Gettier Case” is a technical term frequently applied to a wide array of thought experiments in contemporary epistemology. What do these cases have in common? It is said that they all involve a justified true belief which, intuitively, is not knowledge, due to a form of luck called “Gettiering.” While this very broad characterization suffices for some purposes, it masks radical diversity. We argue that the extent of this diversity merits abandoning the notion of a “Gettier case” in favour of more finely grained terminology. We propose such terminology, and use it to effectively sort the myriad Gettier cases from the theoretical literature in a way that charts deep fault lines in ordinary judgments about knowledge.
In the philosophy of science, it is a common proposal that values are illegitimate in science and should be counteracted whenever they drive inquiry to the confirmation of predetermined conclusions. Drawing on recent cognitive scientific research on human reasoning and confirmation bias, I argue that this view should be rejected. Advocates of it have overlooked that values that drive inquiry to the confirmation of predetermined conclusions can contribute to the reliability of scientific inquiry at the group level even when they negatively affect an individual’s cognition. This casts doubt on the proposal that such values should always be illegitimate in science. It also suggests that advocates of that proposal assume a narrow, individualistic account of science that threatens to undermine their own project of ensuring reliable belief formation in science.
This paper argues that early modern experimental philosophy emerged as the dominant member of a pair of methods in natural philosophy, the speculative versus the experimental, and that this pairing derives from an overarching distinction between speculative and operative philosophy that can be ultimately traced back to Aristotle. The paper examines the traditional classification of natural philosophy as a speculative discipline from the Stagirite to the seventeenth century; medieval and early modern attempts to articulate a scientia experimentalis; and the tensions in the classification of natural magic and mechanics that led to the introduction of an operative part of natural philosophy in the writings of Francis Bacon and John Johnston. The paper concludes with a summary of the salient discontinuities between the experimental/speculative distinction of the mid-seventeenth century and its predecessors and a statement of the developments that led to the ascendance of experimental philosophy from the 1660s.
Imperatives cannot be true or false, so they are shunned by logicians. And yet imperatives can be combined by logical connectives: "kiss me and hug me" is the conjunction of "kiss me" with "hug me". This example may suggest that declarative and imperative logic are isomorphic: just as the conjunction of two declaratives is true exactly if both conjuncts are true, the conjunction of two imperatives is satisfied exactly if both conjuncts are satisfied—what more is there to say? Much more, I argue. "If you love me, kiss me", a conditional imperative, mixes a declarative antecedent ("you love me") with an imperative consequent ("kiss me"); it is satisfied if you love and kiss me, violated if you love but don't kiss me, and avoided if you don't love me. So we need a logic of three-valued imperatives which mixes declaratives with imperatives. I develop such a logic.
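The three values for the conditional imperative in this example can be laid out schematically (T and F mark whether the declarative antecedent holds and whether the imperative consequent is complied with):

\[
\begin{array}{cc|l}
\text{you love me} & \text{you kiss me} & \text{``If you love me, kiss me''} \\
\hline
T & T & \text{satisfied} \\
T & F & \text{violated} \\
F & T\ \text{or}\ F & \text{avoided}
\end{array}
\]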
This paper argues for the general proper functionalist view that epistemic warrant consists in the normal functioning of the belief-forming process when the process has forming true beliefs reliably as an etiological function. Such a process is reliable in normal conditions when functioning normally. This paper applies this view to so-called testimony-based beliefs. It argues that when a hearer forms a comprehension-based belief that P (a belief based on taking another to have asserted that P) through the exercise of a reliable competence to comprehend and filter assertive speech acts, then the hearer's belief is prima facie warranted. The paper discusses the psychology of comprehension, the function of assertion, and the evolution of filtering mechanisms, especially coherence checking.
This paper examines a promising probabilistic theory of singular causation developed by David Lewis. I argue that Lewis' theory must be made more sophisticated to deal with certain counterexamples involving pre-emption. These counterexamples appear to show that in the usual case singular causation requires an unbroken causal process to link cause with effect. I propose a new probabilistic account of singular causation, within the framework developed by Lewis, which captures this intuition.
Collective deliberation is fuelled by disagreements and its epistemic value depends, inter alia, on how the participants respond to each other in disagreements. I use this accountability thesis to argue that deliberation may be valued not just instrumentally but also for its procedural features. The instrumental epistemic value of deliberation depends on whether it leads to more or less accurate beliefs among the participants. The procedural epistemic value of deliberation hinges on the relationships of mutual accountability that characterize appropriately conducted deliberation. I will argue that it only comes into view from the second-person standpoint. I shall explain what the second-person standpoint in the epistemic context entails and how it compares to Stephen Darwall’s interpretation of the second-person standpoint in ethics.
A recurring theme dominates recent philosophical debates about the nature of conscious perception: naïve realism’s opponents claim that the view is directly contradicted by empirical science. I argue that, despite their current popularity, empirical arguments against naïve realism are fundamentally flawed. The non-empirical premises needed to get from empirical scientific findings to substantive philosophical conclusions are ones the naïve realist is known to reject. Even granting the contentious premises, the empirical findings do not undermine the theory, given its overall philosophical commitments. Thus, contemporary empirical research fails to supply any new argumentative force against naïve realism. I conclude that, as philosophers of mind, we would be better served spending a bit less time trying to wield empirical science as a cudgel against our opponents, and a bit more time working through the implications of each other’s views – something we can accomplish perfectly well from the comfort of our armchairs.