This is a PDF of a Mathematica calculation that supplements the paper "Presentist Fragmentalism and Quantum Mechanics", forthcoming in Foundations of Physics. In that paper the Born rule (or at least a progenitor of it) is derived from experimental conditions on the mutual observations of two fragments. In this PDF the experimental conditions are applied to Hilbert space dimensions 3, 4, and 5. It turns out that each of these has a 1-dimensional solution space which, it is hoped, can be interpreted as the phase.
We argue that a semantics for counterfactual conditionals in terms of comparative overall similarity faces a formal limitation due to Arrow’s impossibility theorem from social choice theory. According to Lewis’s account, the truth-conditions for counterfactual conditionals are given in terms of the comparative overall similarity between possible worlds, which is in turn determined by various aspects of similarity between possible worlds. We argue that a function from aspects of similarity to overall similarity should satisfy certain plausible constraints, while Arrow’s impossibility theorem rules out that such a function satisfies all the constraints simultaneously. We argue that a way out of this impasse is to represent aspectual similarity in terms of ranking functions instead of representing it in a purely ordinal fashion. Further, we argue against the claim that the determination of overall similarity by aspects of similarity faces a difficulty in addition to the Arrovian limitation, namely the incommensurability of different aspects of similarity. The phenomena that have been cited as evidence for such incommensurability are best explained by ordinary vagueness.
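The Arrovian limitation described above can be made concrete with a toy example. Here three "aspects of similarity" each rank three possible worlds ordinally, and aggregating them by pairwise majority vote produces a cycle, so no coherent overall similarity ordering emerges. The worlds and rankings are hypothetical, chosen purely to exhibit the structure (a Condorcet cycle), not taken from the article.

```python
# Each aspect ranks worlds w1, w2, w3 from most to least similar
# to the actual world. (Purely illustrative rankings.)
aspect_rankings = [
    ["w1", "w2", "w3"],
    ["w2", "w3", "w1"],
    ["w3", "w1", "w2"],
]

def majority_prefers(a, b):
    """True iff a majority of aspects rank world a above world b."""
    votes = sum(r.index(a) < r.index(b) for r in aspect_rankings)
    return votes > len(aspect_rankings) / 2

# Each pairwise contest is decided by a clear 2-to-1 majority, yet
# the results chain into a cycle: w1 > w2, w2 > w3, and w3 > w1,
# so no overall ordering of similarity can respect all three.
for a, b in [("w1", "w2"), ("w2", "w3"), ("w3", "w1")]:
    print(f"{a} more similar than {b}: {majority_prefers(a, b)}")
```

This is why the constraints on an aggregation function cannot all be met at once in the purely ordinal setting, and why the move to ranking functions (which carry more than ordinal information) is a natural escape route.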
Physicalism, the thesis that everything is physical, is one of the most controversial problems in philosophy. Its adherents argue that there is no more important doctrine in philosophy, whilst its opponents claim that its role is greatly exaggerated. In this superb introduction to the problem Daniel Stoljar focuses on three fundamental questions: the interpretation, truth and philosophical significance of physicalism. In answering these questions he covers the following key topics: (i) a brief history of physicalism and its definitions; (ii) what a physical property is and how physicalism meets challenges from empirical sciences; (iii) ‘Hempel’s dilemma’ and the relationship between physicalism and physics; (iv) physicalism and key debates in metaphysics and philosophy of mind, such as supervenience, identity and conceivability; and (v) physicalism and causality. Additional features include chapter summaries, annotated further reading and a glossary of technical terms, making Physicalism ideal for those coming to the problem for the first time.
There are many psychic mechanisms by which people engage with their selves. We argue that an important yet hitherto neglected one is self-appraisal via meta-emotions. We discuss the intentional structure of meta-emotions and explore the phenomenology of a variety of examples. We then present a pilot study providing preliminary evidence that some facial displays may indicate the presence of meta-emotions. We conclude by arguing that meta-emotions have an important role to play in higher-order theories of psychic harmony.
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus deciding which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end.
We provide three innovations to recent debates about whether topological or “network” explanations are a species of mechanistic explanation. First, we more precisely characterize the requirement that all topological explanations are mechanistic explanations and show scientific practice to belie such a requirement. Second, we provide an account that unifies mechanistic and non-mechanistic topological explanations, thereby enriching both the mechanist and autonomist programs by highlighting when and where topological explanations are mechanistic. Third, we defend this view against some powerful mechanist objections. We conclude from this that topological explanations are autonomous from their mechanistic counterparts.
There has recently been a focus on the question of statue removalism. This concerns what to do with public history statues that honour or otherwise celebrate ethically bad historical figures. The specific wrongs of these statues have been understood in terms of derogatory speech, inapt honours, or supporting bad ideologies. In this paper I understand these bad public history statues as history, and identify a distinctive class of public history-specific wrongs. Specifically, public history plays an important identity-shaping role, and bad public history can commit a specifically ontic injustice. Understanding bad public history in terms of ontic injustice helps us not only to address bad public history statues, but also to understand the value of public history more broadly.
Moral debunking arguments are meant to show that, by realist lights, moral beliefs are not explained by moral facts, which in turn is meant to show that they lack some significant counterfactual connection to the moral facts (e.g., safety, sensitivity, reliability). The dominant, “minimalist” response to the arguments—sometimes defended under the heading of “third-factors” or “pre-established harmonies”—involves affirming that moral beliefs enjoy the relevant counterfactual connection while granting that these beliefs are not explained by the moral facts. We show that the minimalist gambit rests on a controversial thesis about epistemic priority: that explanatory concessions derive their epistemic import from what they reveal about counterfactual connections. We then challenge this epistemic priority thesis, which undermines the minimalist response to debunking arguments (in ethics and elsewhere).
Does trust play a significant role in the appreciation of art? If so, how does it operate? We argue that it does, and that the mechanics of trust operate both at a general and a particular level. After outlining the general notion of ‘art-trust’—the notion sketched is consistent with most notions of trust on the market—and considering certain objections to the model proposed, we consider specific examples to show in some detail that the experience of works of art, and the attribution of art-relevant properties or characterisations to works of art, very often involves the notion of trust; in such cases—perhaps most or even, implicitly, all—the question ‘Do I trust the artist (or art-maker)?’ is inescapable.
Rose and Schaffer (forthcoming) argue that teleological thinking has a substantial influence on folk intuitions about composition. They take this to show (i) that we should not rely on folk intuitions about composition and (ii) that we therefore should not reject theories of composition on the basis of intuitions about composition. We cast doubt on the teleological interpretation of folk judgments about composition; we show how their debunking argument can be resisted, even on the assumption that folk intuitions have a teleological source; and we argue that, even if folk intuitions about composition carry no weight, theories of composition can still be rejected on the basis of the intuitions of metaphysicians.
Supererogatory acts—good deeds “beyond the call of duty”—are a part of moral common sense, but conceptually puzzling. I propose a unified solution to three of the most infamous puzzles: the classic Paradox of Supererogation (if it’s so good, why isn’t it just obligatory?), Horton’s All or Nothing Problem, and Kamm’s Intransitivity Paradox. I conclude that supererogation makes sense if, and only if, the grounds of rightness are multi-dimensional and comparative.
In this paper, I present a general theory of topological explanations, and illustrate its fruitfulness by showing how it accounts for explanatory asymmetry. My argument is developed in three steps. In the first step, I show what it is for some topological property A to explain some physical or dynamical property B. Based on that, I derive three key criteria of successful topological explanations: a criterion concerning the facticity of topological explanations, i.e. what makes it true of a particular system; a criterion for describing counterfactual dependencies in two explanatory modes, i.e. the vertical and the horizontal; and, finally, a third perspectival one that tells us when to use the vertical and when to use the horizontal mode. In the second step, I show how this general theory of topological explanations accounts for explanatory asymmetry in both the vertical and horizontal explanatory modes. Finally, in the third step, I argue that this theory is universally applicable across biological sciences, which helps to unify essential concepts of biological networks.
Translation from German to English by Daniel Fidel Ferrer. What Does it Mean to Orient Oneself in Thinking? German title: "Was heißt: sich im Denken orientieren?" Published October 1786, Königsberg in Prussia. By Immanuel Kant (born 1724, died 1804). Translated into English by Daniel Fidel Ferrer (March 17, 2014), the day of Holi in India. From 1774 to about 1800, there were three intense philosophical and theological controversies underway in Germany, namely the Fragments Controversy, the Pantheism Controversy, and the Atheism Controversy. Kant’s essay translated here is his response to the Pantheism Controversy. During this period (1770–1800), there was also the Sturm und Drang (Storm and Stress) movement, with thinkers like Johann Hamann, Johann Herder, Friedrich Schiller, and Johann Goethe, who were against the cultural movement of the Enlightenment (Aufklärung). Kant was on the side of the Enlightenment (see his Answer to the Question: What is Enlightenment?, 1784).
This chapter provides a systematic overview of topological explanations in the philosophy of science literature. It does so by presenting an account of topological explanation that I (Kostić and Khalifa 2021; Kostić 2020a; 2020b; 2018) have developed in other publications and then comparing this account to other accounts of topological explanation. Finally, this appraisal is opinionated because it highlights some problems in alternative accounts of topological explanations, and it also outlines responses to some of the main criticisms raised by the so-called new mechanists.
Choices confront us with questions. How we act depends on our answers to those questions. So the way our beliefs guide our choices is not just a function of their informational content, but also depends systematically on the questions those beliefs address. This paper gives a precise account of the interplay between choices, questions and beliefs, and harnesses this account to obtain a principled approach to the problem of deduction. The result is a novel theory of belief-guided action that explains and predicts the decisions of agents who, like ourselves, fail to be logically omniscient: that is, of agents whose beliefs may not be deductively closed, or even consistent. (Winner of the 2021 Isaac Levi Prize.)
Proponents of ontic conceptions of explanation require all explanations to be backed by causal, constitutive, or similar relations. Among their justifications is that only ontic conceptions can do justice to the ‘directionality’ of explanation, i.e., the requirement that if X explains Y, then not-Y does not explain not-X. Using topological explanations as an illustration, we argue that non-ontic conceptions of explanation have ample resources for securing the directionality of explanations. The different ways in which neuroscientists rely on multiplexes involving both functional and anatomical connectivity in their topological explanations vividly illustrate why ontic considerations are frequently (if not always) irrelevant to explanatory directionality. Therefore, directionality poses no problem to non-ontic conceptions of explanation.
Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices.

We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.
This collection of essays explores the metaphysical thesis that the living world is not made up of substantial particles or things, as has often been assumed, but is rather constituted by processes. The biological domain is organised as an interdependent hierarchy of processes, which are stabilised and actively maintained at different timescales. Even entities that intuitively appear to be paradigms of things, such as organisms, are actually better understood as processes. Unlike previous attempts to articulate processual views of biology, which have tended to use Alfred North Whitehead’s panpsychist metaphysics as a foundation, this book takes a naturalistic approach to metaphysics. It submits that the main motivations for replacing an ontology of substances with one of processes are to be found in the empirical findings of science. Biology provides compelling reasons for thinking that the living realm is fundamentally dynamic, and that the existence of things is always conditional on the existence of processes. The phenomenon of life cries out for theories that prioritise processes over things, and it suggests that the central explanandum of biology is not change but rather stability, or more precisely, stability attained through constant change. This edited volume brings together philosophers of science and metaphysicians interested in exploring the consequences of a processual philosophy of biology. The contributors draw on an extremely wide range of biological case studies, and employ a process perspective to cast new light on a number of traditional philosophical problems, such as identity, persistence, and individuality.
There are at least two threads in our thought and talk about rationality, both practical and theoretical. In one sense, to be rational is to respond correctly to the reasons one has. Call this substantive rationality. In another sense, to be rational is to be coherent, or to have the right structural relations hold between one’s mental states, independently of whether those attitudes are justified. Call this structural rationality. According to the standard view, structural rationality is associated with a distinctive set of requirements that mandate or prohibit certain combinations of attitudes, and it’s in virtue of violating these requirements that incoherent agents are irrational. I think the standard view is mistaken. The goal of this paper is to explain why, and to motivate an alternative account: rather than corresponding to a set of law-like requirements, structural rationality should be seen as corresponding to a distinctive kind of pro tanto rational pressure—i.e. something that comes in degrees, having both magnitude and direction. Something similar is standardly assumed to be true of substantive rationality. On the resulting picture, each dimension of rational evaluation is associated with a distinct kind of rational pressure—substantive rationality with (what I call) justificatory pressure and structural rationality with attitudinal pressure. The former is generated by one’s reasons while the latter is generated by one’s attitudes. Requirements turn out to be at best a footnote in the theory of rationality.
Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but its deeper, more insidious harm is its challenge to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.
A central debate in philosophy of race is between eliminativists and conservationists about what we ought to do with ‘race’ talk. ‘Eliminativism’ is often defined such that it’s committed to holding that (a) ‘race’ is vacuous and races don’t exist, so (b) we should eliminate the term ‘race’ from our vocabulary. As a stipulative definition, that’s fine. But as an account of one of the main theoretical options in the debate, it’s a serious mistake. I offer three arguments for why eliminativism should not be tethered to vacuity or error theory, and three arguments for why the view shouldn’t be understood in terms of eliminating the term ‘race’ from our vocabulary. Instead, I propose we understand the debate as concerning whether certain uses of ordinary race terms are typically wrong. This proposal is quite simple, and naturally suggested by the common gloss that eliminativism about ‘race’ is akin to a commonsensical view about ‘witch’ talk. But nonetheless, I argue that it offers a significant recharacterization of this core debate in philosophy of race.
Philosophical accounts of humour standardly explain humour in terms of what happens within a person. On these internalist accounts, humour is to be understood in terms of cognition, perception, and sensation. These accounts, while valuable, are poorly situated to engage with the social functions of humour. They have difficulty explaining why we value humour, why we use it to define ourselves and our friendships, and why it may be essential to our self-esteem. In opposition to these internal accounts, I offer a social account of humour. This account approaches humour as a social practice. It foregrounds laughter and participation, and thereby gives an account of humour that helps to understand why we value humour, why we use it as we do, and why we use it to define our relationship to the world.
This article shows that a slight variation of the argument in Milne 1996 yields the log-likelihood ratio l rather than the log-ratio measure r as “the one true measure of confirmation.”
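For reference, with p a probability measure, the two measures contrasted here are standardly written as follows (this is the usual notation of the confirmation-measure literature, not quoted from the article itself):

```latex
r(H, E) = \log \frac{p(H \mid E)}{p(H)}, \qquad
l(H, E) = \log \frac{p(E \mid H)}{p(E \mid \lnot H)}
```

Both measures are positive exactly when E raises the probability of H; they differ in how they scale that support, which is what the argument over "the one true measure" turns on.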
Consequentialists say we may always promote the good. Deontologists object: not if that means killing one to save five. “Consequentializers” reply: this act is wrong, but it is not for the best, since killing is worse than letting die. I argue that this reply undercuts the “compellingness” of consequentialism, which comes from an outcome-based view of action that collapses the distinction between killing and letting die.
Our primary aim in this paper is to sketch a cognitive evolutionary approach for developing explanations of social change that is anchored on the psychological mechanisms underlying normative cognition and the transmission of social norms. We throw the relevant features of this approach into relief by comparing it with the self-fulfilling social expectations account developed by Bicchieri and colleagues. After describing both accounts, we argue that the two approaches are largely compatible, but that the cognitive evolutionary approach is well-suited to encompass much of the social expectations view, whose focus on a narrow range of norms comes at the expense of the breadth the cognitive evolutionary approach can provide.
I present two puzzles about the metaphysics of stores, restaurants, and other such establishments. I defend a solution to the puzzles, according to which establishments are not material objects and are not constituted by the buildings that they occupy.
We defend Uniqueness, the claim that given a body of total evidence, there is a unique doxastic state that it is rational for one to be in. Epistemic rationality doesn't give you any leeway in forming your beliefs. To this end, we bring in two metaepistemological pictures about the roles played by rational evaluations. Rational evaluative terms serve to guide our practices of deference to the opinions of others, and also to help us formulate contingency plans about what to believe in various situations. We argue that Uniqueness vindicates these two roles for rational evaluations, while Permissivism clashes with them.
Nihilism is the thesis that no composite objects exist. Some ontologists have advocated abandoning nihilism in favor of deep nihilism, the thesis that composites do not exist_O, where to exist_O is to be in the domain of the most fundamental quantifier. By shifting from an existential to an existential_O thesis, the deep nihilist seems to secure all the benefits of a composite-free ontology without running afoul of ordinary belief in the existence of composites. I argue that, while there are well-known reasons for accepting nihilism, there appears to be no reason at all to accept deep nihilism. In particular, deep nihilism draws no support either from the usual arguments for nihilism or from considerations of parsimony.
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, B. C., 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing Scientific Theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, C. G., 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in §87 of his Logical Foundations of Probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification that any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
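The conflict between the two values can be sketched numerically. The specific indicators used below — posterior probability P(H|E) as the truth (plausibility) indicator and P(not-H|not-E) as the strength (informativeness) indicator — are one illustrative Bayesian choice, assumed for the example rather than quoted from the paper, and the numbers are made up.

```python
def indicators(p_h, p_e_given_h, p_e_given_not_h):
    """Return (plausibility, informativeness) for hypothesis H on
    evidence E, given a prior for H and the two likelihoods of E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    plausibility = p_e_given_h * p_h / p_e            # P(H | E)
    informativeness = ((1 - p_e_given_not_h) * (1 - p_h)
                       / (1 - p_e))                   # P(not-H | not-E)
    return plausibility, informativeness

# With the same likelihoods, a logically stronger theory (lower
# prior) scores worse on plausibility but better on informativeness
# -- the decreasing/increasing conflict the abstract describes.
strong = indicators(0.2, 0.8, 0.3)
weak = indicators(0.6, 0.8, 0.3)
print("strong theory:", strong)
print("weak theory:  ", weak)
```

Any particular assessment rule then has to weigh these two components against each other, which is the role of the weighing condition (3) above.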
Degrees of belief are familiar to all of us. Our confidence in the truth of some propositions is higher than our confidence in the truth of other propositions. We are pretty confident that our computers will boot when we push their power button, but we are much more confident that the sun will rise tomorrow. Degrees of belief formally represent the strength with which we believe the truth of various propositions. The higher an agent’s degree of belief for a particular proposition, the higher her confidence in the truth of that proposition. For instance, Sophia’s degree of belief that it will be sunny in Vienna tomorrow might be .52, whereas her degree of belief that the train will leave on time might be .23. The precise meaning of these statements depends, of course, on the underlying theory of degrees of belief. These theories offer a formal tool to measure degrees of belief, to investigate the relations between various degrees of belief in different propositions, and to normatively evaluate degrees of belief.
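On the best-known such theory, degrees of belief are probabilities and are updated by conditionalization. A minimal sketch, reusing the Sophia example above with made-up likelihoods (the forecast scenario and its numbers are hypothetical, for illustration only):

```python
def conditionalize(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Return P(H | E) via Bayes' theorem, given a prior for H and
    the likelihoods of the evidence E under H and under not-H."""
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Sophia's prior degree of belief that it will be sunny in Vienna:
prior = 0.52
# Suppose a sunny forecast E is much likelier given sun than not:
posterior = conditionalize(prior, 0.9, 0.3)
print(round(posterior, 3))  # prints 0.765
```

The posterior is again a probability, so repeated updating keeps the agent's degrees of belief coherent in the sense these theories require.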
Popular discussions of faith often assume that having faith is a form of believing on insufficient evidence and that having faith is therefore in some way rationally defective. Here I offer a characterization of action-centered faith and show that action-centered faith can be both epistemically and practically rational even under a wide variety of subpar evidential circumstances.
Deontologists believe in two key exceptions to the duty to promote the good: restrictions forbid us from harming others, and prerogatives permit us not to harm ourselves. How are restrictions and prerogatives related? A promising answer is that they share a source in rights. I argue that prerogatives cannot be grounded in familiar kinds of rights, only in something much stranger: waivable rights against oneself.
The distinction between objective and subjective reasons plays an important role in both folk normative thought and many research programs in metaethics. But the relation between objective and subjective reasons is unclear. This paper explores problems related to the unity of objective and subjective reasons for actions and attitudes and then offers a novel objectivist account of subjective reasons.
“X-Firsters” hold that there is some normative feature that is fundamental to all others (and, often, that there’s some normative feature that is the “mark of the normative”: all other normative properties have it, and are normative in virtue of having it). This view is taken as a starting point in the debate about which X is “on first.” Little has been said about whether or why we should be X-Firsters, or what we should think about normativity if we aren’t X-Firsters. Hence the chapter’s two main goals. First, to provide a simple argument that one shouldn’t be an X-Firster about the normative domain, which starts with the observation that analogous views have dubious merits in analogous domains. Second, to offer an alternative view—taking normativity to be a determinable explained in terms of its determinates—that offers an interesting way to think about the structure and unity of normativity.
This is a contribution to a symposium on Amie Thomasson’s Ontology Made Easy (2015). Thomasson defends two deflationary theses: that philosophical questions about the existence of numbers, tables, properties, and other disputed entities can all easily be answered, and that there is something wrong with prolonged debates about whether such objects exist. I argue that the first thesis (properly understood) does not by itself entail the second. Rather, the case for deflationary metaontology rests largely on a controversial doctrine about the possible meanings of ‘object’. I challenge Thomasson's argument for that doctrine, and I make a positive case for the availability of the contested, unrestricted use of ‘object’.
It is commonly held that p is a reason for A to ϕ only if p explains why A ought to ϕ. I argue that this view must be rejected because there are reasons for A to ϕ that would be redundant in any ex...
The concept of mechanism in biology has three distinct meanings. It may refer to a philosophical thesis about the nature of life and biology (‘mechanicism’), to the internal workings of a machine-like structure (‘machine mechanism’), or to the causal explanation of a particular phenomenon (‘causal mechanism’). In this paper I trace the conceptual evolution of ‘mechanism’ in the history of biology, and I examine how the three meanings of this term have come to be featured in the philosophy of biology, situating the new ‘mechanismic program’ in this context. I argue that the leading advocates of the mechanismic program (e.g., Craver, Darden, and Bechtel) inadvertently conflate the different senses of ‘mechanism’. Specifically, they all inappropriately endow causal mechanisms with the ontic status of machine mechanisms, and this invariably results in problematic accounts of the role played by mechanism-talk in scientific practice. I suggest that for effective analyses of the concept of mechanism, causal mechanisms need to be distinguished from machine mechanisms, and the new mechanismic program in the philosophy of biology needs to be demarcated from the traditional concerns of mechanistic biology.
One challenge to the rationality of religious commitment has it that faith is unreasonable because it involves believing on insufficient evidence. However, this challenge and influential attempts to reply depend on assumptions about what it is to have faith that are open to question. I distinguish between three conceptions of faith, each of which can claim some plausible grounding in the Judaeo-Christian tradition. Questions about the rationality or justification of religious commitment and the extent of compatibility with doubt look different on accounts of faith in which trust or hope, rather than belief, are the primary basis for the commitments. On such accounts, while the person of faith has a stake in the truth of the content, practical as well as epistemic considerations can legitimately figure in normative appraisals. Trust and hope can be appropriate in situations of recognized risk, need not involve self-deception, and are compatible with the idea that one's purely epistemic opinions should be responsive only to evidence.
MCKAUGHAN DOI: https://doi.org/10.1017/S0034412512000200Your Kindle email address Please provide your Kindle [email protected]@kindle.com Available formats PDF Please select a format to send. By using this service, you agree that you will only keep articles for personal use, and will not openly distribute them via Dropbox, Google Drive or other file sharing services. Please confirm that you accept the terms of use. Cancel Send ×Send article to Dropbox To send this article to your Dropbox account, please select one or more formats and confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about sending content to Dropbox. Authentic faith and acknowledged risk: dissolving the problem of faith and reasonVolume 49, Issue 1DANIEL J. MCKAUGHAN DOI: https://doi.org/10.1017/S0034412512000200Available formats PDF Please select a format to send. By using this service, you agree that you will only keep articles for personal use, and will not openly distribute them via Dropbox, Google Drive or other file sharing services. Please confirm that you accept the terms of use. Cancel Send ×Send article to Google Drive To send this article to your Google Drive account, please select one or more formats and confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about sending content to Google Drive. Authentic faith and acknowledged risk: dissolving the problem of faith and reasonVolume 49, Issue 1DANIEL J. MCKAUGHAN DOI: https://doi.org/10.1017/S0034412512000200Available formats PDF Please select a format to send. By using this service, you agree that you will only keep articles for personal use, and will not openly distribute them via Dropbox, Google Drive or other file sharing services. 
Please confirm that you accept the terms of use. Cancel Send ×Export citation Request permission. (shrink)
Dry earth seems to its inhabitants (our intrinsic duplicates) just as earth seems to us, that is, it seems to them as though there are rivers and lakes and a clear, odorless liquid flowing from their faucets. But, in fact, this is an illusion; there is no such liquid anywhere on the planet. I address two objections to externalism concerning the nature of the concept that is expressed by the word 'water' in the mouths of the inhabitants of dry earth. Gabriel Segal presents a dilemma for the externalist concerning the application conditions of the concept, and Paul Boghossian presents a dilemma for the externalist concerning the complexity of the concept. I show that, in both cases, the externalist may occupy the horn of his choice without departing from either the letter or spirit of externalism.
Well-being measurements are frequently used to support conclusions about a range of philosophically important issues. This is a problem, because we know too little about the intervals of the relevant scales. I argue that it is plausible that well-being measurements are non-linear, and that common beliefs that they are linear are not truth-tracking, so we are not justified in believing that well-being scales are linear. I then argue that this undermines common appeals to both hypothetical and actual well-being measurements; I first focus on the philosophical literature on prioritarianism and then discuss Kahneman’s Peak-End Rule as a systematic bias. Finally, I discuss general implications for research on well-being, and suggest a better way of representing scales.
This paper explores the role of generics in social cognition. First, we explore the nature and effects of the most common form of generics about social kinds. Second, we discuss the nature and effects of a less common but equally important form of generics about social kinds. Finally, we consider the implications of this discussion for how we ought to use language about the social world.
There are plenty of classic paradoxes about conditional obligations, like the duty to be gentle if one is to murder, and about “supererogatory” deeds beyond the call of duty. But little has been said about the intersection of these topics. We develop the first general account of conditional supererogation, with the power to solve familiar puzzles as well as several that we introduce. Our account, moreover, flows from two familiar ideas: that conditionals restrict quantification and that supererogation emerges from a clash between justifying and requiring reasons.
The controversy over the old ideal of “value-free science” has cooled significantly over the past decade. Many philosophers of science now agree that even ethical and political values may play a substantial role in all aspects of scientific inquiry. Consequently, in the last few years, work in science and values has become more specific: Which values may influence science, and in which ways? Or, how do we distinguish legitimate from illegitimate kinds of influence? In this paper, I argue that this problem requires philosophers of science to take a new direction. I present two case studies in the influence of values on scientific inquiry: feminist values in archaeology and commercial values in pharmaceutical research. I offer a preliminary assessment of these cases: that the influence of values was legitimate in the feminist case, but not in the pharmaceutical case. I then turn to three major approaches to distinguishing legitimate from illegitimate influences of values, including the distinction between epistemic and non-epistemic values and Heather Douglas’s distinction between direct and indirect roles for values. I argue that none of these three approaches gives an adequate analysis of the two cases. In the concluding section, I briefly sketch my own approach, which draws more heavily on ethics than the others and is more promising as a solution to the current problem. This is the new direction in which I think science and values should move.
Public discussions of political and social issues are often characterized by deep and persistent polarization. In social psychology, it’s standard to treat belief polarization as the product of epistemic irrationality. In contrast, we argue that the persistent disagreement that grounds political and social polarization can be produced by epistemically rational agents, when those agents have limited cognitive resources. Using an agent-based model of group deliberation, we show that groups of deliberating agents using coherence-based strategies for managing their limited resources tend to polarize into different subgroups. We argue that using that strategy is epistemically rational for limited agents. So even though group polarization looks like it must be the product of human irrationality, polarization can be the result of fully rational deliberation with natural human limitations.
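The dynamic the abstract describes can be illustrated with a minimal bounded-confidence simulation in the spirit of Hegselmann–Krause models. This is a sketch, not the authors' actual model: every function name and parameter below is hypothetical. Each agent holds a credence in [0, 1] and, to economize on cognitive resources, updates only on testimony that coheres with its current view (i.e., lies within a fixed threshold). Although every agent follows the same rule, the population settles into separated subgroups.

```python
import random

def step(beliefs, threshold):
    # Each agent averages only the beliefs within `threshold` of its own:
    # a coherence-based filter that ignores discordant testimony.
    new = []
    for b in beliefs:
        peers = [x for x in beliefs if abs(x - b) <= threshold]
        new.append(sum(peers) / len(peers))
    return new

def simulate(n=40, threshold=0.2, rounds=50, seed=0):
    # Random initial credences, then repeated rounds of filtered averaging.
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n)]
    for _ in range(rounds):
        beliefs = step(beliefs, threshold)
    return beliefs

def clusters(beliefs, eps=1e-3):
    # Count distinct opinion clusters (values more than eps apart).
    reps = []
    for b in sorted(beliefs):
        if not reps or abs(b - reps[-1]) > eps:
            reps.append(b)
    return len(reps)

if __name__ == "__main__":
    final = simulate()
    print(clusters(final))
```

With a narrow coherence threshold the 40 agents collapse into a handful of internally unanimous but mutually distant subgroups; raising the threshold to 1.0 (unlimited attention) instead produces full consensus after one round of averaging.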
This paper presents and defends an argument that the continuum hypothesis is false, based on considerations about objective chance and an old theorem due to Banach and Kuratowski. More specifically, I argue that the probabilistic inductive methods standardly used in science presuppose that every proposition about the outcome of a chancy process has a certain chance between 0 and 1. I also argue in favour of the standard view that chances are countably additive. Since it is possible to randomly pick out a point on a continuum, for instance using a roulette wheel or by flipping a countable infinity of fair coins, it follows, given the axioms of ZFC, that there are many different cardinalities between countable infinity and the cardinality of the continuum.
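On one natural reading of the abstract (the precise formulation is the paper's, not mine), the core step uses the Banach–Kuratowski theorem (1929): assuming the continuum hypothesis, there is no countably additive function $m : \mathcal{P}([0,1]) \to [0,1]$ with $m([0,1]) = 1$ that vanishes on every singleton. The refutation of CH then runs by contraposition:

```latex
\begin{enumerate}
  \item A uniform chance process (e.g.\ a fair roulette wheel) assigns a chance
        $\mathrm{ch}(A) \in [0,1]$ to \emph{every} proposition $A$ about its
        outcome, i.e.\ to every subset $A \subseteq [0,1]$.
  \item Chances are countably additive:
        $\mathrm{ch}\bigl(\bigcup_{n} A_n\bigr) = \sum_{n} \mathrm{ch}(A_n)$
        for pairwise disjoint $A_1, A_2, \ldots$
  \item On a continuum of outcomes, each single point has chance zero:
        $\mathrm{ch}(\{x\}) = 0$ for all $x \in [0,1]$.
  \item By the Banach--Kuratowski theorem, no such $\mathrm{ch}$ exists if the
        continuum hypothesis holds. Hence, given ZFC, the continuum hypothesis
        is false.
\end{enumerate}
```

The abstract's stronger conclusion, that there are many intermediate cardinalities, goes beyond this basic step; the sketch above covers only the refutation of CH.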