Contextualists and pragmatists agree that knowledge-denying sentences are contextually variable, in the sense that a knowledge-denying sentence might semantically express a false proposition in one context and a true proposition in another context, without any change in the properties traditionally viewed as necessary for knowledge. Minimalists deny both pragmatism and contextualism, and maintain that knowledge-denying sentences are not contextually variable. To defend their view from cases like DeRose and Stanley's high stakes bank case, minimalists like Patrick Rysiew, Jessica Brown, and Wayne Davis forward ‘warranted assertability maneuvers.’ The basic idea is that some knowledge-denying sentence seems contextually variable because we mistake what a speaker pragmatically conveys by uttering that sentence for what she literally says by uttering that sentence. In this paper, I raise problems for the warranted assertability maneuvers of Rysiew, Brown, and Davis, and then present a warranted assertability maneuver that should succeed if any warranted assertability maneuver will succeed. I then show how my warranted assertability maneuver fails, and how the problem with it generalizes to pragmatic responses in general. The upshot of my argument is that, in order to defend their view from cases like DeRose and Stanley's high stakes bank case, minimalists must prioritize the epistemological question whether the subjects in those cases know over linguistic questions about the pragmatics of various knowledge-denying sentences.
This book launches a sustained defense of a radical interpretation of the doctrine of the open future. Patrick Todd argues that all claims about undetermined aspects of the future are simply false.
An early, very preliminary edition of this book was circulated in 1962 under the title Set-theoretical Structures in Science. There are many reasons for maintaining that such structures play a role in the philosophy of science. Perhaps the best is that they provide the right setting for investigating problems of representation and invariance in any systematic part of science, past or present. Examples are easy to cite. Sophisticated analysis of the nature of representation in perception is to be found already in Plato and Aristotle. One of the great intellectual triumphs of the nineteenth century was the mechanical explanation of such familiar concepts as temperature and pressure by their representation in terms of the motion of particles. A more disturbing change of viewpoint was the realization at the beginning of the twentieth century that the separate invariant properties of space and time must be replaced by the space-time invariants of Einstein's special relativity. Another example, the focus of the longest chapter in this book, is the controversy, extending over several centuries, on the proper representation of probability. The six major positions on this question are critically examined. Topics covered in other chapters include an unusually detailed treatment of theoretical and experimental work on visual space, the two senses of invariance represented by weak and strong reversibility of causal processes, and the representation of hidden variables in quantum mechanics. The final chapter concentrates on different kinds of representations of language, concluding with some empirical results on brain-wave representations of words and sentences.
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
The paper takes up Bell's “Everett theory” and develops it further. The resulting theory is about the system of all particles in the universe, each located in ordinary, 3-dimensional space. This many-particle system as a whole performs random jumps through 3N-dimensional configuration space – hence “Tychistic Bohmian Mechanics”. The distribution of its spontaneous localisations in configuration space is given by the Born Rule probability measure for the universal wavefunction. Contra Bell, the theory is argued to satisfy the minimal desiderata for a Bohmian theory within the Primitive Ontology framework. TBM's formalism is that of ordinary Bohmian Mechanics, without the postulate of continuous particle trajectories and their deterministic dynamics. This “rump formalism” receives, however, a different interpretation. We defend TBM as an empirically adequate and coherent quantum theory. Objections voiced by Bell and Maudlin are rebutted. The “for all practical purposes”-classical, Everettian worlds exist sequentially in TBM. In a temporally coarse-grained sense, they quasi-persist. By contrast, the individual particles themselves cease to persist.
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion does not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
Nations are understood to have a right to go to war, not only in defense of individual rights, but in defense of their own political standing in a given territory. This paper argues that the political defensive privilege cannot be satisfactorily explained, either on liberal cosmopolitan grounds or on pluralistic grounds. In particular, it is argued that pluralistic accounts require giving implausibly strong weight to the value of political communities, overwhelming the standing of individuals. Liberal cosmopolitans, it is argued, underestimate the difficulties in disentangling a state’s role in upholding or threatening individual interests from its role in providing the social context that shapes and determines those very interests. The paper proposes an alternative theory, “prosaic statism”, which shares the individualistic assumptions of liberal cosmopolitanism, but avoids a form of fundamentalism about human rights, and is therefore less likely to recommend humanitarian intervention in non-liberal states.
According to the Unfinished Business Account, if actor p reasonably judges that performing a supererogatory act ϕ at great sacrifice to herself will enable beneficiary q to achieve a greater good, then failure to promote the good made possible by ϕ wrongs p. Elizabeth Finneron-Burns questions whether it follows that we have a duty to render the sacrifices of past (and present) people more worthwhile by preventing human extinction. This note responds to her criticisms.
It has previously been argued that Schopenhauer is a distinctive type of virtue ethicist (Hassan, 2019). The Aristotelian version of virtue ethics has traditionally been accused of being fundamentally egoistic insofar as the possession of virtues is beneficial to the possessor and serves as the ultimate justification for obtaining them. Indeed, Schopenhauer himself makes a version of this complaint. In this chapter, I investigate whether Schopenhauer’s moral framework nevertheless suffers from this same objection of egoism in light of how he conceives of the relationship between morality and ascetic 'salvation'. Drawing upon his published works and letters, I argue that Schopenhauer has the resources to avoid the objection. Because of his idiosyncratic metaphysics, I argue that Schopenhauer can also avoid the problem of self-effacement which may result from the way in which he avoids the egoism objection. The discussion thus aims to add further nuance to Schopenhauer’s conception of virtue and its value.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
One of the main characteristics of today’s democratic societies is their pluralism. As a result, liberal political philosophers often claim that the state should remain neutral with respect to different conceptions of the good. Legal and social policies should be acceptable to everyone regardless of their culture, their religion or their comprehensive moral views. One might think that this commitment to neutrality should be especially pronounced in urban centres, with their culturally diverse populations. However, there are a large number of laws and policies adopted at the municipal level that contradict the liberal principle of neutrality. In this paper, I want to suggest that these perfectionist laws and policies are legitimate at the urban level. Specifically, I will argue that the principle of neutrality applies only indirectly to social institutions within the broader framework of the nation-state. This is clear in the case of voluntary associations, but to a certain extent this rationale applies also to cities. In a liberal regime, private associations are allowed to hold and defend perfectionist views, focused on a particular conception of the good life. One problem is to determine the limits of this perfectionism at the urban level, since cities, unlike private associations, are public institutions. My aim here is therefore to give a liberal justification to a limited form of perfectionism of municipal laws and policies.
As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvass different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
Conceptual Engineering alleges that philosophical problems are best treated via revising or replacing our concepts (or words). The goal here is not to defend Conceptual Engineering but rather to show that it can (and should) invoke Neutralism—the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. A neutralist treatment of one form of skepticism is used as a case study and is compared with various non-neutral rivals. Along the way, a new taxonomy for paradox is proposed.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
One of the basic principles of the general definition of information is its rejection of dataless information, which is reflected in its endorsement of an ontological neutrality. In general, this principle states that “there can be no information without physical implementation” (Floridi 2005). Though this is standardly considered a commonsensical assumption, many questions arise with regard to its generalised application. In this paper a combined logic for data and information is elaborated, and specifically used to investigate the consequences of restricted and unrestricted data-implementation-principles.
Recently, philosophers have turned their attention to the question, not when a given agent is blameworthy for what she does, but when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:

1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be some of “one’s business”.

These conditions are often proposed as both conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
The principle of Conditional Excluded Middle has been a matter of longstanding controversy in both semantics and metaphysics. According to this principle, we are, inter alia, committed to claims like the following: If the coin had been flipped, it would have landed heads, or if the coin had been flipped, it would not have landed heads. In favour of the principle, theorists have appealed, primarily, to linguistic data such as that we tend to hear ¬(A > B) as equivalent to (A > ¬B). Williams (2010) provides one of the most compelling recent arguments along these lines by appealing to intuitive equivalencies between certain quantified conditional statements. We argue that the strategy Williams employs can be parodied to generate an argument for the unwelcome principle of Should Excluded Middle: the principle that, for any A, it either should be that A or it should be that not A. Uncovering what goes wrong with this argument casts doubt on a key premise in Williams’ argument. The way we develop this point is by defending the thesis that, like "should", "would" is a so-called neg-raising predicate. Neg-raising is the linguistic phenomenon whereby “I don’t think that Trump is a good president” strongly tends to implicate “I think that Trump is not a good president,” despite the former not semantically entailing the latter. We show how a defender of a Lewis-style semantics for counterfactuals should implement the idea that the counterfactual is a “neg-raiser”.
In response to criticism, we often say – in these or similar words – “Let’s see you do better!” Prima facie, it looks like this response is a challenge of a certain kind – a challenge to prove that one has what has recently been called standing. More generally, the data here seems to point to a certain kind of norm of criticism: be better. Slightly more carefully: One must: criticize x with respect to standard s only if one is better than x with respect to standard s. In this paper, I defend precisely this norm of criticism – an underexplored norm that is nevertheless ubiquitous in our lives, once we begin looking for it. The be better norm is, I hope to show, continuously invoked in a wide range of ordinary settings; it can undergird and explain the widely endorsed non-hypocrisy condition on the standing to blame; and apparent counterexamples to the norm are no such counterexamples at all.
It remains controversial whether touch is a truly spatial sense or not. Many philosophers suggest that, if touch is indeed spatial, it is only through its alliances with exploratory movement, and with proprioception. Here we develop the notion that a minimal yet important form of spatial perception may occur in purely passive touch. We do this by showing that the array of tactile receptive fields in the skin, and appropriately relayed to the cortex, may contain the same basic informational building blocks that a creature navigating around its environment uses to build up a perception of space. We illustrate this point with preliminary evidence that perception of spatiotemporal patterns on the human skin shows some of the same features as spatial navigation in animals. We argue (a) that the receptor array defines a ‘tactile field’, (b) that this field exists in a minimal form in ‘skin space’, logically prior to any transformation into bodily or external spatial coordinates, and (c) that this field supports tactile perception without integration of concurrent proprioceptive or motor information. The basic cognitive elements of space perception may begin at lower levels of neural and perceptual organisation than previously thought.
The Law of Non-Contradiction holds that both sides of a contradiction cannot be true. Dialetheism is the view that there are contradictions both sides of which are true. Crucial to the dispute, then, is the central notion of contradiction. My first step here is to work toward clarification of that simple and central notion: Just what is a contradiction?
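The classical position at stake here can be stated formally. The following Lean sketch (illustrative, not from the paper) renders the Law of Non-Contradiction as a theorem about an arbitrary proposition; it is this theorem, provable even in intuitionistic logic, that the dialetheist must reject:

```lean
-- The Law of Non-Contradiction: no proposition holds together with its negation.
-- In Lean, ¬p abbreviates p → False, so the proof simply applies
-- the second conjunct (a function p → False) to the first (a proof of p).
theorem no_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun h => h.2 h.1
```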
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
A small consortium of philosophers has begun work on the implications of epistemic networks (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming), building on theoretical work in economics, computer science, and engineering (Bala and Goyal 1998; Kleinberg 2001; Amaral et al. 2004) and on some experimental work in social psychology (Mason, Jones, and Goldstone 2008). This paper outlines core philosophical results and extends those results to the specific question of thresholds. Epistemic maximization of certain types does show clear threshold effects. Intriguingly, however, those effects appear to be importantly independent from more familiar threshold effects in networks.
Various philosophers have long since been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle. For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim [‘~Will p’] and the latter claim [‘Will ~p’]. However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
It is widely accepted that there is what has been called a non-hypocrisy norm on the appropriateness of moral blame; roughly, one has standing to blame only if one is not guilty of the very offence one seeks to criticize. Our acceptance of this norm is embodied in the common retort to criticism, “Who are you to blame me?”. But there is a paradox lurking behind this commonplace norm. If it is always inappropriate for x to blame y for a wrong that x has committed, then all cases in which x blames x (i.e. cases of self-blame) are rendered inappropriate. But it seems to be ethical common-sense that we are often, sadly, in a position (indeed, an excellent, privileged position) to blame ourselves for our own moral failings. And thus we have a paradox: a conflict between the inappropriateness of hypocritical blame, and the appropriateness of self-blame. We consider several ways of resolving the paradox, and contend that none is as defensible as a position that simply accepts it: we should never blame ourselves. In defending this startling position, we defend a crucial distinction between self-blame and guilt.
One set of neglected problems consists of paradoxes of omniscience clearly recognizable as forms of the Liar, and these I have never seen raised at all. Other neglected problems are difficulties for omniscience posed by recent work on belief de se and essential indexicals. These have not yet been given the attention they deserve.
The Hong and Page ‘diversity trumps ability’ result has been used to argue for the more general claim that a diverse set of agents is epistemically superior to a comparable group of experts. Here we extend Hong and Page’s model to landscapes of different degrees of randomness and demonstrate the sensitivity of the ‘diversity trumps ability’ result. This analysis offers a more nuanced picture of how diversity, ability, and expertise may relate. Although models of this sort can indeed be suggestive for diversity policies, we advise against interpreting such results overly broadly.
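The model family at issue can be sketched in a few lines. The following Python sketch follows the spirit of the Hong–Page setup — agents are bundles of step-size heuristics greedily climbing a random ring landscape, and a group searches by relay — but all function names, parameter values, and the particular comparison are illustrative assumptions, not Hong and Page's or the authors' actual code. On any single landscape either team may come out ahead; the paper's point concerns how such outcomes vary with landscape randomness.

```python
import random

def make_landscape(n, seed):
    """A ring of n random heights; agents search for high points."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def climb(landscape, heuristic, start):
    """Greedy search: repeatedly try the agent's step sizes, moving
    clockwise to the first strictly better point, until stuck."""
    n, pos = len(landscape), start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % n
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def group_search(landscape, group, start):
    """Relay search: agents take turns improving the shared solution
    until no member of the group can improve it further."""
    pos = start
    improved = True
    while improved:
        improved = False
        for h in group:
            new = climb(landscape, h, pos)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return landscape[pos]

L = make_landscape(200, seed=1)
# A pool of agents, each defined by three step sizes drawn from 1..12.
pool = [tuple(random.Random(s).sample(range(1, 13), 3)) for s in range(40)]

def solo_score(h):
    """An agent's average final height over a spread of starting points."""
    return sum(L[climb(L, h, s)] for s in range(0, 200, 10)) / 20

experts = sorted(pool, key=solo_score, reverse=True)[:8]   # best solo performers
diverse = random.Random(99).sample(pool, 8)                # a random (diverse) team

print("experts:", round(group_search(L, experts, 0), 3))
print("diverse:", round(group_search(L, diverse, 0), 3))
```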
This commentary is a reflection on a collaboration with the artist Rossella Biscotti and comments on how artistic research and logico-mathematical methods can be used to contribute to the development of critical perspectives on contemporary data practices.
In this paper I show that a variety of Cartesian Conceptions of the mental are unworkable. In particular, I offer a much weaker conception of limited discrimination than the one advanced by Williamson (2000) and show that this weaker conception, together with some plausible background assumptions, is not only able to undermine the claim that our core mental states are luminous (roughly: if one is in such a state then one is in a position to know that one is) but also the claim that introspection is infallible with respect to our core mental states (where a belief that C obtains is infallible just in case if one believes that C obtains then C obtains). The upshot is a broader and much more powerful case against the Cartesian conception of the mental than has been advanced hitherto.
The iterated Prisoner’s Dilemma has become the standard model for the evolution of cooperative behavior within a community of egoistic agents, frequently cited for implications in both sociology and biology. Due primarily to the work of Axelrod (1980a, 1980b, 1984, 1985), a strategy of tit for tat (TFT) has established a reputation as being particularly robust. Nowak and Sigmund (1992) have shown, however, that in a world of stochastic error or imperfect communication, it is not TFT that finally triumphs in an ecological model based on population percentages (Axelrod and Hamilton 1981), but ‘generous tit for tat’ (GTFT), which repays cooperation with a probability of cooperation approaching 1 but forgives defection with a probability of 1/3. In this paper, we consider a spatialized instantiation of the stochastic Prisoner’s Dilemma, using two-dimensional cellular automata (Wolfram, 1984, 1986; Gutowitz, 1990) to model the spatial dynamics of populations of competing strategies. The surprising result is that in the spatial model it is not GTFT but still more generous strategies that are favored. The optimal strategy within this spatial ecology appears to be a form of ‘bending over backwards’, which returns cooperation for defection with a probability of 2/3, a rate twice as generous as GTFT.
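The strategy space described here can be made concrete with a minimal pairwise simulation. This sketch is not the paper's spatial cellular-automaton model — it plays two reactive strategies head to head, each defined by its probability of cooperating after the opponent's cooperation or defection; the payoff values are the standard Axelrod ones (3, 0, 5, 1), and the function and strategy names are illustrative assumptions:

```python
import random

# Standard Axelrod payoffs: (my_move, their_move) -> my per-round score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(p1, p2, rounds=10000, seed=0):
    """Average per-round payoffs for two reactive strategies.

    A strategy is a pair (prob. of cooperating after opponent's C,
    prob. of cooperating after opponent's D). Both players open
    with cooperation, as in TFT-style strategies.
    """
    rng = random.Random(seed)
    m1, m2 = "C", "C"
    s1 = s2 = 0
    for _ in range(rounds):
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        n1 = "C" if rng.random() < (p1[0] if m2 == "C" else p1[1]) else "D"
        n2 = "C" if rng.random() < (p2[0] if m1 == "C" else p2[1]) else "D"
        m1, m2 = n1, n2
    return s1 / rounds, s2 / rounds

GTFT = (1.0, 1 / 3)      # forgives defection 1/3 of the time
GENEROUS = (1.0, 2 / 3)  # 'bending over backwards': forgives 2/3 of the time
ALWAYS_D = (0.0, 0.0)    # unconditional defector

print("GTFT vs defector:", play(GTFT, ALWAYS_D))
print("generous vs generous:", play(GENEROUS, GENEROUS))
```

Against an unconditional defector, the more forgiving strategy is exploited more often; the paper's claim is that in a spatial ecology of competing strategies, this extra generosity nonetheless pays.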
Epistemic Contextualism is the view that “knows that” is semantically context-sensitive and that properly accommodating this fact into our philosophical theory promises to solve various puzzles concerning knowledge. Yet Epistemic Contextualism faces a big—some would say fatal—problem: The Semantic Error Problem. In its prominent form, this runs thus: speakers just don’t seem to recognise that “knows that” is context-sensitive; so, if “knows that” really is context-sensitive then such speakers are systematically in error about what is said by, or how to evaluate, ordinary uses of “S knows that p”; but since it's wildly implausible that ordinary speakers should exhibit such systematic error, the expression “knows that” isn't context-sensitive. We are interested in whether, and in what ways, there is such semantic error; if there is such error, how it arises and is made manifest; and, again, if there is such error to what extent it is a problem for Epistemic Contextualism. The upshot is that some forms of The Semantic Error Problem turn out to be largely unproblematic. Those that remain troublesome have analogue error problems for various competitor conceptions of knowledge. So, if error is any sort of problem, then there is a problem for every extant competitor view.
Among the phenomena of near-death experiences (NDEs) are what are known as aftereffects whereby, over time, experiencers undergo substantial, long-term life changes, becoming less fearful of death, more moral and spiritual, and more convinced that life has meaning and that an afterlife exists. Some supernaturalists attribute these changes to the experience being real. John Martin Fischer and Benjamin Mitchell-Yellin, on the other hand, have asserted a naturalist thesis involving a metaphorical interpretation of NDE narratives that preserves their significance but eliminates the supernaturalist causal explanation. I argue that Fischer and Mitchell-Yellin’s psychological thesis fails as an explanation of NDEs.
In this paper, I introduce a problem to the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. Evidence at one point within a scientific theory can have a very different impact on the network than evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
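The differential-impact point can be made concrete with a toy network. The structure and all numbers below are invented for illustration, not taken from the paper: a root hypothesis H has one direct evidential consequence F and one consequence E mediated by an intermediate node A. Evidence with the same conditional probabilities moves credence in H less when it bears on H only through the intermediate node.

```python
from itertools import product

# Toy network: H -> A -> E and H -> F. All probabilities are illustrative.
P_H = 0.5
P_A_GIVEN_H = {True: 0.9, False: 0.2}  # P(A | H)
P_E_GIVEN_A = {True: 0.8, False: 0.3}  # P(E | A)
P_F_GIVEN_H = {True: 0.8, False: 0.3}  # P(F | H), same strength as E given A

def joint(h, a, e, f):
    """Joint probability of one assignment, factored along the links."""
    p = P_H if h else 1 - P_H
    p *= P_A_GIVEN_H[h] if a else 1 - P_A_GIVEN_H[h]
    p *= P_E_GIVEN_A[a] if e else 1 - P_E_GIVEN_A[a]
    p *= P_F_GIVEN_H[h] if f else 1 - P_F_GIVEN_H[h]
    return p

def posterior_H(**observed):
    """P(H | observed) by brute-force enumeration over the joint."""
    num = den = 0.0
    for h, a, e, f in product([True, False], repeat=4):
        world = {"h": h, "a": a, "e": e, "f": f}
        if any(world[k] != v for k, v in observed.items()):
            continue
        p = joint(h, a, e, f)
        den += p
        if h:
            num += p
    return num / den
```

Here `posterior_H(f=True)` ≈ 0.727 while `posterior_H(e=True)` ≈ 0.652: the same likelihoods, applied at a node one link further from H, raise the credence in H by less.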
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will.
The field of mindfulness and the emerging science of heroism have a common interest in the causes and conditions of selfless altruism, though up to this point there has been little cross-pollination. However, there is increasing evidence that mindfulness training delivers heroically relevant qualities such as increased attentional functioning, enhanced primary sensory awareness, greater conflict monitoring, increased cognitive control, reduced fear response, and an increase in loving kindness and self-sacrificing behaviors. Predicated on the notion of a “no self,” traditional mindfulness and its focus on enlightenment and selfless service may in fact be ideally suited to the development of the elusive “trait” (predictable) versus “state” (intermittent) heroic character. Interweaving observations and questions drawn from the science of heroism, the article explores the relevant theory, practices, and scientific outcomes of mindfulness. It finds that there is evidence that heroically relevant qualities are trainable with the suite of mindfulness techniques and that an enduring experience of selflessness and service of others (the enlightened hero) may well be within the grasp of the serious practitioner.
We are increasingly exposed to polarized media sources, with clear evidence that individuals choose those sources closest to their existing views. We also have a tradition of open face-to-face group discussion in town meetings, for example. There are a range of current proposals to revive the role of group meetings in democratic decision-making. Here, we build a simulation that instantiates aspects of reinforcement theory in a model of competing social influences. What can we expect in the interaction of polarized media with group interaction along the lines of town meetings? Some surprises are evident from a computational model that includes both. Deliberative group discussion can be expected to produce opinion convergence. That convergence may not, however, be a cure for extreme views polarized at opposite ends of the opinion spectrum. In a large class of cases, we show that adding the influence of group meetings in an environment of self-selected media produces not a moderate central consensus but opinion convergence at one of the extremes defined by polarized media.
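The competing influences described above can be caricatured in a few lines. This is a deliberately crude toy, not the authors' reinforcement model: every parameter name and value below is invented. Each agent is pulled both toward its self-selected (nearest) media source and toward the group mean, a stand-in for town-meeting discussion.

```python
import random

def simulate(n=50, steps=200, media=(0.0, 1.0),
             w_media=0.1, w_group=0.1, seed=3):
    """Opinions live in [0, 1]. Each step, every agent moves a fraction
    w_media toward its nearest media source (self-selection) and a
    fraction w_group toward the current group mean (discussion).
    Purely illustrative; not the paper's model."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        mean = sum(opinions) / n
        updated = []
        for x in opinions:
            source = min(media, key=lambda m: abs(x - m))  # nearest medium
            x += w_media * (source - x) + w_group * (mean - x)
            updated.append(x)
        opinions = updated
    return opinions
```

Because each update is a convex combination of the agent's opinion, its chosen medium, and the group mean, opinions stay in [0, 1]; varying the relative weights shows the tug-of-war between media-driven polarization and discussion-driven convergence that the paper examines in a far richer setting.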
Descartes claimed that he could deduce the assumptions of his Meteorology from the contents of the Discourse. Yet he actually began the Meteorology with assumptions, and the content of the Discourse does not indicate how he deduced them. We seem to be left in a precarious position. We can examine the text as it was published, independent of Descartes’ claims, which suggests that he incorporated a presumptive or hypothetical method. On the other hand, we can take Descartes’ claims as our guide and search for the epistemic foundations of the Meteorology independent of the Discourse. In this paper, I will pursue the latter route. My aim is to explain why, and how, Descartes thought that he had deduced the assumptions of the Meteorology. My interest, in this case, is solely in Descartes’ physical foundation for the Meteorology, in the physics and physiology that resulted in Descartes’ explanation. With this aim, I provide an interpretation of Descartes’ World on which many of its conclusions serve as evidence for the assumptions of the Meteorology. I provisionally conclude that Descartes thought that his World was the epistemic foundation for his Meteorology.
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing, and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
My thesis is that Descartes wrote the Discours as a plan for a universal science, as he originally entitled it. I provide an interpretation of his letters that suggests that after Descartes began drafting his Dioptrics, he started developing a system that incorporated his early treatises from the 1630s: Les Méteores, Le Monde, L’Homme, and his 1629 Traité de métaphysique. I argue against the mosaic and autobiographic interpretations that claim these were independent treatises or stages in Descartes’ life. Rather, I hold that the threat of condemnation concerning his heliocentric thesis resulted in his suppressing his larger project; instead, he published a plan outlining his ongoing system of philosophy.
We work with a large spatialized array of individuals in an environment of drifting food sources and predators. The behavior of each individual is generated by its simple neural net; individuals are capable of making one of two sounds and are capable of responding to sounds from their immediate neighbors by opening their mouths or hiding. An individual whose mouth is open in the presence of food is “fed” and gains points; an individual who fails to hide when a predator is present is “hurt” by losing points. Opening mouths, hiding, and making sounds each exact an energy cost. There is no direct evolutionary gain for acts of cooperation or “successful communication” per se. In such an environment we start with a spatialized array of neural nets with randomized weights. Using standard learning algorithms, our individuals “train up” on the behavior of successful neighbors at regular intervals. Given that simple setup, will a community of neural nets evolve a simple language for signaling the presence of food and predators? With important qualifications, the answer is “yes.” In a simple spatial environment, pursuing individualistic gains and using partial training on successful neighbors, randomized neural nets can learn to communicate. (shrink)
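The core phenomenon, signaling emerging from individual payoff-seeking alone, can be illustrated with a much simpler stand-in than the paper's spatialized neural nets: a two-state Lewis signaling game learned by Roth-Erev-style urn reinforcement. Everything below (the urn scheme, the weights, the round counts) is a conventional sketch of that simpler setup, not the paper's model.

```python
import random

def learn_signaling(rounds=20_000, seed=7):
    """One sender and one receiver face two equiprobable states
    (say, food vs. predator), two signals, and two acts (open mouth
    vs. hide). Successful coordination reinforces whatever sender and
    receiver choices produced it; nothing rewards 'communication' as
    such. Roth-Erev urn learning, a simple stand-in for the paper's
    neural-net training."""
    rng = random.Random(seed)
    sender = [[1.0, 1.0], [1.0, 1.0]]    # sender[state][signal] weights
    receiver = [[1.0, 1.0], [1.0, 1.0]]  # receiver[signal][act] weights

    def draw(weights):
        r = rng.random() * sum(weights)
        return 0 if r < weights[0] else 1

    for _ in range(rounds):
        state = rng.randrange(2)
        signal = draw(sender[state])
        act = draw(receiver[signal])
        if act == state:                  # right act for the state
            sender[state][signal] += 1.0  # reinforce what worked
            receiver[signal][act] += 1.0
    return sender, receiver

def success_rate(sender, receiver, trials=2_000, seed=8):
    """Coordination rate when each side plays its most-reinforced option."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        state = rng.randrange(2)
        signal = max((0, 1), key=lambda s: sender[state][s])
        act = max((0, 1), key=lambda a: receiver[signal][a])
        wins += act == state
    return wins / trials
```

In this stripped-down setting, reinforcement alone typically carries the pair from random babbling to a near-perfect signaling convention, the same qualitative result the paper obtains, with important qualifications, for whole communities of spatialized neural nets.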
The question of how to define doping rests not least on natural-scientific research. From a scientific perspective one could even claim that the current doping debate has its origins precisely in pharmaceutical research, since the problem of doping only arises once corresponding performance-enhancing substances and methods are available. In what follows, however, the question of defining doping is not reduced to a natural-scientific frame of reference, as is frequently the case in current definitions of doping. Rather, I will set out the specific role of natural-scientific research with regard to the definition of doping and the structural difficulties that result from it.
Commentators almost universally agree that Locke denies the possibility of thinking matter in Book IV Chapter 10 of the Essay. Further, they argue that Locke must do this in order for his proof of God’s existence in the chapter to be successful. This paper disputes these claims and develops an interpretation according to which Locke allows for the possibility that a system of matter could think (even prior to any act of superaddition on God’s part). In addition, the paper argues that this does not destroy Locke’s argument in the chapter; instead, it helps to illuminate its nature. The paper proceeds in two main stages. First, Locke denies that matter can produce thought. A distinction between two senses of “production” shows that this claim is compatible with the existence of thinking matter. Second, Locke denies that God could be a system of randomly moving particles. Most commentators take this to mean that such a system could not think. But Locke is better interpreted as denying that such a system could have the wisdom and knowledge of God.
We apply spatialized game theory and multi-agent computational modeling as philosophical tools: (1) for assessing the primary social psychological hypothesis regarding prejudice reduction, and (2) for pursuing a deeper understanding of the basic mechanisms of prejudice reduction.