Philosophical debates about the metaphysics of time typically revolve around two contrasting views of time. On the A-theory, time is something that itself undergoes change, as captured by the idea of the passage of time; on the B-theory, all there is to time is events standing in before/after or simultaneity relations to each other, and these temporal relations are unchanging. Philosophers typically regard the A-theory as being supported by our experience of time, and they take it that the B-theory clashes with how we experience time and therefore faces the burden of having to explain away that clash. In this paper, we investigate empirically whether these intuitions about the experience of time are shared by the general public. We asked directly for people’s subjective reports of their experience of time—in particular, whether they believe themselves to have a phenomenology as of time’s passing—and we probed their understanding of what time’s passage in fact is. We find that a majority of participants do share the aforementioned intuitions, but interestingly a minority do not.
The authors examined cue competition effects in young children using the blicket detector paradigm, in which objects are placed either singly or in pairs on a novel machine and children must judge which objects have the causal power to make the machine work. Cue competition effects were found in a 5- to 6-year-old group but not in a 4-year-old group. Equivalent levels of forward and backward blocking were found in the former group. Children's counterfactual judgments were subsequently examined by asking whether or not the machine would have gone off in the absence of 1 of 2 objects that had been placed on it as a pair. Cue competition effects were demonstrated only in 5- to 6-year-olds using this mode of assessing causal reasoning.
An early, very preliminary edition of this book was circulated in 1962 under the title Set-theoretical Structures in Science. There are many reasons for maintaining that such structures play a role in the philosophy of science. Perhaps the best is that they provide the right setting for investigating problems of representation and invariance in any systematic part of science, past or present. Examples are easy to cite. Sophisticated analysis of the nature of representation in perception is to be found already in Plato and Aristotle. One of the great intellectual triumphs of the nineteenth century was the mechanical explanation of such familiar concepts as temperature and pressure by their representation in terms of the motion of particles. A more disturbing change of viewpoint was the realization at the beginning of the twentieth century that the separate invariant properties of space and time must be replaced by the space-time invariants of Einstein's special relativity. Another example, the focus of the longest chapter in this book, is the controversy extending over several centuries on the proper representation of probability. The six major positions on this question are critically examined. Topics covered in other chapters include an unusually detailed treatment of theoretical and experimental work on visual space, the two senses of invariance represented by weak and strong reversibility of causal processes, and the representation of hidden variables in quantum mechanics. The final chapter concentrates on different kinds of representations of language, concluding with some empirical results on brain-wave representations of words and sentences.
This book launches a sustained defense of a radical interpretation of the doctrine of the open future. Patrick Todd argues that all claims about undetermined aspects of the future are simply false.
Recently, philosophers have turned their attention to the question, not when a given agent is blameworthy for what she does, but when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:
1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be some of “one’s business”.
These conditions are often proposed both as conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
There is a longstanding argument that purports to show that divine foreknowledge is inconsistent with human freedom to do otherwise. Proponents of this argument, however, have for some time been met with the following reply: the argument posits what would have to be a mysterious non-causal constraint on freedom. In this paper, I argue that this objection is misguided – not because after all there can indeed be non-causal constraints on freedom (as in Pike, Fischer, and Hunt), but because the success of the incompatibilist’s argument does not require the real possibility of non-causal constraints on freedom. I contend that the incompatibilist’s argument is best seen as showing that, given divine foreknowledge, something makes one unfree – and that this something is most plausibly identified, not with the foreknowledge itself, but with the causally deterministic factors that would have to be in place in order for there to be infallible foreknowledge in the first place.
As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from the psychological to the physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
Joe Horton argues that partial aggregation yields unacceptable verdicts in cases with risk and multiple decisions. I begin by showing that Horton’s challenge does not depend on risk, since exactly similar arguments apply to riskless cases. The underlying conflict Horton exposes is between partial aggregation and certain principles of diachronic choice. I then provide two arguments against these diachronic principles: they conflict with intuitions about parity, prerogatives, and cyclical preferences, and they rely on an odd assumption about diachronic choice. Finally, I offer an explanation, on behalf of partial aggregation, for why these diachronic principles fail.
Many philosophers maintain that causation is to be explicated in terms of a kind of dependence between cause and effect. These “dependence” theories are opposed by “production” accounts which hold that there is some more fundamental causal “oomph”. A wide range of experimental research on everyday causal judgments seems to indicate that ordinary people operate primarily with a dependence-based notion of causation. For example, people tend to say that absences and double preventers are causes. We argue that the impression that commonsense causal discourse is largely dependence-based is the result of focusing on a very narrow class of causal verbs. Almost all of the vignette-based experimental work on causal judgment has been prosecuted using the word “cause”. But much ordinary causal discourse involves special causal verbs, such as “burn” and “crack”. We find that these verbs display a quite different pattern from the verb “cause”. For instance, for absences and double preventers (Studies 1-3), we find that while people are inclined to say that X caused Y to burn, turn, crack or start, they are less inclined to think that X burned, turned, cracked or started Y. In Study 4, we find that for chains involving a distal and proximal event, people are inclined to say that the distal event is not a special cause of the outcome, though it is a “cause” of the outcome. Together, we find a surprising double dissociation between “cause” and a stock of special causal verbs. We conclude by suggesting that much commonsense causal judgment, which heavily trades in special causal verbs, might be better captured by production-based accounts of causation.
The standard propositional exposition of necessary and sufficient conditions, as available in introductory logic texts, leads to a contradiction. It should be abolished.
Philosophers have long tried to understand scientific change in terms of a dynamics of revision within ‘theoretical frameworks,’ ‘disciplinary matrices,’ ‘scientific paradigms’ or ‘conceptual schemes.’ No one, however, has made clear precisely how one might model such a conceptual scheme, nor what form change dynamics within such a structure could be expected to take. In this paper we take some first steps in applying network theory to the issue, modeling conceptual schemes as simple networks and the dynamics of change as cascades on those networks. The results allow a new understanding of two traditional approaches—Popper and Kuhn—as well as introducing the intriguing prospect of viewing scientific change using the metaphor of self-organizing criticality.
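As a rough illustration of the modeling strategy described above, here is a minimal Python sketch of a revision cascade on a toy conceptual scheme; the network, the threshold rule, and the seed nodes are invented for illustration and are not the authors' actual model.

# Toy illustration: a conceptual scheme as a network of claims,
# with revision spreading as a simple threshold cascade.
# The graph, threshold, and seeds are illustrative only.

network = {           # adjacency list for a small conceptual scheme
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
threshold = 0.5        # a node revises once this fraction of its neighbours has revised

def cascade(seed):
    revised = {seed}
    changed = True
    while changed:
        changed = False
        for node, neighbours in network.items():
            if node in revised:
                continue
            frac = sum(n in revised for n in neighbours) / len(neighbours)
            if frac >= threshold:
                revised.add(node)
                changed = True
    return revised

print(cascade("E"))    # revision at a peripheral node stays local
print(cascade("B"))    # revision at a well-connected node sweeps the scheme

Seeding the revision at a peripheral node leaves the rest of the scheme intact, while seeding it at a well-connected node propagates through the whole network, a contrast loosely suggestive of the Popperian piecemeal revision versus Kuhnian paradigm-shift dynamics the paper discusses.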
Smith has argued that moral realism need not be threatened by apparent moral disagreement. One reason he gives is that moral debate has tended to elicit convergence in moral views. From here, he argues inductively that current disagreements will likely be resolved on the condition that each party is rational and fully informed. The best explanation for this phenomenon, Smith argues, is that there are mind-independent moral facts that humans are capable of knowing. In this paper, I seek to challenge this argument—and more recent versions of it—by arguing that historical convergence in moral views may occur for various arational reasons. If such reasons possibly result in convergence—which Smith effectively concedes—then the moral realist would require an additional a posteriori argument to establish that convergence in moral views occurred for the right reasons. Hence, Smith-style arguments, as they stand, cannot be mobilised in support of moral realism. Rather, this investigation demonstrates the necessity of a genuine history of morality for any convergence claim in support of a meta-ethical view.
In this article, I extend the feminist use of Friedrich Nietzsche’s account of memory and forgetting to consider the contemporary externalization of memory foregrounded by transgender experience. Nietzsche’s On the Genealogy of Morals argues that memory is “burnt in” to the forgetful body as a necessary part of subject-formation and the requirements of a social order. Feminist philosophers have employed Nietzsche’s account to illuminate how gender, as memory, becomes embodied. While the account of the “burnt in” repetitions of gender allows us to theorize processes of embodied identity on an individual level, analyzing gender today requires also accounting for how gender is externalized. I take up this question through the specific examples of identity documents and sex-segregated bathrooms. Returning to Nietzsche’s call to practice a resistant forgetting, I conclude by exploring the distinct strategies required to disrupt externalized memory. These strategies include contesting the use of past gender assignments in data collection and rewriting architectural reminders of gender.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvass different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
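For orientation, the Russellian analysis in play can be displayed explicitly; this is the standard textbook formalization of Russell's theory of descriptions, not notation drawn from the paper itself:
\[ \exists x\,\bigl(Kx \wedge \forall y\,(Ky \rightarrow y = x) \wedge Bx\bigr), \]
where $Kx$ reads 'x is presently King of France' and $Bx$ reads 'x is bald'. Since nothing satisfies $Kx$, the existential claim comes out false rather than truth-valueless, and bivalence is preserved; the paper's proposal is that future contingents admit of an analogous treatment.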
The first essay of Nietzsche’s On the Genealogy of Morals seeks to uncover the roots of Judeo-Christian morality, and to expose it as born from a resentful and feeble peasant class intent on taking revenge upon their aristocratic oppressors. There is a broad consensus in the secondary literature that the ‘slave revolt’ which gives birth to this morality occurs in the 1st century AD, and is propagated by the inhabitants of Roman occupied Judea. Nietzsche himself strongly suggests such a view. However, in a telling later passage from Ecce Homo, Nietzsche claims that the historical Zarathustra—a Bronze Age Iranian religious thinker—was the first to consider the opposition of Good vs. Evil; that “Zarathustra created this most fateful of errors, morality” (EH, 'Destiny', §3). However, Nietzsche does not discuss Zarathustra or Zoroastrianism in his critique of moral values and their origin. This creates a prima facie tension. If at least part of what essentially characterises 'morality' preceded Judeo-Christianity, are moral values only contingently related to the feelings of ressentiment essential to Nietzsche’s story of the slave revolt? If the answer is 'yes', then the scope of Nietzsche's critique of morality may be somewhat limited. If the answer is 'no', then we require an answer as to why Nietzsche's genealogy—if it is an exercise in ascertaining the “real history of morality” (GM, Pref: §7)—does not extend further back in history. In this paper, I explore how the extent of Nietzsche's knowledge of Zoroastrianism informs his critique of slave morality in On the Genealogy of Morals. I argue that Nietzsche views the historical Zarathustra—like Socrates and Plato—as a forerunner of ‘morality’ in his creative conception of good and evil in metaphysical terms. From here, it is argued that the proposed tension can be dissolved by viewing Judeo-Christian morality merely as the latest and paradigmatic expression of slave morality.
We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and those of the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew of a warship mistakenly shot down a civilian airliner. To support a combat commander’s moral agency, designers should strive for systems that help commanders and command teams to think and manipulate information at the level of meaning. ‘Down conversions’ of information from meaning to symbols must be adequately recovered by ‘up conversions’, and commanders must be able to check that their sensors are working and are being used correctly. Meanwhile ethicists should establish a mechanism that tracks the potential moral implications of choices in a system’s design and intended operation. Finally we highlight a gap in normative ethics, in that we have ways to deny moral agency, but not to affirm it.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Jean-Paul Sartre is often seen as the quintessential public intellectual, but this was not always the case. Until the mid-1940s he was not so well-known, even in France. Then suddenly, in a very short period of time, Sartre became an intellectual celebrity. How can we explain this remarkable transformation? The Existentialist Moment retraces Sartre’s career and provides a compelling new explanation of his meteoric rise to fame. Baert takes the reader back to the confusing and traumatic period of the Second World War and its immediate aftermath and shows how the unique political and intellectual landscape in France at this time helped to propel Sartre and existentialist philosophy to the fore. The book also explores why, from the early 1960s onwards, in France and elsewhere, the interest in Sartre and existentialism eventually waned. The Existentialist Moment ends with a bold new theory for the study of intellectuals and a provocative challenge to the widespread belief that the public intellectual is a species now on the brink of extinction.
The Hong and Page ‘diversity trumps ability’ result has been used to argue for the more general claim that a diverse set of agents is epistemically superior to a comparable group of experts. Here we extend Hong and Page’s model to landscapes of different degrees of randomness and demonstrate the sensitivity of the ‘diversity trumps ability’ result. This analysis offers a more nuanced picture of how diversity, ability, and expertise may relate. Although models of this sort can indeed be suggestive for diversity policies, we advise against interpreting such results overly broadly.
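The following compressed Python sketch conveys the structure of a Hong-Page-style simulation; the landscape size, heuristic space, group sizes, and smoothing scheme are arbitrary illustrative choices and do not reproduce the parameters of the paper's extended model.

# Compressed Hong-Page-style simulation (illustrative only).
# Agents are heuristics: tuples of step sizes used to hill-climb a circular landscape.
# A group searches in relay, each agent climbing from the best point found so far.
import random
from itertools import permutations

N = 200                                   # landscape size (arbitrary)
random.seed(0)

def make_landscape(smoothness=0):
    # smoothness = 0 gives a maximally random landscape; higher values
    # average neighbouring points, crudely reducing the degree of randomness.
    vals = [random.random() for _ in range(N)]
    for _ in range(smoothness):
        vals = [(vals[i - 1] + vals[i] + vals[(i + 1) % N]) / 3 for i in range(N)]
    return vals

def climb(agent, start, landscape):
    # Apply the agent's steps, moving whenever a step improves the current value.
    pos, improved = start, True
    while improved:
        improved = False
        for step in agent:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def group_score(group, landscape):
    # Average value reached when the group relays from every starting point.
    total = 0.0
    for start in range(N):
        pos = start
        while True:
            new = pos
            for agent in group:
                new = climb(agent, new, landscape)
            if new == pos:
                break
            pos = new
        total += landscape[pos]
    return total / N

agents = list(permutations(range(1, 13), 3))   # heuristics of three distinct step sizes
landscape = make_landscape(smoothness=0)
ranked = sorted(agents, key=lambda a: group_score([a], landscape), reverse=True)
best_group = ranked[:9]                        # the nine individually best agents
diverse_group = random.sample(agents, 9)       # nine randomly chosen agents
print("group of best agents:  ", round(group_score(best_group, landscape), 3))
print("group of random agents:", round(group_score(diverse_group, landscape), 3))

Varying the smoothness parameter here crudely mimics the manipulation at issue when the paper varies the degree of randomness of the landscape and checks whether the diverse group still outperforms the group of individually best agents.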
Originally published in 1925, C. Delisle Burns’ _The Philosophy of Labour_ attempts to lay down key aspects of labour and the working class of that time period, covering aspects such as economic obstacles, standards of living and patriotism. Burns does not draw on past philosophers or sociological thinkers of the working class and instead chooses to focus only on the attitude of the workers in factories, mines, roads, railways and other forms of manual labour. This title will be of interest to students of philosophy.
Various philosophers have long since been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle. For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim [~Will p] and the latter claim [Will ~p]. However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and by Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
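The scope distinction defended here can be put compactly (notation adapted from the abstract itself): the strict negation of 'it will be that p' is
\[ \neg\mathrm{Will}(p), \]
which is distinct from the inner-negation claim
\[ \mathrm{Will}(\neg p). \]
On the open-futurist view both $\mathrm{Will}(p)$ and $\mathrm{Will}(\neg p)$ can fail to be true together, so the pair behave as contraries rather than contradictories, while the genuine instance of the Law of Excluded Middle, $\mathrm{Will}(p) \vee \neg\mathrm{Will}(p)$, remains valid.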
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make matters worse, the Fourth Amendment—intended to limit police power and enacted to protect against unreasonable searches—has struggled to rein in new surveillance technologies since its inception. This Article examines the Supreme Court’s Fourth Amendment jurisprudence leading up to Carpenter v. United States and suggests that the Court is reinterpreting the amendment for the digital age. Still, the too-slow expansion of privacy protections raises challenging questions about racial bias, the legitimacy of police power, and ethical issues in artificial intelligence design. This Article proposes the development of an algorithmic auditing and accountability market that not only sets standards for AI development and limitations on governmental use of facial recognition but encourages collaboration between public interest technologists and regulators. Beyond the necessary changes to the technological and legal landscape, the current system of policing must also be reevaluated if hard-won civil liberties are to endure.
Conceptual Engineering alleges that philosophical problems are best treated via revising or replacing our concepts (or words). The goal here is not to defend Conceptual Engineering but rather show that it can (and should) invoke Neutralism—the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. A neutralist treatment of one form of skepticism is used as a case study and is compared with various non-neutral rivals. Along the way, a new taxonomy for paradox is proposed.
Descartes’ demon is a deceiver: the demon makes things appear to you other than as they really are. However, as Descartes famously pointed out in the Second Meditation, not all knowledge is imperilled by this kind of deception. You still know you are a thinking thing. Perhaps, though, there is a more virulent demon in epistemic hell, one from which none of our knowledge is safe. Jonathan Schaffer thinks so. The “Debasing Demon” he imagines threatens knowledge not via the truth condition on knowledge, but via the basing condition. This demon can cause any belief to seem like it’s held on a good basis, when it’s really held on a bad basis. Several recent critics (Conee, Ballantyne & Evans) grant Schaffer the possibility of such a debasing demon, and argue that the skeptical conclusion doesn’t follow. By contrast, we argue that on any plausible account of the epistemic basing relation, the “debasing demon” is impossible. Our argument for why this is so gestures, more generally, to the importance of avoiding the common trap of embracing mistaken assumptions about what it takes for a belief to be based on a reason.
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing, and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
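For reference, the path induction rule under discussion can be stated in the standard HoTT notation (this is the usual textbook formulation, not the authors' own): given a type $A$ and a family $C : \prod_{x,y:A} (x =_A y) \to \mathcal{U}$, to construct an element of $C(x, y, p)$ for all $x, y : A$ and all $p : x =_A y$, it suffices to give an element of $C(x, x, \mathrm{refl}_x)$ for each $x : A$:
\[ \mathrm{ind}_{=_A} : \prod_{C} \Bigl( \prod_{x:A} C(x, x, \mathrm{refl}_x) \Bigr) \to \prod_{x, y : A} \prod_{p : x =_A y} C(x, y, p). \]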
The best arguments for the 1/3 answer to the Sleeping Beauty problem all require that when Beauty awakes on Monday she should be uncertain what day it is. I argue that this claim should be rejected, thereby clearing the way to accept the 1/2 solution.
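For readers who want the target argument on the table, a textbook version of the thirder reasoning (not the author's own presentation) runs as follows. Beauty is woken once if the coin lands Heads (Monday) and twice if Tails (Monday and Tuesday); treating the three awakening-possibilities as deserving equal credence gives
\[ P(\mathrm{Heads\ \&\ Mon}) = P(\mathrm{Tails\ \&\ Mon}) = P(\mathrm{Tails\ \&\ Tue}) = \tfrac{1}{3}, \]
and hence $P(\mathrm{Heads}) = \tfrac{1}{3}$. The even split presupposes that, on waking on Monday, Beauty should be uncertain which day it is; rejecting that presupposition removes the support for the 1/3 answer and leaves $P(\mathrm{Heads}) = \tfrac{1}{2}$ in place.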
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
It is widely accepted that there is what has been called a non-hypocrisy norm on the appropriateness of moral blame; roughly, one has standing to blame only if one is not guilty of the very offence one seeks to criticize. Our acceptance of this norm is embodied in the common retort to criticism, “Who are you to blame me?”. But there is a paradox lurking behind this commonplace norm. If it is always inappropriate for x to blame y for a wrong that x has committed, then all cases in which x blames x (i.e. cases of self-blame) are rendered inappropriate. But it seems to be ethical common-sense that we are often, sadly, in a position (indeed, an excellent, privileged position) to blame ourselves for our own moral failings. And thus we have a paradox: a conflict between the inappropriateness of hypocritical blame, and the appropriateness of self-blame. We consider several ways of resolving the paradox, and contend none is as defensible as a position that simply accepts it: we should never blame ourselves. In defending this startling position, we defend a crucial distinction between self-blame and guilt.
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
A Cantorian argument that there is no set of all truths. There is, for the same reason, no possible world as a maximal set of propositions. And omniscience is logically impossible.
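A standard reconstruction of the Cantorian argument gestured at here (the book's own presentation may differ in detail): suppose there were a set $T$ of all truths, and consider its power set $\mathcal{P}(T)$. For each subset $S \subseteq T$ and some fixed truth $t$, either $t \in S$ or $t \notin S$, and whichever holds is itself a truth; distinct subsets yield distinct such truths, giving an injection from $\mathcal{P}(T)$ into $T$. But Cantor's theorem tells us that $|\mathcal{P}(T)| > |T|$, so no such set $T$ can exist. The same construction blocks any maximal set of propositions, and hence the treatment of possible worlds as such sets.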
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will.
It has been an open question whether or not we can define a belief revision operation that is distinct from simple belief expansion using paraconsistent logic. In this paper, we investigate the possibility of meeting the challenge of defining a belief revision operation using the resources made available by the study of dynamic epistemic logic in the presence of paraconsistent logic. We will show that it is possible to define dynamic operations of belief revision in a paraconsistent setting.
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
In this paper, I introduce a problem to the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
A scientific community can be modeled as a collection of epistemic agents attempting to answer questions, in part by communicating about their hypotheses and results. We can treat the pathways of scientific communication as a network. When we do, it becomes clear that the interaction between the structure of the network and the nature of the question under investigation affects epistemic desiderata, including accuracy and speed to community consensus. Here we build on previous work, both our own and others’, in order to get a firmer grasp on precisely which features of scientific communities interact with which features of scientific questions in order to influence epistemic outcomes.
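As a toy illustration of how network structure can interact with epistemic outcomes, the Python sketch below compares speed to consensus under simple credence averaging on a densely connected community and on a sparse ring; the updating rule and parameters are generic illustrations, not the specific models the paper builds on.

# Toy model: agents hold credences in a hypothesis and repeatedly average
# with their network neighbours; we compare how quickly a densely connected
# community and a sparse ring converge. Illustrative only.
import random

def step(credences, neighbours):
    return [
        sum(credences[j] for j in [i] + neighbours[i]) / (len(neighbours[i]) + 1)
        for i in range(len(credences))
    ]

def rounds_to_consensus(neighbours, tol=0.01, seed=1):
    random.seed(seed)
    credences = [random.random() for _ in range(len(neighbours))]
    rounds = 0
    while max(credences) - min(credences) > tol:
        credences = step(credences, neighbours)
        rounds += 1
    return rounds

n = 10
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print("complete network:", rounds_to_consensus(complete), "rounds")
print("ring network:    ", rounds_to_consensus(ring), "rounds")

The densely connected community reaches consensus almost immediately while the ring takes many rounds; the paper's point is that which structure does better overall depends not just on speed but on how the structure interacts with the nature of the question under investigation.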
In her recent paper ‘The Epistemology of Propaganda’ Rachel McKinnon discusses what she refers to as ‘TERF propaganda’. We take issue with three points in her paper. The first is her rejection of the claim that ‘TERF’ is a misogynistic slur. The second is the examples she presents as commitments of so-called ‘TERFs’, in order to establish that radical (and gender critical) feminists rely on a flawed ideology. The third is her claim that standpoint epistemology can be used to establish that such feminists are wrong to worry about a threat of male violence in relation to trans women. In Section 1 we argue that ‘TERF’ is not a merely descriptive term; that to the extent that McKinnon offers considerations in support of the claim that ‘TERF’ is not a slur, these considerations fail; and that ‘TERF’ is a slur according to several prominent accounts in the contemporary literature. In Section 2, we argue that McKinnon misrepresents the position of gender critical feminists, and in doing so fails to establish the claim that the ideology behind these positions is flawed. In Section 3 we argue that McKinnon’s criticism of Stanley fails, and one implication of this is that those she characterizes as ‘positively privileged’ cannot rely on the standpoint-relative knowledge of those she characterizes as ‘negatively privileged’. We also emphasize in this section McKinnon’s failure to understand and account for multiple axes of oppression, of which the cis/trans axis is only one.
It remains controversial whether touch is a truly spatial sense or not. Many philosophers suggest that, if touch is indeed spatial, it is only through its alliances with exploratory movement, and with proprioception. Here we develop the notion that a minimal yet important form of spatial perception may occur in purely passive touch. We do this by showing that the array of tactile receptive fields in the skin, and appropriately relayed to the cortex, may contain the same basic informational building blocks that a creature navigating around its environment uses to build up a perception of space. We illustrate this point with preliminary evidence that perception of spatiotemporal patterns on the human skin shows some of the same features as spatial navigation in animals. We argue (a) that the receptor array defines a ‘tactile field’, (b) that this field exists in a minimal form in ‘skin space’, logically prior to any transformation into bodily or external spatial coordinates, and (c) that this field supports tactile perception without integration of concurrent proprioceptive or motor information. The basic cognitive elements of space perception may begin at lower levels of neural and perceptual organisation than previously thought.
In this paper, I present some ruminations on Hume's argument from miracles and the distorted view of rationality that it reflects (along with religious skepticism generally), contrasting it with what I take to be a better account of rationality, one more sympathetic, or at least less hostile, to religious claims.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. Evidence at one point within a scientific theory can have a very different impact on the network than does evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
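A minimal sketch of the kind of structure described: a two-node network in which a theory node feeds an evidence node via a conditional probability, with Bayesian updating when the evidence is observed. The numbers are arbitrary; the paper's networks are larger and updated across the whole graph.

# Minimal two-node network: Theory -> Evidence, with Bayesian updating
# when the evidence node is observed. All numbers are illustrative.

p_theory = 0.5           # prior credence in the theory node
strong_link = 0.9        # P(evidence | theory) for a tightly connected evidence node
weak_link = 0.6          # P(evidence | theory) for a loosely connected evidence node

def update(prior, p_e_given_t, p_e_given_not_t):
    """Bayes' rule: posterior credence in the theory on observing the evidence."""
    marginal = p_e_given_t * prior + p_e_given_not_t * (1 - prior)
    return p_e_given_t * prior / marginal

print(round(update(p_theory, strong_link, 0.2), 3))  # 0.818
print(round(update(p_theory, weak_link, 0.4), 3))    # 0.6: same prior, weaker link,
                                                     # much smaller impact on the node

Even in this two-node case, evidence of the same strength shifts credence very differently depending on the conditional probabilities along the link, which is the differential-impact phenomenon the paper studies at the scale of whole theoretical structures.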
‘The problem with simulations is that they are doomed to succeed.’ So runs a common criticism of simulations—that they can be used to ‘prove’ anything and are thus of little or no scientific value. While this particular objection represents a minority view, especially among those who work with simulations in a scientific context, it raises a difficult question: what standards should we use to differentiate a simulation that fails from one that succeeds? In this paper we build on a structural analysis of simulation developed in previous work to provide an evaluative account of the variety of ways in which simulations do fail. We expand the structural analysis in terms of the relationship between a simulation and its real-world target, emphasizing the important role of aspects intended to correspond and also those specifically intended not to correspond to reality. The result is an outline both of the ways in which simulations can fail and the scientific importance of those various forms of failure.
A small consortium of philosophers has begun work on the implications of epistemic networks (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming), building on theoretical work in economics, computer science, and engineering (Bala and Goyal 1998; Kleinberg 2001; Amaral et al. 2004) and on some experimental work in social psychology (Mason, Jones, and Goldstone 2008). This paper outlines core philosophical results and extends those results to the specific question of thresholds. Epistemic maximization of certain types does show clear threshold effects. Intriguingly, however, those effects appear to be importantly independent from more familiar threshold effects in networks.
In everyday language, we readily attribute experiences to groups. For example, one might say, “Spain celebrated winning the European Cup” or “The uncovering of corruption caused the union to think long and hard about its internal structure.” In each case, the attribution makes sense. However, it is quite difficult to give a nonreductive account of precisely what these statements mean because in each case a mental state is ascribed to a group, and it is not obvious that groups can have mental states. In this article, I do not offer an explicit theory of collective experience. Instead, I draw on phenomenological analyses and empirical data in order to provide general conditions that a more specific theory of collective experience must meet in order to be coherent.
We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow.
Though my ultimate concern is with issues in epistemology and metaphysics, let me phrase the central question I will pursue in terms evocative of philosophy of religion: What are the implications of our logic, in particular of Cantor and Gödel, for the possibility of omniscience?