In this paper, we introduce and defend the recurrent model for understanding bodily spatial phenomenology. While Longo, Azañón and Haggard (2010) propose a bottom-up model, Bermúdez (2017) emphasizes the top-down aspect of the information processing loop. We argue that both are only half of the story. Section 1 introduces what the issues are. Section 2 starts by explaining why the top-down, descending direction is necessary with the illustration from the ‘body-based tactile rescaling’ paradigm (de Vignemont, Ehrsson and Haggard, 2005). It then argues that the bottom-up, ascending direction is also necessary, and substantiates this view with recent research on skin space and tactile field (Haggard et al., 2017). Section 3 discusses the model’s application to body ownership and bodily self-representation. Implications also extend to topics such as sense modality individuation (Macpherson, 2011), the constancy-based view of perception (Burge, 2010), and the perception/cognition divide (Firestone and Scholl, 2016).
It remains controversial whether touch is a truly spatial sense or not. Many philosophers suggest that, if touch is indeed spatial, it is only through its alliances with exploratory movement, and with proprioception. Here we develop the notion that a minimal yet important form of spatial perception may occur in purely passive touch. We do this by showing that the array of tactile receptive fields in the skin, as appropriately relayed to the cortex, may contain the same basic informational building blocks that a creature navigating around its environment uses to build up a perception of space. We illustrate this point with preliminary evidence that perception of spatiotemporal patterns on the human skin shows some of the same features as spatial navigation in animals. We argue (a) that the receptor array defines a ‘tactile field’, (b) that this field exists in a minimal form in ‘skin space’, logically prior to any transformation into bodily or external spatial coordinates, and (c) that this field supports tactile perception without integration of concurrent proprioceptive or motor information. The basic cognitive elements of space perception may begin at lower levels of neural and perceptual organisation than previously thought.
Our perception of where touch occurs on our skin shapes our interactions with the world. Most accounts of cutaneous localisation emphasise spatial transformations from a skin-based reference frame into body-centred and external egocentric coordinates. We investigated another possible method of tactile localisation based on an intrinsic perception of ‘skin space’. The arrangement of cutaneous receptive fields (RFs) could allow one to track a stimulus as it moves across the skin, similarly to the way animals navigate using path integration. We applied curved tactile motions to the hands of human volunteers. Participants identified the location midway between the start and end points of each motion path. Their bisection judgements were systematically biased towards the integrated motion path, consistent with the characteristic inward error that occurs in navigation by path integration. We thus showed that integration of continuous sensory inputs across several tactile RFs provides an intrinsic mechanism for spatial perception.
What might early Buddhist teachings offer neuroscience and how might neuroscience inform contemporary Buddhism? Both early Buddhist teachings and cognitive neuroscience suggest that the conditioning of our cognitive apparatus and brain plays a role in agency that may be either efficacious or non-efficacious. Both consider internal time to play a central role in the efficacy of agency. Buddhism offers an approach that promises to increase the efficacy of agency. This approach is found in five early Buddhist teachings that are re-interpreted here with a view to explaining how they might be understood as a dynamic basis for ‘participatory will’ in the context of existing free will debates and the neuroscientific work of Patrick Haggard and colleagues. These perspectives offer Buddhism and neuroscience a basis for informing each other as the shared themes of: (1) cognition is dynamic and complex/aggregate based, (2) being dynamic, cognition lacks a fixed basis of efficacy, and (3) efficacy of cognition may be achieved by an understanding of the concept of dynamic: as harmony and efficiency and by means of Buddha-warranted processes that involve internal time.
An early, very preliminary edition of this book was circulated in 1962 under the title Set-theoretical Structures in Science. There are many reasons for maintaining that such structures play a role in the philosophy of science. Perhaps the best is that they provide the right setting for investigating problems of representation and invariance in any systematic part of science, past or present. Examples are easy to cite. Sophisticated analysis of the nature of representation in perception is to be found already in Plato and Aristotle. One of the great intellectual triumphs of the nineteenth century was the mechanical explanation of such familiar concepts as temperature and pressure by their representation in terms of the motion of particles. A more disturbing change of viewpoint was the realization at the beginning of the twentieth century that the separate invariant properties of space and time must be replaced by the space-time invariants of Einstein's special relativity. Another example, the focus of the longest chapter in this book, is controversy extending over several centuries on the proper representation of probability. The six major positions on this question are critically examined. Topics covered in other chapters include an unusually detailed treatment of theoretical and experimental work on visual space, the two senses of invariance represented by weak and strong reversibility of causal processes, and the representation of hidden variables in quantum mechanics. The final chapter concentrates on different kinds of representations of language, concluding with some empirical results on brain-wave representations of words and sentences.
Recent scholarship in intellectual humility (IH) has attempted to provide deeper understanding of the virtue as personality trait and its impact on an individual's thoughts, beliefs, and actions. A limitations-owning perspective of IH focuses on a proper recognition of the impact of intellectual limitations and a motivation to overcome them, placing it as the mean between intellectual arrogance and intellectual servility. We developed the Limitations-Owning Intellectual Humility Scale to assess this conception of IH with related personality constructs. In Studies 1 (n = 386) and 2 (n = 296), principal factor and confirmatory factor analyses revealed a three-factor model – owning one's intellectual limitations, appropriate discomfort with intellectual limitations, and love of learning. Study 3 (n = 322) demonstrated strong test-retest reliability of the measure over 5 months, while Study 4 (n = 612) revealed limitations-owning IH correlated negatively with dogmatism, closed-mindedness, and hubristic pride and positively with openness, assertiveness, and authentic pride. It also predicted openness and closed-mindedness over and above education, social desirability, and other measures of IH. The limitations-owning understanding of IH and scale allow for a more nuanced, spectrum interpretation and measurement of the virtue, which directs future study inside and outside of psychology.
This book launches a sustained defense of a radical interpretation of the doctrine of the open future. Patrick Todd argues that all claims about undetermined aspects of the future are simply false.
Recently, philosophers have turned their attention to the question, not of when a given agent is blameworthy for what she does, but of when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:

1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be, in some sense, “one’s business”.

These conditions are often proposed both as conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvas different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
Various philosophers have long been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle. For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim (‘~Will p’) and the latter claim (‘Will ~p’). However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees the two claims as contraries, not contradictories.
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
The Hong and Page ‘diversity trumps ability’ result has been used to argue for the more general claim that a diverse set of agents is epistemically superior to a comparable group of experts. Here we extend Hong and Page’s model to landscapes of different degrees of randomness and demonstrate the sensitivity of the ‘diversity trumps ability’ result. This analysis offers a more nuanced picture of how diversity, ability, and expertise may relate. Although models of this sort can indeed be suggestive for diversity policies, we advise against interpreting such results overly broadly.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
It is widely accepted that there is what has been called a non-hypocrisy norm on the appropriateness of moral blame; roughly, one has standing to blame only if one is not guilty of the very offence one seeks to criticize. Our acceptance of this norm is embodied in the common retort to criticism, “Who are you to blame me?”. But there is a paradox lurking behind this commonplace norm. If it is always inappropriate for x to blame y for a wrong that x has committed, then all cases in which x blames x (i.e. cases of self-blame) are rendered inappropriate. But it seems to be ethical common-sense that we are often, sadly, in position (indeed, excellent, privileged position) to blame ourselves for our own moral failings. And thus we have a paradox: a conflict between the inappropriateness of hypocritical blame, and the appropriateness of self-blame. We consider several ways of resolving the paradox, and contend none is as defensible as a position that simply accepts it: we should never blame ourselves. In defending this startling position, we defend a crucial distinction between self-blame and guilt.
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will.
Conceptual Engineering alleges that philosophical problems are best treated via revising or replacing our concepts (or words). The goal here is not to defend Conceptual Engineering but rather show that it can (and should) invoke Neutralism—the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. A neutralist treatment of one form of skepticism is used as a case study and is compared with various non-neutral rivals. Along the way, a new taxonomy for paradox is proposed.
In this paper, I introduce a problem to the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
Descartes’ demon is a deceiver: the demon makes things appear to you other than as they really are. However, as Descartes famously pointed out in the Second Meditation, not all knowledge is imperilled by this kind of deception. You still know you are a thinking thing. Perhaps, though, there is a more virulent demon in epistemic hell, one from which none of our knowledge is safe. Jonathan Schaffer thinks so. The “Debasing Demon” he imagines threatens knowledge not via the truth condition on knowledge, but via the basing condition. This demon can cause any belief to seem like it’s held on a good basis, when it’s really held on a bad basis. Several recent critics, including Conee, and Ballantyne and Evans, grant Schaffer the possibility of such a debasing demon, and argue that the skeptical conclusion doesn’t follow. By contrast, we argue that on any plausible account of the epistemic basing relation, the “debasing demon” is impossible. Our argument for why this is so gestures, more generally, at the importance of avoiding mistaken assumptions about what it takes for a belief to be based on a reason.
There is a longstanding argument that purports to show that divine foreknowledge is inconsistent with human freedom to do otherwise. Proponents of this argument, however, have for some time been met with the following reply: the argument posits what would have to be a mysterious non-causal constraint on freedom. In this paper, I argue that this objection is misguided – not because after all there can indeed be non-causal constraints on freedom (as in Pike, Fischer, and Hunt), but because the success of the incompatibilist’s argument does not require the real possibility of non-causal constraints on freedom. I contend that the incompatibilist’s argument is best seen as showing that, given divine foreknowledge, something makes one unfree – and that this something is most plausibly identified, not with the foreknowledge itself, but with the causally deterministic factors that would have to be in place in order for there to be infallible foreknowledge in the first place.
Joe Horton argues that partial aggregation yields unacceptable verdicts in cases with risk and multiple decisions. I begin by showing that Horton’s challenge does not depend on risk, since exactly similar arguments apply to riskless cases. The underlying conflict Horton exposes is between partial aggregation and certain principles of diachronic choice. I then provide two arguments against these diachronic principles: they conflict with intuitions about parity, prerogatives, and cyclical preferences, and they rely on an odd assumption about diachronic choice. Finally, I offer an explanation, on behalf of partial aggregation, for why these diachronic principles fail.
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
In this paper I argue for an association between impurity and explanatory power in contemporary mathematics. This proposal is defended against the ancient and influential idea that purity and explanation go hand-in-hand (Aristotle, Bolzano) and recent suggestions that purity/impurity ascriptions and explanatory power are more or less distinct (Section 1). This is done by analyzing a central and deep result of additive number theory, Szemerédi’s theorem, and various of its proofs (Section 2). In particular, I focus upon the radically impure (ergodic) proof due to Furstenberg (Section 3). Furstenberg’s ergodic proof is striking because it utilizes intuitively foreign and infinitary resources to prove a finitary combinatorial result and does so in a perspicuous fashion. I claim that Furstenberg’s proof is explanatory in light of its clear expression of a crucial structural result, which provides the “reason why” Szemerédi’s theorem is true. This is, however, rather surprising: how can such intuitively different conceptual resources “get a grip on” the theorem to be proved? I account for this phenomenon by articulating a new construal of the content of a mathematical statement, which I call structural content (Section 4). I argue that the availability of structural content saves intuitive epistemic distinctions made in mathematical practice and simultaneously explicates the intervention of surprising and explanatorily rich conceptual resources. Structural content also disarms general arguments for thinking that impurity and explanatory power might come apart. Finally, I sketch a proposal that, once structural content is in hand, impure resources lead to explanatory proofs via suitably understood varieties of simplification and unification (Section 5).
It has previously been argued that Schopenhauer is a distinctive type of virtue ethicist (Hassan, 2019). The Aristotelian version of virtue ethics has traditionally been accused of being fundamentally egoistic insofar as the possession of virtues is beneficial to the possessor, and serves as the ultimate justification for obtaining them. Indeed, Schopenhauer himself makes a version of this complaint. In this chapter, I investigate whether Schopenhauer’s moral framework nevertheless suffers from this same objection of egoism in light of how he conceives of the relationship between morality and ascetic 'salvation'. Drawing upon his published works and letters, I argue that Schopenhauer has the resources to avoid the objection. Because of his idiosyncratic metaphysics, I argue that Schopenhauer can also avoid the problem of self-effacement which may result from the way in which he avoids the egoism objection. The discussion thus adds further nuance to Schopenhauer’s conception of virtue and its value.
A Cantorian argument that there is no set of all truths. There is, for the same reason, no possible world as a maximal set of propositions. And omniscience is logically impossible.
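The familiar shape of such a Cantorian argument, reconstructed here for illustration (not the author's exact formulation), runs:

```latex
% Reconstruction for illustration; not the author's exact text.
Suppose $T$ is the set of all truths. For each subset $S \subseteq T$,
the proposition $p_S$: ``every member of $S$ is true'' is itself a truth,
so $p_S \in T$; and distinct subsets yield distinct propositions, so
$S \mapsto p_S$ is an injection $\mathcal{P}(T) \to T$. But Cantor's
theorem gives $|\mathcal{P}(T)| > |T|$, so no such injection exists.
Hence there is no set of all truths.
```

The same schema applies to any maximal set of propositions, which is why the argument extends to possible worlds construed as such sets.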
The best arguments for the 1/3 answer to the Sleeping Beauty problem all require that when Beauty awakes on Monday she should be uncertain what day it is. I argue that this claim should be rejected, thereby clearing the way to accept the 1/2 solution.
A scientific community can be modeled as a collection of epistemic agents attempting to answer questions, in part by communicating about their hypotheses and results. We can treat the pathways of scientific communication as a network. When we do, it becomes clear that the interaction between the structure of the network and the nature of the question under investigation affects epistemic desiderata, including accuracy and speed to community consensus. Here we build on previous work, both our own and others’, in order to get a firmer grasp on precisely which features of scientific communities interact with which features of scientific questions in order to influence epistemic outcomes.
It has been an open question whether or not we can define a belief revision operation that is distinct from simple belief expansion using paraconsistent logic. In this paper, we investigate the possibility of meeting the challenge of defining a belief revision operation using the resources made available by the study of dynamic epistemic logic in the presence of paraconsistent logic. We will show that it is possible to define dynamic operations of belief revision in a paraconsistent setting.
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems would be a substantial departure from current technologies and theory, and remain a distant prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour and humans will retain full responsibility.
‘The problem with simulations is that they are doomed to succeed.’ So runs a common criticism of simulations—that they can be used to ‘prove’ anything and are thus of little or no scientific value. While this particular objection represents a minority view, especially among those who work with simulations in a scientific context, it raises a difficult question: what standards should we use to differentiate a simulation that fails from one that succeeds? In this paper we build on a structural analysis of simulation developed in previous work to provide an evaluative account of the variety of ways in which simulations do fail. We expand the structural analysis in terms of the relationship between a simulation and its real-world target, emphasizing the important role of aspects intended to correspond and also those specifically intended not to correspond to reality. The result is an outline both of the ways in which simulations can fail and of the scientific importance of those various forms of failure.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. Evidence at one point within a scientific theory can have a very different impact on the network than evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
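The point about position-dependent impact can be illustrated with a minimal sketch (not the authors' model; all probabilities are invented for illustration): a three-node chain H → M → E, where updating on the nearby node M shifts credence in H more than updating on the downstream node E.

```python
# Minimal sketch, assuming a chain H -> M -> E with illustrative numbers.
# Conditioning on M (one link from H) moves the credence in H more than
# conditioning on E (two links away), even though both are certain evidence.

def posterior_H_given_M(pH, pM_H, pM_notH):
    """P(H | M) by Bayes' rule across the direct link H -> M."""
    num = pM_H * pH
    return num / (num + pM_notH * (1 - pH))

def posterior_H_given_E(pH, pM_H, pM_notH, pE_M, pE_notM):
    """P(H | E): the evidence is filtered through M, so its impact is diluted."""
    pE_H = pE_M * pM_H + pE_notM * (1 - pM_H)          # marginalize over M
    pE_notH = pE_M * pM_notH + pE_notM * (1 - pM_notH)
    num = pE_H * pH
    return num / (num + pE_notH * (1 - pH))

# Illustrative priors and link strengths (assumptions, not from the paper)
pH, pM_H, pM_notH, pE_M, pE_notM = 0.5, 0.9, 0.2, 0.8, 0.3

near = posterior_H_given_M(pH, pM_H, pM_notH)
far = posterior_H_given_E(pH, pM_H, pM_notH, pE_M, pE_notM)
print(round(near, 3), round(far, 3))  # near exceeds far
```

The same machinery scales to larger directed acyclic graphs, which is what makes the network picture a natural model of theory structure.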
A small consortium of philosophers has begun work on the implications of epistemic networks (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming), building on theoretical work in economics, computer science, and engineering (Bala and Goyal 1998; Kleinberg 2001; Amaral et al. 2004) and on some experimental work in social psychology (Mason, Jones, and Goldstone 2008). This paper outlines core philosophical results and extends those results to the specific question of thresholds. Epistemic maximization of certain types does show clear threshold effects. Intriguingly, however, those effects appear to be importantly independent from more familiar threshold effects in networks.
Though my ultimate concern is with issues in epistemology and metaphysics, let me phrase the central question I will pursue in terms evocative of philosophy of religion: What are the implications of our logic, in particular of Cantor and Gödel, for the possibility of omniscience?
The Law of Non-Contradiction holds that both sides of a contradiction cannot be true. Dialetheism is the view that there are contradictions both sides of which are true. Crucial to the dispute, then, is the central notion of contradiction. My first step here is to work toward clarification of that simple and central notion: Just what is a contradiction?
Smith has argued that moral realism need not be threatened by apparent moral disagreement. One reason he gives is that moral debate has tended to elicit convergence in moral views. From here, he argues inductively that current disagreements will likely be resolved on the condition that each party is rational and fully informed. The best explanation for this phenomenon, Smith argues, is that there are mind-independent moral facts that humans are capable of knowing. In this paper, I seek to challenge this argument—and more recent versions of it—by arguing that historical convergence in moral views may occur for various arational reasons. If such reasons possibly result in convergence—which Smith effectively concedes—then the moral realist would require an additional a posteriori argument to establish that convergence in moral views occurred for the right reasons. Hence, Smith-style arguments, as they stand, cannot be mobilised in support of moral realism. Rather, this investigation demonstrates the necessity of a genuine history of morality for any convergence claim in support of a meta-ethical view.
In this paper I show that a variety of Cartesian Conceptions of the mental are unworkable. In particular, I offer a much weaker conception of limited discrimination than the one advanced by Williamson (2000) and show that this weaker conception, together with some plausible background assumptions, is not only able to undermine the claim that our core mental states are luminous (roughly: if one is in such a state then one is in a position to know that one is) but also the claim that introspection is infallible with respect to our core mental states (where a belief that C obtains is infallible just in case if one believes that C obtains then C obtains). The upshot is a broader and much more powerful case against the Cartesian conception of the mental than has been advanced hitherto.
One set of neglected problems consists of paradoxes of omniscience clearly recognizable as forms of the Liar, and these I have never seen raised at all. Other neglected problems are difficulties for omniscience posed by recent work on belief de se and essential indexicals. These have not yet been given the attention they deserve.
In this paper, I articulate an argument for incompatibilism about moral responsibility and determinism. My argument comes in the form of an extended story, modeled loosely on Peter van Inwagen’s “rollback argument” scenario. I thus call it “the replication argument.” As I aim to bring out, though the argument is inspired by so-called “manipulation” and “original design” arguments, the argument is not a version of either such argument—and plausibly has advantages over both. The result, I believe, is a more convincing incompatibilist argument than those we have considered previously.
Epistemic Contextualism is the view that “knows that” is semantically context-sensitive and that properly accommodating this fact into our philosophical theory promises to solve various puzzles concerning knowledge. Yet Epistemic Contextualism faces a big—some would say fatal—problem: The Semantic Error Problem. In its prominent form, this runs thus: speakers just don’t seem to recognise that “knows that” is context-sensitive; so, if “knows that” really is context-sensitive then such speakers are systematically in error about what is said by, or how to evaluate, ordinary uses of “S knows that p”; but since it's wildly implausible that ordinary speakers should exhibit such systematic error, the expression “knows that” isn't context-sensitive. We are interested in whether, and in what ways, there is such semantic error; if there is such error, how it arises and is made manifest; and, again, if there is such error to what extent it is a problem for Epistemic Contextualism. The upshot is that some forms of The Semantic Error Problem turn out to be largely unproblematic. Those that remain troublesome have analogue error problems for various competitor conceptions of knowledge. So, if error is any sort of problem, then there is a problem for every extant competitor view.
The iterated Prisoner’s Dilemma has become the standard model for the evolution of cooperative behavior within a community of egoistic agents, frequently cited for implications in both sociology and biology. Due primarily to the work of Axelrod (1980a, 1980b, 1984, 1985), a strategy of tit for tat (TFT) has established a reputation as being particularly robust. Nowak and Sigmund (1992) have shown, however, that in a world of stochastic error or imperfect communication, it is not TFT that finally triumphs in an ecological model based on population percentages (Axelrod and Hamilton 1981), but ‘generous tit for tat’ (GTFT), which repays cooperation with a probability of cooperation approaching 1 but forgives defection with a probability of 1/3. In this paper, we consider a spatialized instantiation of the stochastic Prisoner’s Dilemma, using two-dimensional cellular automata (Wolfram, 1984, 1986; Gutowitz, 1990) to model the spatial dynamics of populations of competing strategies. The surprising result is that in the spatial model it is not GTFT but still more generous strategies that are favored. The optimal strategy within this spatial ecology appears to be a form of ‘bending over backwards’, which returns cooperation for defection with a probability of 2/3, a rate twice as generous as GTFT.
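The stochastic strategies at issue can be sketched as reactive pairs (c, d): cooperate with probability c after the opponent's cooperation and probability d after a defection, so TFT = (1, 0), GTFT = (1, 1/3), and 'bending over backwards' = (1, 2/3). The following is a minimal pairwise sketch under noise, not the paper's full cellular-automaton model; the payoff values T=5, R=3, P=1, S=0 and the 5% noise rate are illustrative assumptions.

```python
import random

# Standard PD payoffs (an assumption for illustration): (row, column) scores.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(s1, s2, rounds=10000, noise=0.05, seed=0):
    """Average per-round payoffs of two reactive strategies under noisy play."""
    rng = rng_local = random.Random(seed)
    m1 = m2 = 'C'                       # both open cooperatively
    total1 = total2 = 0.0
    for _ in range(rounds):
        p1, p2 = PAYOFF[(m1, m2)]
        total1 += p1
        total2 += p2
        # each responds to the other's last move...
        n1 = 'C' if rng.random() < (s1[0] if m2 == 'C' else s1[1]) else 'D'
        n2 = 'C' if rng.random() < (s2[0] if m1 == 'C' else s2[1]) else 'D'
        # ...with occasional execution error flipping the intended move
        if rng.random() < noise:
            n1 = 'D' if n1 == 'C' else 'C'
        if rng.random() < noise:
            n2 = 'D' if n2 == 'C' else 'C'
        m1, m2 = n1, n2
    return total1 / rounds, total2 / rounds

TFT, GTFT, BOB = (1.0, 0.0), (1.0, 1 / 3), (1.0, 2 / 3)
print('TFT  vs TFT :', play(TFT, TFT))
print('GTFT vs GTFT:', play(GTFT, GTFT))
print('BOB  vs BOB :', play(BOB, BOB))
```

Under noise, unforgiving TFT falls into long retaliation spirals against itself, while the more generous strategies recover quickly, which is the pairwise intuition behind the population-level results. The spatial result in the paper additionally requires embedding such strategies in a two-dimensional lattice with local imitation dynamics, which this sketch omits.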
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion does not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
One of the basic principles of the general definition of information is its rejection of dataless information, which is reflected in its endorsement of an ontological neutrality. In general, this principle states that “there can be no information without physical implementation” (Floridi, 2005). Though this is standardly considered a commonsensical assumption, many questions arise with regard to its generalised application. In this paper, a combined logic for data and information is elaborated and specifically used to investigate the consequences of restricted and unrestricted data-implementation principles.