Contextualists and pragmatists agree that knowledge-denying sentences are contextually variable, in the sense that a knowledge-denying sentence might semantically express a false proposition in one context and a true proposition in another context, without any change in the properties traditionally viewed as necessary for knowledge. Minimalists deny both pragmatism and contextualism, and maintain that knowledge-denying sentences are not contextually variable. To defend their view from cases like DeRose and Stanley's high stakes bank case, minimalists like Patrick Rysiew, Jessica Brown, and Wayne Davis forward ‘warranted assertability maneuvers.’ The basic idea is that some knowledge-denying sentence seems contextually variable because we mistake what a speaker pragmatically conveys by uttering that sentence for what she literally says by uttering that sentence. In this paper, I raise problems for the warranted assertability maneuvers of Rysiew, Brown, and Davis, and then present a warranted assertability maneuver that should succeed if any warranted assertability maneuver will succeed. I then show how my warranted assertability maneuver fails, and how the problem with my warranted assertability maneuver generalizes to pragmatic responses in general. The upshot of my argument is that, in order to defend their view from cases like DeRose and Stanley's high stakes bank case, minimalists must prioritize the epistemological question whether the subjects in those cases know over linguistic questions about the pragmatics of various knowledge-denying sentences.
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible; rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvass different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
Various philosophers have long since been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle (LEM). For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim (‘~Will p’) and the latter claim (‘Will ~p’). However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and by Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities (PAP) for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
This paper considers Norton’s Material Theory of Induction. The material theory aims inter alia to neutralize Hume’s Problem of Induction. The purpose of the paper is to evaluate the material theory's capacity to achieve this end. After pulling apart two versions of the theory, I argue that neither version satisfactorily neutralizes the problem.
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
In this paper, I introduce a problem to the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will.
Recently, philosophers have turned their attention to the question, not of when a given agent is blameworthy for what she does, but of when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:

1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be “one’s business”.

These conditions are often proposed both as conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing, and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from the psychological to the physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
It has been an open question whether or not we can define a belief revision operation that is distinct from simple belief expansion using paraconsistent logic. In this paper, we investigate the possibility of meeting the challenge of defining a belief revision operation using the resources made available by the study of dynamic epistemic logic in the presence of paraconsistent logic. We will show that it is possible to define dynamic operations of belief revision in a paraconsistent setting.
Predicates are term-to-sentence devices, and operators are sentence-to-sentence devices. What Kaplan and Montague's Paradox of the Knower demonstrates is that necessity and other modalities cannot be treated as predicates, consistent with arithmetic; they must be treated as operators instead. Such is the current wisdom. A number of previous pieces have challenged such a view by showing that a predicative treatment of modalities need not raise the Paradox of the Knower. This paper attempts to challenge the current wisdom in another way as well: to show that mere appeal to modal operators in the sense of sentence-to-sentence devices is insufficient to escape the Paradox of the Knower. A family of systems is outlined in which closed formulae can encode other formulae and in which the diagonal lemma and Paradox of the Knower are thereby demonstrable for operators in this sense.
The best arguments for the 1/3 answer to the Sleeping Beauty problem all require that when Beauty awakes on Monday she should be uncertain what day it is. I argue that this claim should be rejected, thereby clearing the way to accept the 1/2 solution.
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion does not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Descartes' demon is a deceiver: the demon makes things appear to you other than as they really are. However, as Descartes famously pointed out in the Second Meditation, not all knowledge is imperiled by this kind of deception. You still know you are a thinking thing. Perhaps, though, there is a more virulent demon in epistemic hell, one from which none of our knowledge is safe. Jonathan Schaffer thinks so. The "debasing demon" he imagines threatens knowledge not via the truth condition on knowledge, but via the basing condition. This demon can cause any belief to seem like it's held on a good basis, when it's really held on a bad basis. Several recent critics grant Schaffer the possibility of such a debasing demon, and argue that the skeptical conclusion doesn't follow. By contrast, we argue that on any plausible account of the epistemic basing relation, the "debasing demon" is impossible. Our argument for why this is so gestures, more generally, at the importance of avoiding a common trap: embracing mistaken assumptions about what it takes for a belief to be based on a reason.
In Replacing Truth, Scharp takes the concept of truth to be fundamentally incoherent. As such, Scharp reckons it to be unsuited for systematic philosophical theorising and in need of replacement – at least for regions of thought and talk which permit liar sentences and their ilk to be formulated. This replacement methodology is radical because it not only recommends that the concept of truth be replaced, but that the word ‘true’ be replaced too. Only Tarski has attempted anything like it before. I dub such a view Conceptual Marxism. In assessing this view, my goals are fourfold: to summarise the many components of Scharp’s theory of truth; to highlight what I take to be some of the excess baggage carried by the view; to assess whether, and to what extent, the extreme methodology on offer is at all called for; finally, to briefly propose a less radical replacement strategy for resolving the liar paradox.
In this paper I show that a variety of Cartesian Conceptions of the mental are unworkable. In particular, I offer a much weaker conception of limited discrimination than the one advanced by Williamson (2000) and show that this weaker conception, together with some plausible background assumptions, is not only able to undermine the claim that our core mental states are luminous (roughly: if one is in such a state then one is in a position to know that one is) but also the claim that introspection is infallible with respect to our core mental states (where a belief that C obtains is infallible just in case if one believes that C obtains then C obtains). The upshot is a broader and much more powerful case against the Cartesian conception of the mental than has been advanced hitherto.
Epistemic Contextualism is the view that “knows that” is semantically context-sensitive and that properly accommodating this fact into our philosophical theory promises to solve various puzzles concerning knowledge. Yet Epistemic Contextualism faces a big—some would say fatal—problem: The Semantic Error Problem. In its prominent form, this runs thus: speakers just don’t seem to recognise that “knows that” is context-sensitive; so, if “knows that” really is context-sensitive then such speakers are systematically in error about what is said by, or how to evaluate, ordinary uses of “S knows that p”; but since it's wildly implausible that ordinary speakers should exhibit such systematic error, the expression “knows that” isn't context-sensitive. We are interested in whether, and in what ways, there is such semantic error; if there is such error, how it arises and is made manifest; and, again, if there is such error to what extent it is a problem for Epistemic Contextualism. The upshot is that some forms of The Semantic Error Problem turn out to be largely unproblematic. Those that remain troublesome have analogue error problems for various competitor conceptions of knowledge. So, if error is any sort of problem, then there is a problem for every extant competitor view.
Conceptual Engineering alleges that philosophical problems are best treated via revising or replacing our concepts (or words). The goal here is not to defend Conceptual Engineering but rather to show that it can (and should) invoke Neutralism—the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. A neutralist treatment of one form of skepticism is used as a case study and is compared with various non-neutral rivals. Along the way, a new taxonomy for paradox is proposed.
The question of how doping should be defined rests not least on research in the natural sciences. From a natural-scientific perspective one could even claim that the current doping debate has its origins precisely in pharmaceutical research, since the problem of doping only arises once the corresponding substances and methods for performance enhancement are available. In what follows, however, the question of defining doping is not reduced to a natural-scientific frame of reference, as is frequently the case in current definitions of doping. Rather, I will set out the specific role of natural-scientific research with respect to the definition of doping, and the structural difficulties that result from it.
Nations are understood to have a right to go to war, not only in defense of individual rights, but in defense of their own political standing in a given territory. This paper argues that the political defensive privilege cannot be satisfactorily explained, either on liberal cosmopolitan grounds or on pluralistic grounds. In particular, it is argued that pluralistic accounts require giving implausibly strong weight to the value of political communities, overwhelming the standing of individuals. Liberal cosmopolitans, it is argued, underestimate the difficulties in disentangling a state’s role in upholding or threatening individual interests from its role in providing the social context that shapes and determines those very interests. The paper proposes an alternative theory, “prosaic statism”, which shares the individualistic assumptions of liberal cosmopolitanism, but avoids a form of fundamentalism about human rights, and is therefore less likely to recommend humanitarian intervention in non-liberal states.
It remains controversial whether touch is a truly spatial sense or not. Many philosophers suggest that, if touch is indeed spatial, it is only through its alliances with exploratory movement, and with proprioception. Here we develop the notion that a minimal yet important form of spatial perception may occur in purely passive touch. We do this by showing that the array of tactile receptive fields in the skin, as appropriately relayed to the cortex, may contain the same basic informational building blocks that a creature navigating around its environment uses to build up a perception of space. We illustrate this point with preliminary evidence that perception of spatiotemporal patterns on the human skin shows some of the same features as spatial navigation in animals. We argue (a) that the receptor array defines a ‘tactile field’, (b) that this field exists in a minimal form in ‘skin space’, logically prior to any transformation into bodily or external spatial coordinates, and (c) that this field supports tactile perception without integration of concurrent proprioceptive or motor information. The basic cognitive elements of space perception may begin at lower levels of neural and perceptual organisation than previously thought.
Neutralism is the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. The broad goal here is to articulate a distinct, specific kind of sorites paradox (The Observational Sorites Paradox) and show that it can be effectively treated via Neutralism.
Our purpose in this article is to draw attention to a connection that obtains between two dilemmas from two separate spheres: sports and the law. It is our contention that umpires in the game of cricket may face a dilemma that is similar to a dilemma confronted by legal decision makers and that comparing the nature of the dilemmas, and the arguments advanced to solve them, will serve to advance our understanding of both the law and games.
Sports physicians are continuously confronted with new biotechnological innovations. This applies not only to doping in sports, but to all kinds of so-called enhancement methods. One fundamental problem regarding the sports physician's self-image consists in a blurred distinction between therapeutic treatment and non-therapeutic performance enhancement. After a brief inventory of the sports physician's work environment, I reject as insufficient the attempts to resolve the sports physician's conflict by making it a classificatory problem. Following a critical assessment of some ideas from the US President's Council on Bioethics, of the formulation of ethical codes, and of attempts at a moral topography, it is argued that the sports physician's conflict cannot be resolved by the distinction between therapy and enhancement. Instead, we also have to consider the possibility that the therapy-based paradigm of medicine cannot do justice to the challenges of the continuously increasing technical manipulability of the human body, and even of our cognitive functions as well. At the same time we should not adhere to transhumanist ideas, because non-therapeutic interventions require clear criteria. Based on assistive technologies, an alternative framework can be sketched that allows for the integration of therapeutic and non-therapeutic purposes. After a thorough definition of standards and criteria, the role of the sports physician might be defined as that of an assistant for enhancement. Yet the process of defining such an alternative framework is a societal and political task that cannot be accomplished by the sports physicians themselves. Until these questions are answered, sports physicians will continue to find themselves in a structural dilemma that they can partially come to terms with through personal integrity.
Epistemically circular arguments have been receiving quite a bit of attention in the literature for the past decade or so. Often the goal is to determine whether reliabilists (or other foundationalists) are committed to the legitimacy of epistemically circular arguments. It is often assumed that epistemic circularity is objectionable, though sometimes reliabilists accept that their position entails the legitimacy of some epistemically circular arguments, and then go on to affirm that such arguments really are good ones. My goal in this paper is to argue against the legitimacy of epistemically circular arguments. My strategy is to give a direct argument against the legitimacy of epistemically circular arguments, which rests on a principle of basis-relative safety, and then to argue that reliabilists do not have the resources to resist the argument. I argue that even if the premises of an epistemically circular argument enjoy reliabilist justification, the argument does not transmit that justification to its conclusion. The main goal of my argument is to show that epistemic circularity is always a bad thing, but it also has the positive consequence that reliabilists are freed from an awkward commitment to the legitimacy of some intuitively bad arguments.
In this essay, we explore an issue of moral uncertainty: what we are permitted to do when we are unsure about which moral principles are correct. We develop a novel approach to this issue that incorporates important insights from previous work on moral uncertainty, while avoiding some of the difficulties that beset existing alternative approaches. Our approach is based on evaluating and choosing between option sets rather than particular conduct options. We show how our approach is particularly well-suited to address this issue of moral uncertainty with respect to agents that have credence in moral theories that are not fully consequentialist.
One of the basic principles of the general definition of information is its rejection of dataless information, which is reflected in its endorsement of an ontological neutrality. In general, this principle states that "there can be no information without physical implementation" (Floridi (2005)). Though this is standardly considered a commonsensical assumption, many questions arise with regard to its generalised application. In this paper a combined logic for data and information is elaborated, and specifically used to investigate the consequences of restricted and unrestricted data-implementation principles.
Scalar Utilitarianism eschews foundational notions of rightness and wrongness in favor of evaluative comparisons of outcomes. I defend Scalar Utilitarianism from two critiques, the first against an argument for the thesis that Utilitarianism's commitments are fundamentally evaluative, and the second that Scalar Utilitarianism does not issue demands or sufficiently guide action. These defenses suggest a variety of more plausible Scalar Utilitarian interpretations, and I argue for a version that best represents a moral theory founded on evaluative notions, and offers better answers to demandingness concerns than does the ordinary Scalar Utilitarian response. If Utilitarians seek reasonable development and explanation of their basic commitments, they may wish to reconsider Scalar Utilitarianism.
In the domain of ontology design, as well as in Knowledge Representation, modeling universals is a challenging problem. Most approaches that have addressed this problem rely on Description Logics (DLs), but many difficulties remain, due to under-constrained representations that reduce the inferences that can be drawn and further cause problems of expressiveness. In mathematical logic and program checking, type theories have proved to be appealing, but so far they have not been applied in the formalization of ontologies. To bridge this gap, we present in this paper a theory for representing ontologies in a dependently typed framework which relies on strong formal foundations, including both a constructive logic and a functional type system. The language of this theory defines in a precise way what ontological primitives such as classes, relations, properties, and, in particular, roles are. The first part of the paper details how these primitives are defined and used within the theory. In the second part, we focus on the formalization of the role primitive. A review of significant role properties leads to the specification of a role profile, and most of the remaining work details, through numerous examples, how the proposed theory is able to fully satisfy this profile. It is demonstrated that dependent types can model several non-trivial aspects of roles, including a formal solution for generalization hierarchies, identity criteria for roles, and other contributions. A discussion is given of how the theory is able to cope with many of the constraints inherent in a good role representation.
First, I argue that scientific progress is possible in the absence of increasing verisimilitude in science’s theories. Second, I argue that increasing theoretical verisimilitude is not the central, or primary, dimension of scientific progress. Third, I defend my previous argument that unjustified changes in scientific belief may be progressive. Fourth, I illustrate how false beliefs can promote scientific progress in ways that cannot be explicated by appeal to verisimilitude.
For some time now, talk of the end of philosophy as an independent discipline has again been in the air. Neurophilosophers aim to answer fundamental philosophical questions with the help of neuroscientific research results, since, after the conclusion of the "Decade of the Brain," nothing is supposed to stand in the way of an empirically grounded explanation of consciousness in all its forms. With respect to Descartes, post-Cartesians now see themselves in the role of solving the so-called mind-body problem through a naturalistic reduction to neurobiological facts. I have set myself the task of examining this explanatory claim from a transcendental-philosophical perspective, though not, as Gerhard Roth expects, by way of a three-year immersion in neurobiology, but on a philosophy-of-science, or conceptual, level. At issue is an examination of the methodological usefulness of the post-Cartesian frame of reference for a philosophical theory of consciousness. This methodological critique disqualifies materialist-reductionist and neurophilosophical approaches insofar as they claim to provide a philosophical explanation of consciousness. It becomes clear that a problematized mind-body relation is not suited to explaining consciousness.
When do we agree? The answer might once have seemed simple and obvious; we agree that p when we each believe that p. But from a formal epistemological perspective, where degrees of belief are more fundamental than beliefs, this answer is unsatisfactory. On the one hand, there is reason to suppose that it is false; degrees of belief about p might differ when beliefs simpliciter on p do not. On the other hand, even if it is true, it is too vague; for what it is to believe simpliciter ought to be explained in terms of degrees of belief. This paper presents several possible notions of agreement, and corresponding notions of disagreement. It indicates how the findings are fruitful for the epistemology of disagreement, with special reference to the notion of epistemic peerhood.
This paper aims to contribute to the current debate about the status of the "Ought Implies Can" principle (OIC) and the growing body of empirical evidence that undermines it. We report the results of an experimental study which show that people judge that agents ought to perform an action even when they also judge that those agents cannot do it, and that such "ought" judgments exhibit an actor-observer effect. Because of this actor-observer effect on "ought" judgments and the Duhem-Quine thesis, talk of an "empirical refutation" of OIC is empirically and methodologically unwarranted. What the empirical fact that people attribute moral obligations to unable agents shows is that OIC is not intuitive, not that OIC has been refuted.
Most of the historically salient versions of the Cosmological Argument rest on two assumptions. The first assumption is that some contingency (i.e., contingent fact) is such that a necessity is required to explain it. Against that assumption we will argue that necessities alone cannot explain any contingency and, furthermore, that it is impossible to explain the totality of contingencies at all. The second assumption is the Principle of Sufficient Reason. Against the Principle of Sufficient Reason we will argue that it is unreasonable to require, as the Principle of Sufficient Reason does, that any given whole of contingent facts has an explanation. Instead, it depends on the results of empirical investigation whether or not one should ask for an explanation of the given whole. We argue that if a cosmological argument invokes either of the two assumptions, then it fails to prove that a necessity is needed to explain the universe of contingent facts.
First, I answer the controversial question ’What is scientific realism?’ with extensive reference to the varied accounts of the position in the literature. Second, I provide an overview of the key developments in the debate concerning scientific realism over the past decade. Third, I provide a summary of the other contributions to this special issue.