According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5,000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to be able to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential for assisting investigators in the extraction, management and analysis of these data, information contained in the traditional journal publication is still largely unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to gain access to the content of what is being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we will explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-usable framework for data mining purposes.
Immunology researchers are beginning to explore the possibilities of reproducibility, reuse and secondary analyses of immunology data. Open-access datasets are being applied in validating the methods used in the original studies, in leveraging studies for meta-analysis, and in generating new hypotheses. To promote these goals, the ImmPort data repository was created for the broader research community to explore the wide spectrum of clinical and basic research data and associated findings. The ImmPort ecosystem consists of four components (Private Data, Shared Data, Data Analysis, and Resources) for data archiving, dissemination, analyses, and reuse. To date, more than 300 studies have been made freely available through the ImmPort Shared Data portal, which allows research data to be repurposed to accelerate the translation of new insights into discoveries.
Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.
This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs), transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian [2014] calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions, which are transitions that the subject is disposed to endorse.
Short‐term memory in vision is typically thought to divide into at least two memory stores: a short, fragile, high‐capacity store known as iconic memory, and a longer, durable, capacity‐limited store known as visual working memory (VWM). This paper argues that iconic memory stores icons, i.e., image‐like perceptual representations. The iconicity of iconic memory has significant consequences for understanding consciousness, nonconceptual content, and the perception–cognition border. Steven Gross and Jonathan Flombaum have recently challenged the division between iconic memory and VWM by arguing against the idea of capacity limits in favor of a flexible resource‐based model of short‐term memory. I argue that, while VWM capacity is probably governed by flexible resources rather than a sharp limit, the two memory stores should still be distinguished by their representational formats. Iconic memory stores icons, while VWM stores discursive (i.e., language‐like) representations. I conclude by arguing that this format‐based distinction between memory stores entails that prominent views about consciousness and the perception–cognition border will likely have to be revised.
According to one important proposal, the difference between perception and cognition consists in the representational formats used in the two systems (Carey, 2009; Burge, 2010; Block, 2014). In particular, it is claimed that perceptual representations are iconic, or image-like, while cognitive representations are discursive, or language-like. Taking object perception as a test case, this paper argues on empirical grounds that it requires discursive label-like representations. These representations segment the perceptual field, continuously pick out objects despite changes in their features, and abstractly represent high-level features, none of which appears possible for purely iconic representations.
According to a classic but nowadays discarded philosophical theory, perceptual experience is a complex of nonconceptual sensory states and full-blown propositional beliefs. This classical dual-component theory of experience is often taken to be obsolete. In particular, there seem to be cases in which perceptual experience and belief conflict: cases of known illusions, wherein subjects have beliefs contrary to the contents of their experiences. Modern dual-component theories reject the belief requirement and instead hold that perceptual experience is a complex of nonconceptual sensory states and some other sort of conceptual state. The most popular modern dual-component theory appeals to sui generis propositional attitudes called ‘perceptual seemings’. This article argues that the classical dual-component theory has the resources to explain known illusions without giving up the claim that the conceptual components of experience are beliefs. The classical dual-component view, though often viewed as outdated and implausible, should be regarded as a serious contender in contemporary debates about the nature of perceptual experience.
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion is not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
Unconscious logical inference seems to rely on the syntactic structures of mental representations (Quilty-Dunn & Mandelbaum 2018). Other transitions, such as transitions using iconic representations and associative transitions, are harder to assimilate to syntax-based theories. Here we tackle these difficulties head on in the interest of a fuller taxonomy of mental transitions. Along the way we discuss how icons can be compositional without having constituent structure, and expand and defend the “symmetry condition” on Associationism (the idea that associative links and transitions are perfectly symmetric). In the end, we show how a BIT (“bare inferential transition”) theory can cohabitate with these other non-inferential mental transitions.
This paper considers Norton’s Material Theory of Induction. The material theory aims inter alia to neutralize Hume’s Problem of Induction. The purpose of the paper is to evaluate the material theory’s capacity to achieve this end. After pulling apart two versions of the theory, I argue that neither version satisfactorily neutralizes the problem.
One recurring criticism of the best interests standard concerns its vagueness, and thus the inadequate guidance it offers to care providers. The lack of an agreed definition of ‘best interests’, together with the fact that several suggested considerations adopted in legislation or professional guidelines for doctors do not obviously apply across different groups of persons, results in decisions being made in murky waters. In response, bioethicists have attempted to specify the best interests standard, to reduce the indeterminacy surrounding medical decisions. In this paper, we discuss the bioethicists’ response in relation to the state's possible role in clarifying the best interests standard. We identify and characterise two clarificatory strategies employed by bioethicists—elaborative and enumerative—and argue that the state should adopt the latter. Beyond the practical difficulties of the former strategy, a state adoption of it would inevitably be prejudicial in a pluralistic society. Given the gravity of best interests decisions, and the delicate task of respecting citizens with different understandings of best interests, only the enumerative strategy is viable. We argue that this does not commit the state to silence in providing guidance to and supporting healthcare providers, nor does it facilitate the abuse of the vulnerable. Finally, we address two methodological worries about adopting this approach at the state level. The adoption of the enumerative strategy is not defeatist in attitude, nor does it eventually collapse into (a form of) the elaborative strategy.
An early, very preliminary edition of this book was circulated in 1962 under the title Set-theoretical Structures in Science. There are many reasons for maintaining that such structures play a role in the philosophy of science. Perhaps the best is that they provide the right setting for investigating problems of representation and invariance in any systematic part of science, past or present. Examples are easy to cite. Sophisticated analysis of the nature of representation in perception is to be found already in Plato and Aristotle. One of the great intellectual triumphs of the nineteenth century was the mechanical explanation of such familiar concepts as temperature and pressure by their representation in terms of the motion of particles. A more disturbing change of viewpoint was the realization at the beginning of the twentieth century that the separate invariant properties of space and time must be replaced by the space-time invariants of Einstein's special relativity. Another example, the focus of the longest chapter in this book, is controversy extending over several centuries on the proper representation of probability. The six major positions on this question are critically examined. Topics covered in other chapters include an unusually detailed treatment of theoretical and experimental work on visual space, the two senses of invariance represented by weak and strong reversibility of causal processes, and the representation of hidden variables in quantum mechanics. The final chapter concentrates on different kinds of representations of language, concluding with some empirical results on brain-wave representations of words and sentences.
Recently, philosophers have turned their attention to the question, not when a given agent is blameworthy for what she does, but when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”: 1. One’s blame would not be “hypocritical”. 2. One is not oneself “involved in” the target agent’s wrongdoing. 3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing. 4. The target’s wrongdoing must in some sense be “one’s business”. These conditions are often proposed both as conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our “moral fragility”, the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the “equality of persons”.
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvas different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
Various philosophers have long since been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle (LEM). For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim [‘~Will p’] and the latter claim [‘Will ~p’]. However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and by Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
As robots slip into more domains of human life-from the operating room to the bedroom-they take on our morally important tasks and decisions, as well as create new risks from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
While naturalism is used in positive senses by the tradition of analytical philosophy, with Ludwig Wittgenstein its best example, and by the tradition of phenomenology, with Maurice Merleau-Ponty its best exemplar, it also has an extremely negative sense on both of these fronts. Hence, both Merleau-Ponty and Wittgenstein in their basic thrusts adamantly reject reductionistic naturalism. Although Merleau-Ponty’s phenomenology rejects the naturalism Husserl rejects, he early on found a place for the “truth of naturalism.” In a parallel way, Wittgenstein accepts a certain positive sense of naturalism, while rejecting Quine’s kind of naturalism. It is the aim of this paper to investigate the common ground in the views of Wittgenstein and Merleau-Ponty regarding the naturalism that they each espouse and that which they each reject.
Since the fundamental challenge that I laid at the doorstep of the pluralists was to defend, with nonderivative models, a strong notion of genic cause, it is fatal that Waters has failed to meet that challenge. Waters agrees with me that there is only a single cause operating in these models, but he argues for a notion of causal ‘parsing’ to sustain the viability of some form of pluralism. Waters and his colleagues have some very interesting and important ideas about the sciences, involving pluralism and parsing or partitioning causes, but they are ideas in search of an example. He thinks he has found an example in the case of hierarchical and genic selection. I think he has not.
The main goal in this paper is to outline and defend a form of Relativism, under which truth is absolute but assertibility is not. I dub such a view Norm-Relativism in contrast to the more familiar forms of Truth-Relativism. The key feature of this view is that just what norm of assertion, belief, and action is in play in some context is itself relative to a perspective. In slogan form: there is no fixed, single norm for assertion, belief, and action. Upshot: 'knows' is neither context-sensitive nor perspectival.
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities (PAP) for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
It has been an open question whether or not we can define a belief revision operation that is distinct from simple belief expansion using paraconsistent logic. In this paper, we investigate the possibility of meeting the challenge of defining a belief revision operation using the resources made available by the study of dynamic epistemic logic in the presence of paraconsistent logic. We will show that it is possible to define dynamic operations of belief revision in a paraconsistent setting.
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
The best arguments for the 1/3 answer to the Sleeping Beauty problem all require that when Beauty awakes on Monday she should be uncertain what day it is. I argue that this claim should be rejected, thereby clearing the way to accept the 1/2 solution.
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
In this paper, I introduce a problem to the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
In Replacing Truth, Scharp takes the concept of truth to be fundamentally incoherent. As such, Scharp reckons it to be unsuited for systematic philosophical theorising and in need of replacement – at least for regions of thought and talk which permit liar sentences and their ilk to be formulated. This replacement methodology is radical because it not only recommends that the concept of truth be replaced, but that the word ‘true’ be replaced too. Only Tarski has attempted anything like it before. I dub such a view Conceptual Marxism. In assessing this view, my goals are fourfold: to summarise the many components of Scharp’s theory of truth; to highlight what I take to be some of the excess baggage carried by the view; to assess whether, and to what extent, the extreme methodology on offer is at all called for; finally, to briefly propose a less radical replacement strategy for resolving the liar paradox.
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: Future contingents are not true, and (ii) retro-closure: From the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
Descartes' demon is a deceiver: the demon makes things appear to you other than as they really are. However, as Descartes famously pointed out in the Second Meditation, not all knowledge is imperiled by this kind of deception. You still know you are a thinking thing. Perhaps, though, there is a more virulent demon in epistemic hell, one from which none of our knowledge is safe. Jonathan Schaffer thinks so. The "debasing demon" he imagines threatens knowledge not via the truth condition on knowledge, but via the basing condition. This demon can cause any belief to seem like it's held on a good basis, when it's really held on a bad basis. Several recent critics grant Schaffer the possibility of such a debasing demon, and argue that the skeptical conclusion doesn't follow. By contrast, we argue that on any plausible account of the epistemic basing relation, the "debasing demon" is impossible. Our argument for why this is so gestures, more generally, at the importance of avoiding the common trap of embracing mistaken assumptions about what it takes for a belief to be based on a reason.
In this paper I show that a variety of Cartesian Conceptions of the mental are unworkable. In particular, I offer a much weaker conception of limited discrimination than the one advanced by Williamson (2000) and show that this weaker conception, together with some plausible background assumptions, is able to undermine not only the claim that our core mental states are luminous (roughly: if one is in such a state then one is in a position to know that one is) but also the claim that introspection is infallible with respect to our core mental states (where a belief that C obtains is infallible just in case if one believes that C obtains then C obtains). The upshot is a broader and much more powerful case against the Cartesian conception of the mental than has been advanced hitherto.
Epistemic Contextualism is the view that “knows that” is semantically context-sensitive and that properly accommodating this fact into our philosophical theory promises to solve various puzzles concerning knowledge. Yet Epistemic Contextualism faces a big—some would say fatal—problem: The Semantic Error Problem. In its prominent form, this runs thus: speakers just don’t seem to recognise that “knows that” is context-sensitive; so, if “knows that” really is context-sensitive then such speakers are systematically in error about what is said by, or how to evaluate, ordinary uses of “S knows that p”; but since it's wildly implausible that ordinary speakers should exhibit such systematic error, the expression “knows that” isn't context-sensitive. We are interested in whether, and in what ways, there is such semantic error; if there is such error, how it arises and is made manifest; and, again, if there is such error, to what extent it is a problem for Epistemic Contextualism. The upshot is that some forms of The Semantic Error Problem turn out to be largely unproblematic. Those that remain troublesome have analogue error problems for various competitor conceptions of knowledge. So, if error is any sort of problem, then there is a problem for every extant competitor view.
Conceptual Engineering alleges that philosophical problems are best treated via revising or replacing our concepts (or words). The goal here is not to defend Conceptual Engineering but rather to show that it can (and should) invoke Neutralism—the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. A neutralist treatment of one form of skepticism is used as a case study and is compared with various non-neutral rivals. Along the way, a new taxonomy for paradox is proposed.
We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and those of the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew of a warship mistakenly shot down a civilian airliner. To support a combat commander’s moral agency, designers should strive for systems that help commanders and command teams to think and manipulate information at the level of meaning. ‘Down conversions’ of information from meaning to symbols must be adequately recovered by ‘up conversions’, and commanders must be able to check that their sensors are working and are being used correctly. Meanwhile ethicists should establish a mechanism that tracks the potential moral implications of choices in a system’s design and intended operation. Finally we highlight a gap in normative ethics, in that we have ways to deny moral agency, but not to affirm it.
According to the standard framing of racial appeals in political speech, politicians generally rely on coded language to communicate racial messages. Yet recent years have demonstrated that politicians often express quite explicit forms of racism in mainstream political discourse. The standard framing can explain neither why these appeals work politically nor how they work semantically. This paper moves beyond the standard framing, focusing on the politics and semantics of one type of explicit appeal, candid racial communication (CRC). The linguistic vehicles of CRC are neither true code words, nor slurs, but a conventionally defined class of “racialized terms.”
The question of how to define doping rests not least on natural-scientific research. From a natural-scientific perspective one could even claim that the current doping debate has its origins precisely in pharmaceutical research, since the problem of doping only arises once the corresponding substances and methods for performance enhancement are available. However, in what follows the question of defining doping is not reduced to a natural-scientific frame of reference, as is frequently the case in current doping definitions. Rather, I will set out the specific role of natural-scientific research with regard to the definition of doping and the structural difficulties that result from it.
Nations are understood to have a right to go to war, not only in defense of individual rights, but in defense of their own political standing in a given territory. This paper argues that the political defensive privilege cannot be satisfactorily explained, either on liberal cosmopolitan grounds or on pluralistic grounds. In particular, it is argued that pluralistic accounts require giving implausibly strong weight to the value of political communities, overwhelming the standing of individuals. Liberal cosmopolitans, it is argued, underestimate the difficulties in disentangling a state’s role in upholding or threatening individual interests from its role in providing the social context that shapes and determines those very interests. The paper proposes an alternative theory, “prosaic statism”, which shares the individualistic assumptions of liberal cosmopolitanism, but avoids a form of fundamentalism about human rights, and is therefore less likely to recommend humanitarian intervention in non-liberal states.
It remains controversial whether touch is a truly spatial sense or not. Many philosophers suggest that, if touch is indeed spatial, it is only through its alliances with exploratory movement and with proprioception. Here we develop the notion that a minimal yet important form of spatial perception may occur in purely passive touch. We do this by showing that the array of tactile receptive fields in the skin, appropriately relayed to the cortex, may contain the same basic informational building blocks that a creature navigating around its environment uses to build up a perception of space. We illustrate this point with preliminary evidence that perception of spatiotemporal patterns on the human skin shows some of the same features as spatial navigation in animals. We argue (a) that the receptor array defines a ‘tactile field’, (b) that this field exists in a minimal form in ‘skin space’, logically prior to any transformation into bodily or external spatial coordinates, and (c) that this field supports tactile perception without integration of concurrent proprioceptive or motor information. The basic cognitive elements of space perception may begin at lower levels of neural and perceptual organisation than previously thought.
Neutralism is the broad view that philosophical progress can take place when (and sometimes only when) a thoroughly neutral, non-specific theory, treatment, or methodology is adopted. The broad goal here is to articulate a distinct, specific kind of sorites paradox (The Observational Sorites Paradox) and show that it can be effectively treated via Neutralism.
Our purpose in this article is to draw attention to a connection that obtains between two dilemmas from two separate spheres: sports and the law. It is our contention that umpires in the game of cricket may face a dilemma that is similar to a dilemma confronted by legal decision makers and that comparing the nature of the dilemmas, and the arguments advanced to solve them, will serve to advance our understanding of both the law and games.
Sports physicians are continuously confronted with new biotechnological innovations. This applies not only to doping in sports, but to all kinds of so-called enhancement methods. One fundamental problem regarding the sports physician's self-image consists in a blurred distinction between therapeutic treatment and non-therapeutic performance enhancement. After a brief inventory of the sports physician's work environment, I reject as insufficient the attempts to resolve the conflict of the sports physician by making it a classificatory problem. Following a critical assessment of some ideas from the US President's Council on Bioethics, the formulation of ethical codes, and attempts at a moral topography, it is argued that the sports physician's conflict cannot be resolved by the distinction between therapy and enhancement. Instead, we also have to consider the possibility that the therapy-based paradigm of medicine cannot do justice to the challenges of the continuously increasing technical manipulability of the human body and even of our cognitive functions. At the same time we should not adhere to transhumanist ideas, because non-therapeutic interventions require clear criteria. Based on assistive technologies, an alternative framework can be sketched that allows for the integration of therapeutic and non-therapeutic purposes. After a thorough definition of standards and criteria, the role of the sports physician might be defined as that of an assistant for enhancement. Yet the process of defining such an alternative framework is a societal and political task that cannot be accomplished by the sports physicians themselves. Until these questions are answered, sports physicians continue to find themselves in a structural dilemma that they can partially come to terms with through personal integrity.
This paper asks how Kant’s mature theory of freedom handles an objection pertaining to chance. This question is significant given that Kant raises this criticism against libertarianism in his early writings on freedom before coming to adopt a libertarian view of freedom in the Critical period. After motivating the problem of how Kant can hold that the free actions of human beings lack determining grounds while at the same time maintaining that these are not the result of ‘blind chance,’ I argue that Kant’s Critical doctrine of transcendental idealism, while creating the ‘conceptual space’ for libertarian freedom, is not intended to provide an answer to the problem of chance with respect to our free agency. I go on to show how the resources for a refutation of chance only come about in the practical philosophy. In the second Critique, Kant famously argues for the reality of freedom on the basis of our consciousness of the moral law as the law of a free will. However, Kant also comes to build into his account of the will a genuine power of choice, which involves the capacity to deviate from the moral law. I conclude by showing that this apparent tension can be resolved by turning to his argument for the impossibility of a diabolical will. This involves a consideration of the distinct kind of grounding relationship that practical laws have to the human will, as well as the way that transcendental idealism makes this possible.
This work describes a seminal framework of law by one of the founders of the field of law and economics, Judge Guido Calabresi. It broadens what is known as the framework of law among legal scholars, and posits a Calabresi theorem, which is developed and explained, in part, in comparison to the Coase theorem. The framework provides policymakers a tool for creating balanced policies.