This article aims to show that Williamson's anti-luminosity argument does not succeed if we presuppose a constitutive connection between the phenomenal and the doxastic. In contrast to other luminists, however, my strategy is not to critically focus on the refined safety condition in terms of degrees of confidence that anti-luminists typically use in this context. Instead, I will argue that, given a certain conception of what Chalmers calls ‘direct phenomenal concepts,’ luminosity is guaranteed even if the refined safety condition in terms of degrees of confidence is taken for granted.
Is understanding epistemic in nature? Does a correct account of what constitutes understanding of a concept mention epistemological notions such as knowledge, justification or epistemic rationality? We defend the view that understanding is epistemic in nature – we defend epistemological conceptions of understanding. We focus our discussion with a critical evaluation of Tim Williamson's challenges to epistemological conceptions of understanding in The Philosophy of Philosophy. Against Williamson, we distinguish three kinds of epistemological conceptions and argue that Williamson's arguments succeed against only the most heavily committed kind, and leave the less heavily committed kinds untouched. Further, we argue that Williamson's elaboration of the lessons from his arguments points in a direction opposite to his own conclusions and gives vivid articulation and support to epistemological conceptions. We suggest also that skepticism about Williamson's larger metaphilosophical conclusions – according to which understanding plays no special role in the epistemology of philosophy – may be in order.
The paper critically examines recent work on justifications and excuses in epistemology. I start with a discussion of Gerken’s claim that the “excuse maneuver” is ad hoc. Recent work from Timothy Williamson and Clayton Littlejohn provides resources to advance the debate. Focusing in particular on a key insight in Williamson’s view, I then consider an additional worry for the so-called excuse maneuver. I call it the “excuses are not enough” objection. Dealing with this objection generates pressure in two directions: one is to show that excuses are a positive enough normative standing to help certain externalists with important cases; the other is to do so in a way that does not lead back to Gerken’s objection. I show how a Williamson-inspired framework is flexible enough to deal with both sources of pressure. Perhaps surprisingly, I draw on recent virtue epistemology.
The paper starts by describing and clarifying what Williamson calls the consequence fallacy. I show two ways in which one might commit the fallacy. The first, which is rather trivial, involves overlooking background information; the second way, which is the more philosophically interesting, involves overlooking prior probabilities. In the following section, I describe a powerful form of sceptical argument, which is the main topic of the paper, elaborating on previous work by Huemer. The argument attempts to show the impossibility of defeasible justification, justification based on evidence which does not entail the (allegedly) justified proposition or belief. I then discuss the relation between the consequence fallacy, or some similar enough reasoning, and that form of argument. I argue that one can resist that form of sceptical argument if one gives up the idea that a belief cannot be justified unless it is supported by the totality of the evidence available to the subject—a principle entailed by many prominent epistemological views, most clearly by epistemological evidentialism. The justification, in the relevant cases, should instead derive solely from the prior probability of the proposition. A justification of this sort, which does not rely on evidence, would amount to a form of entitlement, in (something like) Crispin Wright’s sense. I conclude with some discussion of how to understand prior probabilities, and how to develop the notion of entitlement in an externalist epistemological framework.
In a paper called 'Definiteness and Knowability', Tim Williamson addresses the question whether one must accept that vagueness is an epistemic phenomenon if one adopts classical logic and a disquotational principle for truth. Some have suggested that one need not, hence that classical logic and the disquotational principle may be preserved without endorsing epistemicism. Williamson’s paper, however, finds ‘no plausible way of substantiating that possibility’. Its moral is that ‘either classical logic fails, or the disquotational principle does, or vagueness is an epistemic phenomenon’. The moral of this paper, on the contrary, is that there is a plausible way of substantiating that possibility. The option it contemplates looks like a view that Williamson dismisses at the beginning of his paper, and that others regard as unworthy of serious consideration.
*This work is no longer under development.* Two major themes in the literature on indicative conditionals are (i) that the content of indicative conditionals typically depends on what is known (for example, Nolan; Weatherson; Gillies); and (ii) that conditionals are intimately related to conditional probabilities (for example, Stalnaker; McGee; Adams). In possible world semantics for counterfactual conditionals, a standard assumption is that conditionals whose antecedents are metaphysically impossible are vacuously true (Lewis; see Nolan for criticism). This aspect has recently been brought to the fore, and defended, by Tim Williamson, who uses it to characterize alethic necessity by exploiting such equivalences as □A ⇔ (¬A □→ A). One might wish to postulate an analogous connection for indicative conditionals, with indicatives whose antecedents are epistemically impossible—that is, incompatible with what is known—being vacuously true: and indeed, the modal account of indicative conditionals of Brian Weatherson has exactly this feature. This allows one to characterize an epistemic modal by the equivalence KA ⇔ (¬A → A). For simplicity, in what follows we write this modal as K and think of KA as expressing that subject S knows that A—an idea suggested to me in conversation by John Hawthorne, which I do not know of being explored in print. The connection to probability has received much attention. Stalnaker suggested, as a way of articulating the ‘Ramsey Test’, the following very general schema for indicative conditionals relative to some probability function P: P(A → B) = P(B | A). The plausibility of this characterization will depend on the exact sense of ‘epistemically possible’ in play—if it is compatibility with what a single subject knows, then KA can be read ‘the relevant subject knows that A’; if it is more delicately formulated, we might be able to read K as the epistemic modal ‘must’.
A counterpossible conditional is a counterfactual with an impossible antecedent. Common sense delivers the view that some such conditionals are true, and some are false. In recent publications, Timothy Williamson has defended the view that all are true. In this paper we defend the common sense view against Williamson’s objections.
Timothy Williamson has defended the claim that the semantics of the indicative ‘if’ is given by the material conditional. Putative counterexamples can be handled by better understanding the role played in our assessment of indicatives by a fallible cognitive heuristic, called the Suppositional Procedure. Williamson’s Suppositional Conjecture has it that the Suppositional Procedure is humans’ primary way of prospectively assessing conditionals. This paper raises some doubts about the Suppositional Procedure and Conjecture.
Timothy Williamson has fruitfully exploited formal resources to shed considerable light on the nature of knowledge. In the paper under examination, Williamson turns his attention to Gettier cases, showing how they can be motivated formally. At the same time, he disparages the kind of justification he thinks gives rise to these cases. He favors instead his own notion of justification, for which Gettier cases cannot arise. We take issue both with his disparagement of the kind of justification that figures in Gettier cases and with the specifics of the formal motivation.
Williamson has a strikingly economical way of showing how justified true belief can fail to constitute knowledge: he models a class of Gettier cases by means of two simple constraints. His constraints can be shown to rely on some unstated assumptions about the relationship between reality and appearance. These assumptions are epistemologically non-trivial but can be defended as plausible idealizations of our actual predicament, in part because they align well with empirical work on the metacognitive dimension of experience.
Much recent discussion has focused on the nature of artifacts, particularly on whether artifacts have essences. While the general consensus is that artifacts are at least intention-dependent, an equally common view is function essentialism about artifacts, the view that artifacts are essentially functional objects and that membership in an artifact kind is determined by a particular, shared function. This paper argues that function essentialism about artifacts is false. First, the two component conditions of function essentialism are given a clear and precise formulation, after which counterexamples are offered to each. Second, ways to handle the counterexamples suggested by Randall Dipert and Simon Evnine are considered and rejected. Third, I then consider the prospects for restricting function essentialism to so-called technical artifacts, as Lynne Baker does, and argue that this, too, fails. This paper thereby consolidates the scattered literature on function essentialism and shows that, despite the seeming plausibility of the thesis, it should be rejected as an account of artifact essences.
I attempt to meet some criticisms that Williamson makes of my attempt to carry out Prior's project of reducing possibility discourse to actualist discourse.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not to what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is “composite” (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
Tim Button explores the relationship between minds, words, and world. He argues that the two main strands of scepticism are deeply related and can be overcome, but that there is a limit to how much we can show. We must position ourselves somewhere between internal realism and external realism, and we cannot hope to say exactly where.
If the semantic values of predicates are, as Williamson assumes, properties, then epistemicism is immediate. Epistemicism fails, and so, therefore, does this properties view of predicates. I use examination of Williamson’s position as a foil, showing that his two positive arguments for bivalence fail, and that his efforts to rescue epistemicism from obvious problems fail to the point of incoherence. In Part II I argue that, despite the properties view’s problems, it has an important role to play in combinatorial semantics. We may separate the problem of how the smallest parts of language get attached to the world from the problem of how those parts combine to form complex semantic values. For the latter problem we idealize and treat the smallest semantic values as properties (and referents). So doing puts to one side how the smallest parts get worldly attachment, a problem that would just get in the way of understanding the combinatorics. Attachment to the world has to be studied separately, and I review some of the options. As a bonus we see why, mostly, higher-order vagueness is an artifact of taking properties as semantic values literally instead of as a simplifying idealization.
In this paper I examine Williamson’s (2000) claim that all evidence is propositional. I propose to reject this claim. I give two objections to two premises of Williamson’s argument. The first is a critique of Williamson’s claim that we choose between hypotheses on the basis of our evidence. The second objection is that Williamson’s claim that evidence is an explanandum of a hypothesis leads to counter-intuitive consequences and thus is not central to what evidence is, at least on an ordinary understanding.
Experiences of embodied remembering are familiar and diverse. We settle bodily into familiar chairs or find our way easily round familiar rooms. We inhabit our own kitchens or cars or workspaces effectively and comfortably, and feel disrupted when our habitual and accustomed objects or technologies change or break or are not available. Hearing a particular song can viscerally bring back either one conversation long ago, or just the urge to dance. Some people explicitly use their bodies to record, store, or cue memories. Others can move skilfully, without stopping to think, in complex and changing environments thanks to the cumulative expertise accrued in their history of fighting fires, or dancing, or playing hockey. The forms of memory involved in these cases may be distinct, operating at different timescales and levels, and by way of different mechanisms and media, but they often cooperate in the many contexts of our practices of remembering.
Timothy Williamson’s Knowledge and its Limits has been highly influential since the beginning of this century. It can be read as a systematic response to scepticism. One of the most important notions in this response is the notion of «evidence», which will be the focus of the present paper. I attempt to show primarily two things. First, the notion of evidence invoked by Williamson does not address the sceptical worry: he stipulates an objective notion of evidence, but this begs the question against his opponent. Second, his related thesis «Evidence equals Knowledge» does not sit well with his own content externalism: he promises to relate epistemology to the philosophy of mind, but he fails to live up to this commitment in his crucial chapter on scepticism. Other minor problems concerning evidence will also be discussed in due course.
The paper reviews the collected volume «Williamson on Modality» and briefly summarizes the main ideas of the articles published in the book.
Being social creatures in a complex world, we do things together. We act jointly. While cooperation, in its broadest sense, can involve merely getting out of each other’s way, or refusing to deceive other people, it is also essential to human nature that it involves more active forms of collaboration and coordination (Tomasello 2009; Sterelny 2012). We collaborate with others in many ordinary activities which, though at times similar to those of other animals, take unique and diverse cultural and psychological forms in human beings. But we also work closely and interactively with each other in more peculiar and flexible practices which are in distinctive ways both species-specific and culturally and historically contingent: from team sports to shared labour, from committee work to mass demonstrations, from dancing to reminiscing together about old times.
The phenomenal character of perceptual experience involves the representation of colour, shape and motion. Does it also involve the representation of high-level categories? Is the recognition of a tomato as a tomato contained within perceptual phenomenality? Proponents of a conservative view of the reach of phenomenal content say ‘No’, whereas those who take a liberal view of perceptual phenomenality say ‘Yes’. I clarify the debate between conservatives and liberals, and argue in favour of the liberal view that high-level content can directly inform the phenomenal character of perception.
Abstract objects are standardly taken to be causally inert; however, principled arguments for this claim are rarely given. As a result, a number of recent authors have claimed that abstract objects are causally efficacious. These authors take abstracta to be temporally located in order to enter into causal relations, but to lack a spatial location. In this paper, I argue that such a position is untenable by showing, first, that causation requires its relata to have a temporal location, and, second, that if an entity is temporally located then it is spatiotemporally located, since this follows from the theory of relativity. Since abstract objects lack a spatiotemporal location, it follows that if something is causally efficacious, it is not abstract.
Standard Type Theory, ${\textrm {STT}}$, tells us that $b^n(a^m)$ is well-formed iff $n=m+1$. However, Linnebo and Rayo [23] have advocated the use of Cumulative Type Theory, $\textrm {CTT}$, which has more relaxed type-restrictions: according to $\textrm {CTT}$, $b^\beta(a^\alpha)$ is well-formed iff $\beta>\alpha $. In this paper, we set ourselves against $\textrm {CTT}$. We begin our case by arguing against Linnebo and Rayo’s claim that $\textrm {CTT}$ sheds new philosophical light on set theory. We then argue that, while $\textrm {CTT}$’s type-restrictions are unjustifiable, the type-restrictions imposed by ${\textrm {STT}}$ are justified by a Fregean semantics. What is more, this Fregean semantics provides us with a principled way to resist Linnebo and Rayo’s Semantic Argument for $\textrm {CTT}$. We end by examining an alternative approach to cumulative types due to Florio and Jones [10]; we argue that their theory is best seen as a misleadingly formulated version of ${\textrm {STT}}$.
There are empirical grounds to doubt the effectiveness of a common and intuitive approach to teaching debiasing strategies in critical thinking courses. We summarize some of the grounds before suggesting a broader taxonomy of debiasing strategies. This four-level taxonomy enables a useful diagnosis of biasing factors and situations, and illuminates more strategies for more effective bias mitigation located in the shaping of situational factors and reasoning infrastructure—sometimes called “nudges” in the literature. The question, we contend, then becomes how best to teach the construction and use of such infrastructures.
Abstract objects are standardly taken to be causally inert, but this claim is rarely explicitly argued for. In the context of his platonism about musical works, and in order for musical works to be audible, Julian Dodd argues that abstracta are causally efficacious in virtue of their concrete tokens participating in events. I attempt to provide a principled argument for the causal inertness of abstracta by first rejecting Dodd’s arguments from events, and then extending and generalizing the causal exclusion argument to the abstract/concrete distinction. For reasons of parsimony, if concrete tokens or instantiations of abstract objects account for all causal work, then there is no reason to attribute causal efficacy to abstracta, and thus reason to maintain their causal inertness. I then consider how one of the main arguments against causal exclusion, namely Stephen Yablo’s notion of “proportionality”, could be modified to support the causal efficacy of abstracta. I argue that from a few simple premises Yablo’s account in fact supports their causal inertness. Having a principled reason for the causal inertness of abstracta appears to entail that the musical platonist must admit that we never literally hear the musical work, but only its performances. I sketch a solution to this problem available to Dodd, so that the musical platonist can maintain that musical works are abstract objects and are causally inert while retaining their audibility.
The attention economy — the market where consumers’ attention is exchanged for goods and services — poses a variety of threats to individuals’ autonomy, which, at minimum, involves the ability to set and pursue ends for oneself. It has been argued that the threat wireless mobile devices pose to autonomy gives rise to a duty to oneself to be a digital minimalist, one whose interactions with digital technologies are intentional such that they do not conflict with their ends. In this paper, we argue that there is a corresponding duty to others to be an attention ecologist, one who promotes digital minimalism in others. Although the moral reasons for being an attention ecologist are similar to those that motivate the duty to oneself, the arguments diverge in important ways. We explore the application of this duty in various domains where we have special obligations to promote autonomy in virtue of the different roles we play in the lives of others, such as parents and teachers. We also discuss the consequences of our arguments for employers, software developers, and policy makers.
This paper offers some refinements to a particular objection to act consequentialism, the “causal impotence” objection. According to proponents of the objection, when we find circumstances in which severe, unnecessary harms result entirely from voluntary acts, it seems as if we should be able to indict at least one act among those acts, but act consequentialism appears to lack the resources to offer this indictment. Our aim is to show that the most promising response on behalf of act consequentialism, the threshold argument, cannot offer a fully general prescription about what to do in cases of collective action.
The phenomenology of agency has, until recently, been rather neglected, overlooked by both philosophers of action and philosophers of consciousness alike. Thankfully, all that has changed, and of late there has been an explosion of interest in what it is like to be an agent. This burgeoning field crosses the traditional boundaries between disciplines: philosophers of psychopathology are speculating about the role that unusual experiences of agency might play in accounting for disorders of thought and action; cognitive scientists are developing models of how the phenomenology of agency is generated; and philosophers of mind are drawing connections between the phenomenology of agency and the nature of introspection, phenomenal character, and agency itself. My aim in this paper is not to provide an exhaustive survey of this recent literature, but to provide a...
The sensitivity condition on knowledge says that one knows that P only if one would not believe that P if P were false. Difficulties for this condition are now well documented. Keith DeRose has recently suggested a revised sensitivity condition that is designed to avoid some of these difficulties. We argue, however, that there are decisive objections to DeRose’s revised condition. Yet rather than simply abandoning his proposed condition, we uncover a rationale for its adoption, a rationale which suggests a further revision that avoids our objections as well as others. The payoff is considerable: along the way to our revision, we learn lessons about the epistemic significance of certain explanatory relations, about how we ought to envisage epistemic closure principles, and about the epistemic significance of methods of belief formation.
By definition, pain is a sensory and emotional experience that is felt in a particular part of the body. The precise relationship between somatic events at the site where pain is experienced, and central processing giving rise to the mental experience of pain remains the subject of debate, but there is little disagreement in scholarly circles that both aspects of pain are critical to its experience. Recent experimental work, however, suggests a public view that is at odds with this conceptualisation. By demonstrating that the public does not necessarily endorse central tenets of the “mental” view of pain (subjectivity, privacy, and incorrigibility), experimental philosophers have argued that the public holds a more “body-centric” view than most clinicians and scholars. Such a discrepancy would have important implications for how the public interacts with pain science and clinical care. In response, we tested the hypothesis that the public is capable of a more “mind-centric” view of pain. Using a series of vignettes, we demonstrate that in situations which highlight mental aspects of pain the public can, and does, recognize pain as a mental phenomenon. We also demonstrate that the public view is subject to context effects, by showing that the public’s view is modified when situations emphasizing mental and somatic aspects of pain are presented together.
I defend the intention-dependence of artifacts, which says that something is an artifact of kind K only if it is the successful product of an intention to make an artifact of kind K. I consider objections from two directions. First, that artifacts are often mind- and intention-dependent, but that this isn’t necessary, as shown by swamp cases. I offer various error theories for why someone would have artifact intuitions in such cases. Second, that while artifacts are necessarily mind-dependent, they aren’t necessarily intention-dependent. I consider and reject three kinds of cases which purport to show this: accidental creation, automated production, and mass production. I argue that intentions are present in all of these cases, but not where we would normally expect.
This paper defends an interpretation of the representational function of sensation in Kant's theory of empirical cognition. Against those who argue that sensations are ‘subjective representations’ and hence can only represent the sensory state of the subject, I argue that Kant appeals to different notions of subjectivity, and that the subjectivity of sensations is consistent with sensations representing external, spatial objects. Against those who claim that sensations cannot be representational at all, because sensations are not cognitively sophisticated enough to possess intentionality, I argue that Kant does not use the term ‘Vorstellung’ to refer to intentional mental states exclusively. Sensations do not possess their own intentionality, but they nevertheless perform a representational function in virtue of their role as the matter of empirical intuition. In empirical intuition, the sensory qualities given in sensation are combined with the representation of space to constitute the intuited appearance. The representational function of sensation consists in sensation being the medium out of which intuited appearances are constituted: the qualities of sensations stand in for what the understanding will judge (conceptualize) as material substance.
In this paper we present Jonathan Stoltz's comparative study of the similarities between Timothy Williamson's epistemology and the Indo-Tibetan epistemological traditions that developed between the eighth and thirteenth centuries. Both hold that knowledge is a kind of mental state, and both exemplify a return to metaphysics—in Williamson's case, a position born of an attempt to move beyond his classical and strongly Anglo-American analytic roots.
Phenomenalist interpretations of Kant are out of fashion. The most common complaint from anti-phenomenalist critics is that a phenomenalist reading of Kant would collapse Kantian idealism into Berkeleyan idealism. This would be unacceptable because Berkeleyan idealism is incompatible with core elements of Kant’s empirical realism. In this paper, I argue that not all phenomenalist readings threaten empirical realism. First, I distinguish several variants of phenomenalism, and then show that Berkeley’s idealism is characterized by his commitment to most of them. I then make the case that two forms of phenomenalism are consistent with Kant’s empirical realism. The comparison between Kant and Berkeley runs throughout the paper, with special emphasis on the significance of their theories of intentionality.
In the first Critique, Kant attempts to prove what we can call the "Principle of Intensive Magnitudes," according to which every possible object of experience will possess a determinate "degree" of reality. Curiously, Kant argues for this principle by inferring from a psychological premise about internal sensations (they have intensive magnitudes) to a metaphysical thesis about external objects (they also have intensive magnitudes). Most commentators dismiss the argument as a failure. In this article I give a reconstruction of Kant's argument that attempts to rehabilitate it within his broader transcendental theory of experience. I argue that we can make sense of the argument's central inference by appeal to Kant's theory of empirical intuition and by an analysis of the way in which Kant thinks sensory matter constitutes our most basic representations of objects.
According to conventional wisdom, the split-brain syndrome puts paid to the thesis that consciousness is necessarily unified. The aim of this paper is to challenge that view. I argue both that disunity models of the split-brain are highly problematic, and that there is much to recommend a model of the split-brain—the switch model—according to which split-brain patients retain a fully unified consciousness at all times. Although the task of examining the unity of consciousness through the lens of the split-brain syndrome is not a new one—such projects date back to Nagel’s seminal paper on the topic—the time is ripe for a re-evaluation of the issues.
In May 1999, David Lewis sent Timothy Williamson an intriguing letter about knowledge and vagueness. This paper offers a brief discussion of Lewis on evidence, and a longer discussion of a distinctive theory of vagueness Lewis puts forward in this letter, one rather different from standard forms of supervaluationism. Lewis's theory enables him to provide distinctive responses to the challenges to supervaluationism famously offered in chapter 5 of Timothy Williamson's 1994 book Vagueness. However, these responses bring out a number of very surprising features of Lewis's own view. The letter from Lewis itself is available on the blog of The Age of Metaphysical Revolution Project, University of Manchester.
In the beginning of the 1970s, Michel Foucault dismisses the terminology of ‘exclusion’ for his projected analytics of modern power. This rejection has had major repercussions on the theory of neoliberal subject-formation. Many researchers disproportionately stress how neoliberal dispositifs produce entrepreneurial subjects, albeit in different ways, while minimizing how these dispositifs sometimes emphatically refuse to produce neoliberal subjects. Relying on Saskia Sassen’s work on financialization, I argue that neoliberal dispositifs not only apply entrepreneurial norms, but also suspend their application for groups that threaten to harm the population’s profitability. Neoliberal dispositifs not only produce entrepreneurial subjects, but also surplus populations that are expelled from the overall population to maintain its productivity. Here, the concept of ‘exclusion’ is appropriate if understood in Agamben’s sense of an inclusive exclusion. The surplus population is part of neoliberal dispositifs, but only as the element to be abandoned.
This paper examines Timothy Williamson's recent 'expertise defense' of armchair philosophy against the challenge mounted by skeptical experimental philosophers. The skeptical experimental philosophers argue that the methodology of traditional 'armchair' philosophers rests upon trusting their own intuitions about particular problem cases. Empirical studies suggest that these intuitions are not generally shared and that such intuitions are strongly influenced by factors that are not truth-conducive, such as cultural background or whether the question is asked in a messy or tidy office. Williamson's response is that the skeptical experimental philosophers trust the expertise of the social scientists, as they trust and use the methods of the social sciences to undermine trust in the judgment of armchair philosophers. Given this, the burden of proof is on the skeptical experimental philosopher to give us a reason to doubt the expertise of the armchair philosopher. I examine how our understanding of the history of philosophy is significant in this context, and suggest that prevalent false beliefs about the history of philosophy can lead to mistrust of the expertise of philosophers.
Explanation does not exist in a metaphysical vacuum. Conceptions of the structure of a phenomenon play an important role in guiding attempts to explain it, and erroneous conceptions of a phenomenon may direct investigation in misleading directions. I believe that there is a case to be made for thinking that much work on the neural underpinnings of consciousness—what is often called the neural correlates of consciousness—is driven by an erroneous conception of the structure of consciousness. The aim of this paper is to lay bare some connections between the explanation of consciousness and the structure of consciousness, and to argue for a conception of the structure of consciousness that is more adequate than that which currently drives much research into the neural correlates of consciousness.
Many authors in ethics, economics, and political science endorse the Lottery Requirement, that is, the following thesis: where different parties have equal moral claims to one indivisible good, it is morally obligatory to let a fair lottery decide which party is to receive the good. This article defends skepticism about the Lottery Requirement. It distinguishes three broad strategies of defending such a requirement: the surrogate satisfaction account, the procedural account, and the ideal consent account, and argues that none of these strategies succeeds. The article then discusses and discharges some remaining grounds for resistance to these skeptical conclusions, as well as the possibility of defending a weaker version of a normative lottery principle. The conclusion is that we have no reason to believe that where equal claims conflict, we are morally required to hold a lottery, as opposed to simply picking one of the parties on more subjective grounds or out of pure whim. In addition to the practical consequences of this skeptical view, the article sketches some theoretical implications for debates about saving the greater number and about axiomatic utilitarianism.
Purpose – Contemporary technology has been implicated in the rise of perfectionism, a personality trait that is associated with depression, suicide and other ills. This paper explores how technology can be developed to promote an alternative to perfectionism, which is a self-constructionist ethic. Design/methodology/approach – This paper takes the form of a philosophical discussion. A conceptual framework is developed by connecting the literature on perfectionism and personal meaning with discussions in information ethics on the self, the ontic trust and technologies of the self. To illustrate these themes, the example of selfies and self-portraits is discussed. Findings – The self today must be understood as both individualistic and relational, i.e., hybrid; the trouble is balance. To realize balance, the self should be recognized as part of the ontic trust to which all information organisms and objects belong. Thus technologically-mediated self-care takes on a deeper urgency. The selfie is one example of a technology for self-care that has gone astray (i.e., lost some of its care-conducive aspects), but this can be remedied if selfie-making technology incorporates relevant aspects of self-portraiture. This example provides a path for developing self-constructionist and meaningful technologies more generally. Practical implications – Technology development should proceed with self-care and meaning in mind. The comparison of selfies and self-portraits, situated historically and theoretically, provides some guidance in this regard. Some specific avenues for development are presented. Originality/value – The question of the self has not been much discussed in information ethics. This paper links the self to the ontic trust: the self can be fruitfully understood as an agent within the ontic trust to which we all belong.
Although delusions are typically regarded as beliefs of a certain kind, there have been worries about the doxastic conception of delusions since at least Bleuler’s time. ‘Anti-doxasticists,’ as we might call them, do not merely worry about the claim that delusions are beliefs, they reject it. Reimer’s paper weighs into the debate between ‘doxasticists’ and ‘anti-doxasticists’ by suggesting that one of the main arguments given against the doxastic conception of delusions—what we might call the functional role objection—is based on a fallacy. She also draws attention to certain parallels between delusions and what she calls “nihilistic philosophical doctrines,” such as the skeptical position that we have no ...
Evidential Pluralism maintains that in order to establish a causal claim one normally needs to establish the existence of an appropriate conditional correlation and the existence of an appropriate mechanism complex, so when assessing a causal claim one ought to consider both association studies and mechanistic studies. Hitherto, Evidential Pluralism has been applied to medicine, leading to the EBM+ programme, which recommends that evidence-based medicine should systematically evaluate mechanistic studies alongside clinical studies. This paper argues that Evidential Pluralism can also be fruitfully applied to the social sciences. In particular, Evidential Pluralism provides (i) a new approach to evidence-based policy; (ii) an account of the evidential relationships in more theoretical research; and (iii) new philosophical motivation for mixed methods research. The application of Evidential Pluralism to the social sciences is also defended against two objections.
The recently rising field of Critical Data Studies is still facing fundamental questions. Among these is the enigma of digital subjectivation. Who are the subjects of Big Data? A field where this question is particularly pressing is finance. Since the 1990s, traders have been steadily integrated into computerized data assemblages, which calls for an ontology that eliminates the distinction between human sovereign subjects and non-human instrumental objects. The latter subjectivize traders in pre-conscious ways, because human consciousness runs too slow to follow the volatility of the market. In response to this conundrum, Social Studies of Finance has drawn on Actor-Network Theory to interpret financial markets as technically constructed networks of human and non-human actors. I argue that in order to develop an explicitly critical data study it might be advantageous to refer to Maurizio Lazzarato’s theory of machinic subjugation instead. Although both accounts describe financial digital subjectivation similarly, Lazzarato has the advantage of coupling his description to a clear critique of and resistance to finance.
Many philosophers are impressed by the progress achieved by physical sciences. This has had an especially deep effect on their ontological views: it has made many of them physicalists. Physicalists believe that everything is physical: more precisely, that all entities, properties, relations, and facts are those which are studied by physics or other physical sciences. They may not all agree with the spirit of Rutherford's quoted remark that 'there is physics; and there is stamp-collecting', but they all grant physical science a unique ontological authority: the authority to tell us what there is. Physicalism is now almost orthodox in much philosophy, notably in much recent philosophy of mind. But although often invoked, it is rarely explicitly defined. It should be. The claim that everything is physical is not as clear as it seems. In this paper, we examine a number of proposed definitions of physicalism and reasons for being a physicalist. We will argue both that physicalism lacks a clear and credible definition, and that in no non-vacuous interpretation is it true. We are concerned here only with physicalism as a doctrine about the empirical world. In particular, it should not be confused with nominalism, the doctrine that there are no universals. Nominalism and physicalism are quite independent doctrines. Believers in universals may as consistently assert as deny that the only properties and relations are those studied by physical science. And nominalists may with equal consistency assert or deny that physical science could provide enough predicates to describe the world. That is the question which concerns physicalists, not whether physical predicates name real universals. (We will for brevity write as if they do, but we do not need that assumption.)
It is widely agreed that perceptual experience is a form of intentionality, i.e., that it has representational content. Many philosophers take this to mean that like belief, experience has propositional content, that it can be true or false. I accept that perceptual experience has intentionality; but I dispute the claim that it has propositional content. This claim does not follow from the fact that experience is intentional, nor does it follow from the fact that experiences are accurate or inaccurate. I end by considering the relationship between this question and the question of whether experience has non-conceptual content.