The growing literature on philosophical thought experiments has so far focused almost exclusively on the role of thought experiments in confirming or refuting philosophical hypotheses or theories. In this paper we draw attention to an additional and largely ignored role that thought experiments frequently play in our philosophical practice: some thought experiments do not merely serve as means for testing various philosophical hypotheses or theories, but also serve as facilitators for conceiving and articulating new ones. As we will put it, they serve as ‘heuristics for theory discovery’. Our purpose in the paper is two-fold: first, to make a case that this additional role of thought experiments deserves the attention of philosophers interested in the methodology of philosophy; and second, to sketch a tentative taxonomy of a number of distinct ways in which philosophical thought experiments can aid theory discovery, which can guide future research on this role of thought experiments.
In a recent paper in this journal, Carter and Peterson raise two distinctly epistemological puzzles that arise for anyone aspiring to defend the precautionary principle. The first puzzle trades on an application of epistemic contextualism to the precautionary principle; the second puzzle concerns the compatibility of the precautionary principle with the de minimis rule. In this note, I argue that neither puzzle should worry defenders of the precautionary principle. The first puzzle can be shown to be an instance of the familiar but conceptually harmless challenge of adjudicating between relevant interests to reach assessments of threats when applying the precautionary principle. The second puzzle can be shown to rely on a subtle but crucial misrepresentation of the relevant probabilities at play when applying the precautionary principle.
In this paper I propose a teleological account of epistemic reasons. In recent years, the main challenge for any such account has been to explicate a sense in which epistemic reasons depend on the value of epistemic properties. I argue that while epistemic reasons do not directly depend on the value of epistemic properties, they depend on a different class of reasons which are value based in a direct sense, namely reasons to form beliefs about certain propositions or subject matters. In short, S has an epistemic reason to believe that p if and only if S is such that if S has reason to form a belief about p, then S ought to believe that p. I then propose a teleological explanation of this relationship. It is also shown how the proposal can avoid various subsidiary objections commonly thought to riddle the teleological account.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced. According to my own proposal (2010: 361–377), whether an event is lucky for some agent depends on whether the agent was in a position to know that the event would occur. And according to Stoutenburg (2015: 319–334; Synthese, 2018: 1–15), whether an event is lucky for an agent depends on whether the event was guaranteed or certain to occur in light of the agent’s evidence. In this paper, I argue that we should prefer the account in terms of knowledge over that in terms of evidential certainty.
Epistemic instrumentalists seek to understand the normativity of epistemic norms on the model of practical instrumental norms governing the relation between aims and means. Non-instrumentalists often object that this commits instrumentalists to implausible epistemic assessments. I argue that this objection presupposes an implausibly strong interpretation of epistemic norms. Once we realize that epistemic norms should be understood in terms of permissibility rather than obligation, and that evidence only occasionally provides normative reasons for belief, an instrumentalist account becomes available that delivers the correct epistemic verdicts. On this account, epistemic permissibility can be understood on the model of the wide-scope instrumental norm for instrumental rationality, while normative evidential reasons for belief can be understood in terms of instrumental transmission.
A popular account of epistemic justification holds that justification, in essence, aims at truth. An influential objection against this account points out that it is committed to holding that only true beliefs could be justified, which most epistemologists regard as sufficient reason to reject the account. In this paper I defend the view that epistemic justification aims at truth, not by denying that it is committed to epistemic justification being factive, but by showing that, when we focus on the relevant sense of ‘justification’, it isn’t in fact possible for a belief to be at once justified and false. To this end, I consider and reject three popular intuitions speaking in favor of the possibility of justified false beliefs, and show that a factive account of epistemic justification is less detrimental to our normal belief forming practices than often supposed.
Inquiry is an aim-directed activity, and as such governed by instrumental normativity. If you have reason to figure out a question, you have reason to take means to figuring it out. Beliefs are governed by epistemic normativity. On a certain pervasive understanding, this means that you are permitted – maybe required – to believe what you have sufficient evidence for. The norms of inquiry and epistemic norms both govern us as agents in pursuit of knowledge and understanding, and, on the surface, they do so in harmony. Recently, however, Jane Friedman (2020) has pointed out that they are in tension with each other. In this paper, I aim to resolve this tension by showing that reasons for acts of inquiry – zetetic reasons – and epistemic reasons for belief can both be understood as flowing from the same general normative principle: the transmission principle for instrumental reasons. The resulting account is a version of epistemic instrumentalism that offers an attractive unity between zetetic and epistemic normativity.
In a recent article, I criticized Kathrin Glüer and Åsa Wikforss's so-called “no guidance argument” against the truth norm for belief, for conflating the conditions under which that norm recommends belief with the psychological state one must be in to apply the norm. In response, Glüer and Wikforss have offered a new formulation of the no guidance argument, which makes it apparent that no such conflation is made. However, their new formulation of the argument presupposes a much too narrow understanding of what it takes for a norm to influence behaviour, and betrays a fundamental misunderstanding of the point of the truth norm. Once this is taken into account, it becomes clear that the no guidance argument fails.
In his influential discussion of the aim of belief, David Owens argues that any talk of such an ‘aim’ is at best metaphorical. In order for the ‘aim’ of belief to be a genuine aim, it must be weighable with other aims in deliberation, but Owens claims that this is impossible. In previous work, I have pointed out that if we look at a broader range of deliberative contexts involving belief, it becomes clear that the putative aim of belief is capable of being weighed against other aims. Recently, however, Ema Sullivan-Bissett and Paul Noordhof have objected to this response on the grounds that it employs an undefended conception of the aim of belief not shared by Owens, and that it equivocates between importantly different contexts of doxastic deliberation. In this note, I argue that both of these objections fail.
Epistemic instrumentalists think that epistemic normativity is just a special kind of instrumental normativity. According to them, you have epistemic reason to believe a proposition insofar as doing so is conducive to certain epistemic goals or aims—say, to believe what is true and avoid believing what is false. Perhaps the most prominent challenge for instrumentalists in recent years has been to explain, or explain away, why one’s epistemic reasons often do not seem to depend on one’s aims. This challenge can arguably be met. But a different challenge looms: instrumental reasons in the practical domain have various properties that epistemic reasons do not seem to share. In this chapter, we offer a way for epistemic instrumentalists to overcome this challenge. Our main thesis takes the form of a conditional: if we accept an independently plausible transmission principle of instrumental normativity, we can maintain that epistemic reasons in fact do share the relevant properties of practical instrumental reasons. In addition, we can explain why epistemic reasons seem to lack these properties in the first place: some properties of epistemic reasons are elusive, or easy to overlook, because we tend to think and talk about epistemic reasons in an ‘elliptical’ manner.
The predominant view in developmental psychology is that young children are able to reason with the concept of desire prior to being able to reason with the concept of belief. We propose an explanation of this phenomenon that focuses on the cognitive tasks that competence with the belief and desire concepts enable young children to perform. We show that cognitive tasks that are typically considered fundamental to our competence with the belief and desire concepts can be performed with the concept of desire in the absence of competence with the concept of belief, whereas the reverse is considerably less feasible.
When one has both epistemic and practical reasons for or against some belief, how do these reasons combine into an all-things-considered reason for or against that belief? The question might seem to presuppose the existence of practical reasons for belief. But we can rid the question of this presupposition. Once we do, a highly general ‘Combinatorial Problem’ emerges. The problem has been thought to be intractable due to certain differences in the combinatorial properties of epistemic and practical reasons. Here we bring good news: if we accept an independently motivated version of epistemic instrumentalism—the view that epistemic reasons are a species of instrumental reasons—we can reduce The Combinatorial Problem to the relatively benign problem of how to weigh different instrumental reasons against each other. As an added benefit, the instrumentalist account can explain the apparent intractability of The Combinatorial Problem in terms of a common tendency to think and talk about epistemic reasons in an elliptical manner.
Many epistemologists have been attracted to the view that knowledge-wh can be reduced to knowledge-that. An important challenge to this, presented by Jonathan Schaffer, is the problem of “convergent knowledge”: reductive accounts imply that any two knowledge-wh ascriptions with identical true answers to the questions embedded in their wh-clauses are materially equivalent, but according to Schaffer, there are counterexamples to this equivalence. Parallel to this, Schaffer has presented a very similar argument against binary accounts of knowledge, and thereby in favour of his alternative contrastive account, relying on similar examples of apparently inequivalent knowledge ascriptions, which binary accounts treat as equivalent. In this article, I develop a unified diagnosis and solution to these problems for the reductive and binary accounts, based on a general theory of knowledge ascriptions that embed presuppositional expressions. All of Schaffer's apparent counterexamples embed presuppositional expressions, and once the effect of these is taken into account, it becomes apparent that the counterexamples depend on an illicit equivocation of contexts. Since epistemologists often rely on knowledge ascriptions that embed presuppositional expressions, the general theory of them presented here will have ramifications beyond defusing Schaffer's argument.
It seems obvious that when higher-order evidence makes it rational for one to doubt that one’s own belief on some matter is rational, this can undermine the rationality of that belief. This is known as higher-order defeat. However, despite its intuitive plausibility, it has proved puzzling how higher-order defeat works, exactly. To highlight two prominent sources of puzzlement, higher-order defeat seems to defy being understood in terms of conditionalization; and higher-order defeat can sometimes place agents in what seem like epistemic dilemmas. This chapter draws attention to an overlooked aspect of higher-order defeat, namely that it can undermine the resilience of one’s beliefs. The notion of resilience was originally devised to understand how one should reflect the ‘weight’ of one’s evidence. But it can also be applied to understand how one should reflect one’s higher-order evidence. The idea is particularly useful for understanding cases where one’s higher-order evidence indicates that one has failed in correctly assessing the evidence, without indicating whether one has over- or underestimated the degree of evidential support for a proposition. But it is exactly in such cases that the puzzles of higher-order defeat seem most compelling.
It is widely agreed that obsessive-compulsive disorder involves irrationality. But where in the complex of states and processes that constitutes OCD should this irrationality be located? A pervasive assumption in both the psychiatric and philosophical literature is that the seat of irrationality is located in the obsessive thoughts characteristic of OCD. Building on a puzzle about insight into OCD (Taylor 2022), we challenge this pervasive assumption, and argue instead that the irrationality of OCD is located in the emotions that are characteristic of OCD, such as anxiety or fear. In particular, we propose to understand the irrationality of OCD as a matter of harboring recalcitrant emotions. We argue that this account not only solves the puzzle about insight, but also makes better sense of how OCD sufferers experience and describe their condition and helps explain some otherwise puzzling features of compulsive behavior.
In a recent paper (2008), I presented two arguments against the thesis that intentional states are essentially normative. In this paper, I defend those arguments from two recent responses, one from Nick Zangwill in his (2010), and one from Daniel Laurier in the present volume, and offer improvements of my arguments in light of Laurier’s criticism.
Many philosophers have sought to account for doxastic and epistemic norms by supposing that belief ‘aims at truth.’ A central challenge for this approach is to articulate a version of the truth-aim that is at once weak enough to be compatible with the many truth-independent influences on belief formation, and strong enough to explain the relevant norms in the desired way. One phenomenon in particular has seemed to require a relatively strong construal of the truth-aim thesis, namely ‘transparency’ in doxastic deliberation. In this paper, I argue that the debate over transparency has been in the grip of a false presupposition, namely that the phenomenon must be explained as a feature of deliberation framed by the concept of belief. Giving up this presupposition makes it possible to adopt weaker and more plausible versions of the truth-aim thesis in accounting for doxastic and epistemic norms.
Psychological studies on fictional persuasion demonstrate that being engaged with fiction systematically affects our beliefs about the real world, in ways that seem insensitive to the truth. This threatens to undermine the widely accepted view that beliefs are essentially regulated in ways that tend to ensure their truth, and may tempt various non-doxastic interpretations of the belief-seeming attitudes we form as a result of engaging with fiction. I evaluate this threat, and argue that it is benign. Even if the relevant attitudes are best seen as genuine beliefs, as I think they often are, their lack of appropriate sensitivity to the truth does not undermine the essential tie between belief and truth. To this end, I shall consider what I take to be the three most plausible models of the cognitive mechanisms underlying fictional persuasion, and argue that on none of these models does fictional persuasion undermine the essential truth-tie.
The debate on the epistemology of disagreement has so far focused almost exclusively on cases of disagreement between individual persons. Yet, many social epistemologists agree that at least certain kinds of groups are equally capable of having beliefs that are open to epistemic evaluation. If so, we should expect a comprehensive epistemology of disagreement to accommodate cases of disagreement between group agents, such as juries, governments, companies, and the like. However, this raises a number of fundamental questions concerning what it means for groups to be epistemic peers and to disagree with each other. In this paper, we explore what group peer disagreement amounts to given that we think of group belief in terms of List and Pettit’s ‘belief aggregation model’. We then discuss how the so-called ‘equal weight view’ of peer disagreement is best accommodated within this framework. The account that seems most promising to us says, roughly, that the parties to a group peer disagreement should adopt the belief that results from applying the most suitable belief aggregation function for the combined group on all members of the combined group. To motivate this view, we test it against various intuitive cases, derive some of its notable implications, and discuss how it relates to the equal weight view of individual peer disagreement.
Our aim in this chapter is to draw attention to what we see as a disturbing feature of conciliationist views of disagreement. Roughly put, the trouble is that conciliatory responses to in-group disagreement can lead to the frustration of a group's epistemic priorities: that is, the group's favoured trade-off between the "Jamesian goals" of truth-seeking and error-avoidance. We show how this problem can arise within a simple belief aggregation framework, and draw some general lessons about when the problem is most pronounced. We close with a tentative proposal for how to solve the problem raised without rejecting conciliationism.
People tend to think that they know others better than others know them. This phenomenon is known as the “illusion of asymmetric insight.” While the illusion has been well documented by a series of recent experiments, less has been done to explain it. In this paper, we argue that extant explanations are inadequate because they either get the explanatory direction wrong or fail to accommodate the experimental results in a sufficiently nuanced way. Instead, we propose a new explanation that does not face these problems. The explanation is based on two other well-documented psychological phenomena: the tendency to accommodate ambiguous evidence in a biased way, and the tendency to overestimate how much better we know ourselves than we know others.
David Owens objected to the truth-aim account of belief on the grounds that the putative aim of belief does not meet a necessary condition on aims, namely, that aims can be weighed against other aims. If the putative aim of belief cannot be weighed, then belief does not have an aim after all. Asbjørn Steglich-Petersen responded to this objection by appeal to other deliberative contexts in which the aim could be weighed, and we argued that this response to Owens failed for two reasons. Steglich-Petersen has since responded to our defence of Owens’s objection. Here we reply to Steglich-Petersen and conclude, once again, that Owens’s challenge to the truth-aim approach remains to be answered.
In a series of articles, Asbjørn Steglich-Petersen and Nick Zangwill argue that, since propositional attitude ascription judgements do not behave like normative judgements in being subject to a priori normative supervenience and the Because Constraint, PAs cannot be constitutively normative. I argue that, for a specific version of normativism, according to which PAs are normative commitments, these arguments fail. To this end, I first argue that commitments and obligations should be distinguished. Then, I show that the intuitions allegedly governing all normative judgements do not even purport to hold for commitment-attributing judgements.
In this chapter we argue that some beliefs present a problem for the truth-aim teleological account of belief, according to which it is constitutive of belief that it is aimed at truth. We draw on empirical literature which shows that subjects form beliefs about the real world when they read fictional narratives, even when those narratives are presented as fiction, and subjects are warned that the narratives may contain falsehoods. We consider Nishi Shah’s teleologist’s dilemma and a response to it from Asbjørn Steglich-Petersen which appeals to weak truth regulation as a feature common to all belief. We argue that beliefs from fiction indicate that there is not a basic level of truth regulation common to all beliefs, and thus the teleologist’s dilemma remains. We consider two objections to our argument. First, that the attitudes gained through reading fiction are not beliefs, and thus teleologists are not required to account for them in their theory. We respond to this concern by defending a doxastic account of the attitudes gained from fiction. Second, that these beliefs are in fact appropriately truth-aimed, insofar as readers form beliefs upon what they take to be author testimony. We respond to this concern by suggesting that the conditions under which one can form justified beliefs upon testimony are not met in the cases we discuss. Lastly, we gesture towards a teleological account grounded in biological function, which is not vulnerable to our argument. We conclude that beliefs from fiction present a problem for the truth-aim teleological account of belief.
In a recent paper in this journal, we proposed two novel puzzles associated with the precautionary principle. Both are puzzles that materialise, we argue, once we investigate the principle through an epistemological lens, and each constitutes a philosophical hurdle for any proponent of a plausible version of the precautionary principle. Steglich-Petersen claims, also in this journal, that he has resolved our puzzles. In this short note, we explain why we remain skeptical.
Bartolomeo Mastri’s Disputations on Metaphysics is the single most important work on metaphysics produced in the Scotist school during the Early Modern period. This contribution guides the reader through the work by highlighting a selection of key passages that convey an impression of its historical-literary context, its subject matter, its main motifs and scientific aims, but also its limitations. In particular, we see Mastri emphasizing the theological aspect of metaphysics, though he in the end refrains from exploring this aspect of metaphysics within his work on metaphysics. I suggest that this discrepancy between Mastri’s concept of metaphysics and his work on metaphysics showcases the difficulty of organizing this discipline during the phase of transition from the traditional commentary format typical of medieval scholasticism to the Early Modern scholastic Cursus philosophicus literature.
Standard epistemology takes it for granted that there is a special kind of value: epistemic value. This claim does not seem to sit well with act utilitarianism, however, since it holds that only welfare is of real value. I first develop a particularly utilitarian sense of “epistemic value”, according to which it is closely analogous to the nature of financial value. I then demonstrate the promise this approach has for two current puzzles in the intersection of epistemology and value theory: first, the problem of why knowledge is better than mere true belief, and second, the relation between epistemic justification and responsibility.
Frames, i.e., recursive attribute-value structures, are a general format for the decomposition of lexical concepts. Attributes assign unique values to objects and thus describe functional relations. Concepts can be classified into four groups: sortal, individual, relational and functional concepts. The classification is reflected by different grammatical roles of the corresponding nouns. The paper aims at a cognitively adequate decomposition, particularly of sortal concepts, by means of frames. Using typed feature structures, an explicit formalism for the characterization of cognitive frames is developed. The frame model can be extended to account for typicality effects. Applying the paradigm of object-related neural synchronization, furthermore, a biologically motivated model for the cortical implementation of frames is developed. Cortically distributed synchronization patterns may be regarded as the fingerprints of concepts.
The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both farfetched and frightening – something that is redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to dismiss it. We provide two arguments in support of this claim – the axiological openness argument and the desirability argument – and then defend it against three major objections.
The purpose of the present chapter is to survey the work on epistemic norms of action, practical deliberation and assertion and to consider how these norms are interrelated. On a more constructive note, we will argue that if there are important similarities between the epistemic norms of action and assertion, it has important ramifications for the debates over speech acts and harm. Thus, we hope that the chapter will indicate how thinking about assertion as a speech act can benefit from a broader action theoretic setting. We will proceed as follows. In Section 2, we provide a survey of epistemic norms of action and practical deliberation. In Section 3, we turn to the epistemic norms of assertion. In Section 4, we consider arguments for and against commonality of the epistemic norms of action, practical deliberation and assertion. In Section 5, we discuss some of the ramifications of the debates over epistemic norms of assertion, such as whether they may be extended to other linguistic phenomena such as Gricean implicature. In Section 6, we consider the consequences of the debate about the epistemic norms of action and practical deliberation for debates about speech and harm.
Lippert-Rasmussen and Petersen discuss my ‘Moral case for legal age change’ in their article ‘Age change, official age and fairness in health’. They argue that in important healthcare settings (such as distributing vital organs to dying patients), the state should treat people on the basis of their chronological age, because chronological age is a better proxy for what matters from the point of view of justice than adjusted official age. While adjusted legal age should not be used in deciding who gets scarce vital organs, I remind readers that using chronological age as a proxy is problematic as well. Using age as a proxy could give wrong results, and it is better, if possible, for states to use the vital information directly than to use age as a proxy.
The role of scientists as experts is crucial to public policymaking. However, the expert role is contested and unsettled in both public and scholarly discourse. In this paper, I provide a systematic account of the role of scientists as experts in policymaking by examining whether there are any normatively relevant differences between this role and the role of scientists as researchers. Two different interpretations can be given of how the two roles relate to each other. The separability view states that there is a normatively relevant difference between the two roles, whereas the inseparability view denies that there is such a difference. Based on a systematic analysis of the central aspects of the role of scientists as experts – that is, its aim, context, mode of output, and standards – I propose a moderate version of the separability view. Whereas the aim of scientific research is typically to produce new knowledge through the use of scientific method for evaluation and dissemination in internal settings, the aim of the expert is to provide policymakers and the public with relevant and applicable knowledge that can premise political reasoning and deliberation.
I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of “internal organizational coherence”.
I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then this means such a superintelligence must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could therefore automatically be ethical in nature.
Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, it is unclear how any intelligent system could learn its final values, since to judge one supposedly "final" value against another seems to require a further background standard for judging. Second, it is unclear how to determine the content of a system's values based on its physical or computational structure. Finally, there is the distinctly ethical question of which values we should best aim for the system to learn. I outline a potential answer to these interrelated puzzles, centering on a "miktotelic" proposal for blending a complex, learnable final value out of many simpler ones.
What does it mean to think? In the following article I will show Gilles Deleuze’s answer to this question. According to him, ‘to think is to create — there is no other creation — but to create is first of all to engender “thinking” in thought’. To understand what this means, to grasp the radical nature of such an event, we need to see how for Deleuze to engender thinking in thought means a repetition of that genetic process which has brought forth the thinking subject in the first place. In this event, that which otherwise subsists beneath normal experience as life- and consciousness-sustaining forces now becomes conscious experience. The implication of this is that true thinking means the creation of a new life and consciousness. Via a close reading of chapter two of Difference and Repetition I show how this leads the thinker into a radical metamorphosis of consciousness, a process of Stirb und Werde.
The standard rule of single privative modification replaces privative modifiers by Boolean negation. This rule is valid, for sure, but also simplistic. If an individual a instantiates the privatively modified property (MF) then it is true that a instantiates the property of not being an F, but the rule fails to express the fact that the properties (MF) and F have something in common. We replace Boolean negation by property negation, enabling us to operate on contrary rather than contradictory properties. To this end, we apply our theory of intensional essentialism, which operates on properties (intensions) rather than their extensions. We argue that each property F is necessarily associated with an essence, which is the set of the so-called requisites of F that jointly define F. Privation deprives F of some but not all of its requisites, replacing them by their contradictories. We show that properties formed from iterated privatives, such as being an imaginary fake banknote, give rise to a trifurcation of cases between returning to the original root property or to a property contrary to it or being semantically undecidable for want of further information. In order to determine which of the three forks the bearers of particular instances of multiply modified properties land upon we must examine the requisites, both of unmodified and modified properties. Requisites underpin our presuppositional theory of positive predication. Whereas privation is about being deprived of certain properties, the assignment of requisites to properties makes positive predication possible, which is the predication of properties the bearers must have because they have a certain property formed by means of privation.
Cora Diamond has criticized capacity-based approaches to determining the moral status of animals, arguing instead that the morally significant fact is that we have relationships to animals as our fellow creatures. This paper explores implications of her approach to fish and the practice of fish farming. Fish differ from most other animals due to their appearances and underwater existence, and it is not obvious that fish belong to our fellow creatures, and – if so – what it means for our treatment of them. In particular: if fish are fellow creatures, can we treat them in the way done in contemporary salmon farming? Iris Murdoch points out that moral differences are conceptual differences, that is, differences in how we see the world. Similarly, Diamond argues that we should not consider ‘animal’ or ‘human’ as biological classifications – they are conceptual configurations that shape the way we think and make sense of the world. In this article, we explore the implication of the fellow creature concept for the case of fish, which challenges our ordinary understandings of companionship with animals. We argue that farmed salmon should be considered as a special kind of fellow creatures living in water, and discuss how scientific research on biological features of fish may influence how we see them. We also sketch how Diamond’s approach implies a need for reform of current salmon farming practices.
Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
This paper is the twin of (Duží and Jespersen, in submission), which provides a logical rule for transparent quantification into hyperpropositional contexts de dicto, as in: Mary believes that the Evening Star is a planet; therefore, there is a concept c such that Mary believes that what c conceptualizes is a planet. Here we provide two logical rules for transparent quantification into hyperpropositional contexts de re. (As a by-product, we also offer rules for possible-world propositional contexts.) One rule validates this inference: Mary believes of the Evening Star that it is a planet; therefore, there is an x such that Mary believes of x that it is a planet. The other rule validates this inference: the Evening Star is such that it is believed by Mary to be a planet; therefore, there is an x such that x is believed by Mary to be a planet. Issues unique to the de re variant include partiality and existential presupposition, substitutivity of co-referential (as opposed to co-denoting or synonymous) terms, anaphora, and active vs. passive voice. The validity of quantifying-in presupposes an extensional logic of hyperintensions preserving transparency and compositionality in hyperintensional contexts. This requires raising the bar for what qualifies as co-denotation or equivalence in extensional contexts. Our logic is Tichý’s Transparent Intensional Logic. The syntax of TIL is the typed lambda calculus; its highly expressive semantics is based on a procedural redefinition of, inter alia, functional abstraction and application. The two non-standard features we need are a hyperintension (called Trivialization) that presents other hyperintensions and a four-place substitution function (called Sub) defined over hyperintensions.
A recently proposed model of sensory processing suggests that perceptual experience is updated in discrete steps. We show that the data advanced to support discrete perception are in fact compatible with a continuous account of perception. Physiological and psychophysical constraints, moreover, as well as our awake-primate imaging data, imply that human neuronal networks cannot support discrete updates of perceptual content at the maximal update rates consistent with phenomenology. A more comprehensive approach to understanding the physiology of perception (and experience at large) is therefore called for, and we briefly outline our take on the problem.
The aim of this critical commentary is to distinguish and analytically discuss some important variations in which legal moralism is defined in the literature. As such, the aim is not to evaluate the most plausible version of legal moralism, but to find the most plausible definition of legal moralism. As a theory of criminalization, i.e. a theory that aims to justify the criminal law we should retain, legal moralism can be, and has been, defined as follows: the immorality of an act of type A is a sufficient reason for the criminalization of A, even if A does not cause someone to be harmed. In what follows, I critically examine some of the key definitions and proposals that have, unfortunately, not always been carefully distinguished. Finally, I propose a definition that seems to capture the essence of what many philosophers refer to when they talk about legal moralism, while also providing more clarity.
In this chapter I'd like to focus on a small corner of sexbot ethics that is rarely considered elsewhere: the question of whether and when being a sexbot might be good---or bad---*for the sexbot*. You might think this means you are in for a dry sermon about the evils of robot slavery. If so, you'd be wrong; the ethics of robot servitude are far more complicated than that. In fact, if the arguments here are right, designing a robot to serve humans sexually may be very good for the robots themselves.
There are writers in both metaphysics and algorithmic information theory (AIT) who seem to think that the latter could provide a formal theory of the former. This paper is intended as a step in that direction. It demonstrates how AIT might be used to define basic metaphysical notions such as *object* and *property* for a simple, idealized world. The extent to which these definitions capture intuitions about the metaphysics of the simple world, times the extent to which we think the simple world is analogous to our own, will determine a lower bound for basing a metaphysics for *our* world on AIT.
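The basic AIT foothold the abstract gestures at can be made vivid with a toy comparison. The sketch below is an assumption-laden illustration, not the paper's construction: it models a "simple, idealized world" as a byte string and uses `zlib` compression as a computable proxy for algorithmic complexity, contrasting a rule-governed world with a patternless one.

```python
import random
import zlib

def K(s: bytes) -> int:
    """zlib-compressed length: a computable proxy for the
    (uncomputable) algorithmic complexity of s."""
    return len(zlib.compress(s, 9))

# A rule-governed toy world: every cell follows a simple periodic law.
lawful = bytes(i % 7 for i in range(4096))

# A patternless toy world: independently random cells.
random.seed(0)
chaotic = bytes(random.randrange(256) for _ in range(4096))

# The lawful world admits a far shorter description than the chaotic
# one; on the AIT picture, this compressibility gap is where structural
# notions like 'object' and 'property' could get a formal grip.
print(K(lawful), K(chaotic))
```

The chaotic world compresses to roughly its own length, while the lawful world collapses to a few dozen bytes; only in the latter is there structure for AIT-style definitions to latch onto.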
This is a review of The Turing Guide (2017), written by Jack Copeland, Jonathan Bowen, Mark Sprevak, Robin Wilson, and others. The review includes a new sociological approach to the problem of computability in physics.
Five arguments are presented in favour of the proposal that people who opt in as organ donors should receive a tax break. These arguments appeal to welfare, autonomy, fairness, distributive justice and self-ownership, respectively. Eight worries about the proposal are considered in this paper. These objections focus upon no-effect and counter-productiveness, the Titmuss concern about social meaning, exploitation of the poor, commodification, inequality and unequal status, the notion that there are better alternatives, unacceptable expense, and concerns about the veto of relatives. The paper argues that none of the objections to the proposal is very telling.
Current debate and policy surrounding the use of genetic editing in humans often relies on a binary distinction between therapy and human enhancement. In this paper, we argue that this dichotomy fails to take into account perhaps the most significant potential uses of CRISPR-Cas9 gene editing in humans. We argue that genetic treatment of sporadic Alzheimer’s disease, breast- and ovarian-cancer-causing BRCA1/2 mutations and the introduction of HIV resistance in humans should be considered within a new category of genetic protection treatments. We find that if this category is not introduced, life-altering research might be unnecessarily limited by current or future policy. Otherwise ad hoc decisions might be made, which introduce a risk of unforeseen moral costs, and might overlook or fail to address some important opportunities.
Naturalism is normally taken to be an ideology, censuring non-naturalistic alternatives. But as many critics have pointed out, this ideological stance looks internally incoherent, since it is not obviously endorsed by naturalistic methods. Naturalists who have addressed this problem universally forswear the normative component of naturalism by, in effect, giving up science’s exclusive claim to legitimacy. This option makes naturalism into an empty expression of personal preference that can carry no weight in the philosophical or political spheres. In response to this dilemma, I argue that on a popular construal of naturalism as a commitment to inference to the best explanation, methodological naturalism can be both normative and internally coherent.