Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.
In response to challenges to moral philosophy presented by other disciplines and facing a diversity of approaches to the foundation and focus of morality, this paper argues for a pluralist meta-ethics that is methodologically hierarchical and guided by the principle of subsidiarity. Inspired by Deweyan pragmatism, this novel and original application of the subsidiarity principle and the related methodological proposal for a cascading meta-ethical architecture offer a “dirty” and instrumentalist understanding of meta-ethics that promises to work, not only in moral philosophy but also in the (rest of the) real world, and that facilitates collaboration with other disciplines outside moral philosophy.
This special issue introduces the study of financial technologies and finance to the field of philosophy of technology, bringing together two different fields that have not traditionally been in dialogue. The included articles are: Digital Art as ‘Monetised Graphics’: Enforcing Intellectual Property on the Blockchain, by Martin Zeilinger; Fundamentals of Algorithmic Markets: Liquidity, Contingency, and the Incomputability of Exchange, by Laura Lotti; ‘Crises of Modernity’ Discourses and the Rise of Financial Technologies in a Contested Mechanized World, by Marinus Ossewaarde; Two Technical Images: Blockchain and High-Frequency Trading, by Diego Viana; and The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies, by Wessel Reijers and Mark Coeckelbergh.
Usually technological innovation and artistic work are seen as very distinctive practices, and innovation of technologies is understood in terms of design and human intention. Moreover, thinking about technological innovation is usually categorized as “technical” and disconnected from thinking about culture and the social. Drawing on work by Dewey, Heidegger, Latour, and Wittgenstein and responding to academic discourses about craft and design, ethics and responsible innovation, transdisciplinarity, and participation, this essay questions these assumptions and examines what kind of knowledge and practices are involved in art and technological innovation. It argues that technological innovation is indeed “technical”, but, if conceptualized as techne, can be understood as art and performance. It is argued that in practice, innovative techne is not only connected to episteme as theoretical knowledge but also has the mode of poiesis: it is not just the outcome of human design and intention but rather involves a performative process in which there is a “dialogue” between form and matter and between creator and environment in which humans and non-humans participate. Moreover, this art is embedded in broader cultural patterns and grammars—ultimately a ‘form of life’—that shape and make possible the innovation. In that sense, there is no gap between science and society—a gap that is often assumed in STS and in, for instance, discourse on responsible innovation. It is concluded that technology and art were only relatively recently and unfortunately divorced, conceptually, but that in practices and performances they were always linked. If we understand technological innovation as a poetic, participative, and performative process, then bringing together technological innovation and artistic practices should not be seen as a marginal or luxury project but instead as one that is central, necessary, and vital for cultural-technological change.
This conceptualization not only supports a different approach to innovation but also has social-transformative potential and implications for the ethics of technology and responsible innovation.
An account of distinctively mathematical explanation (DME) should satisfy three desiderata: it should account for the modal import of some DMEs; it should distinguish uses of mathematics in explanation that are distinctively mathematical from those that are not (Baron [2016]); and it should also account for the directionality of DMEs (Craver and Povich [2017]). Baron’s (forthcoming) deductive-mathematical account, because it is modelled on the deductive-nomological account, is unlikely to satisfy these desiderata. I provide a counterfactual account of DME, the Narrow Ontic Counterfactual Account (NOCA), that can satisfy all three desiderata. NOCA appeals to ontic considerations to account for explanatory asymmetry and ground the relevant counterfactuals. NOCA provides a unification of the causal and the non-causal, the ontic and the modal, by identifying a common core that all explanations share and in virtue of which they are explanatory.
Batterman and Rice ([2014]) argue that minimal models possess explanatory power that cannot be captured by what they call ‘common features’ approaches to explanation. Minimal models are explanatory, according to Batterman and Rice, not in virtue of accurately representing relevant features, but in virtue of answering three questions that provide a ‘story about why large classes of features are irrelevant to the explanandum phenomenon’ ([2014], p. 356). In this article, I argue, first, that a method (the renormalization group) they propose to answer the three questions cannot answer them, at least not by itself. Second, I argue that answers to the three questions are unnecessary to account for the explanatoriness of their minimal models. Finally, I argue that a common features account, what I call the ‘generalized ontic conception of explanation’, can capture the explanatoriness of minimal models.
An archive of Mark Sharlow's two blogs, "The Unfinishable Scroll" and "Religion: the Next Version." Covers Sharlow's views on metaphysics, epistemology, mind, science, religion, and politics. Includes topics and ideas not found in his papers.
Lange argues that some natural phenomena can be explained by appeal to mathematical, rather than natural, facts. In these “distinctively mathematical” explanations, the core explanatory facts are either modally stronger than facts about ordinary causal law or understood to be constitutive of the physical task or arrangement at issue. Craver and Povich argue that Lange’s account of DME fails to exclude certain “reversals”. Lange has replied that his account can avoid these directionality charges. Specifically, Lange argues that in legitimate DMEs, but not in their “reversals,” the empirical fact appealed to in the explanation is “understood to be constitutive of the physical task or arrangement at issue” in the explanandum. I argue that Lange’s reply is unsatisfactory because it leaves the crucial notion of being “understood to be constitutive of the physical task or arrangement” obscure in ways that fail to block “reversals” except by an apparent ad hoc stipulation or by abandoning the reliance on understanding and instead accepting a strong realism about essence.
We sketch the mechanistic approach to levels, contrast it with other senses of “level,” and explore some of its metaphysical implications. This perspective allows us to articulate what it means for things to be at different levels, to distinguish mechanistic levels from realization relations, and to describe the structure of multilevel explanations, the evidence by which they are evaluated, and the scientific unity that results from them. This approach is not intended to solve all metaphysical problems surrounding physicalism. Yet it provides a framework for thinking about how the macroscopic phenomena of our world are or might be related to its most fundamental entities and activities.
Philosophers of psychology debate, among other things, which psychological models, if any, are (or provide) mechanistic explanations. This should seem a little strange given that there is rough consensus on the following two claims: 1) a mechanism is an organized collection of entities and activities that produces, underlies, or maintains a phenomenon, and 2) a mechanistic explanation describes, represents, or provides information about the mechanism producing, underlying, or maintaining the phenomenon to be explained (i.e. the explanandum phenomenon) (Bechtel and Abrahamsen 2005; Craver 2007). If there is a rough consensus on what mechanisms are and that mechanistic explanations describe, represent, or provide information about them, then how is there no consensus on which psychological models are (or provide) mechanistic explanations? Surely the psychological models that are mechanistic explanations are the models that describe, represent, or provide information about mechanisms. That is true, of course; the trouble arises when determining what exactly that involves. Philosophical disagreement over which psychological models are mechanistic explanations is often disagreement about what it means to describe, represent, or provide information about a mechanism, among other things (Hochstein 2016; Levy 2013). In addition, one's position in this debate depends on a host of other seemingly arcane metaphysical issues, such as the nature of mechanisms, computational and functional properties (Piccinini 2016), and realization (Piccinini and Maley 2014), as well as the relation between models, methodologies, and explanations (Craver 2014; Levy 2013; Zednik 2015).
Although I inevitably advocate a position, my primary aim in this chapter is to spell out all these relationships and canvass the positions that have been taken (or could be taken) with respect to mechanistic explanation in psychology, using dynamical systems models and cognitive models (or functional analyses) as examples.
Mechanistic explanations satisfy widely held norms of explanation: the ability to manipulate and answer counterfactual questions about the explanandum phenomenon. A currently debated issue is whether any nonmechanistic explanations can satisfy these explanatory norms. Weiskopf argues that the models of object recognition and categorization, JIM, SUSTAIN, and ALCOVE, are not mechanistic yet satisfy these norms of explanation. In this article I argue that these models are mechanism sketches. My argument applies recent research using model-based functional magnetic resonance imaging, a novel neuroimaging method whose significance for current debates on psychological models and mechanistic explanation has yet to be explored.
Extra-mathematical explanations explain natural phenomena primarily by appeal to mathematical facts. Philosophers disagree about whether there are extra-mathematical explanations, the correct account of them if they exist, and their implications (e.g., for the philosophy of scientific explanation and for the metaphysics of mathematics) (Baker 2005, 2009; Bangu 2008; Colyvan 1998; Craver and Povich 2017; Lange 2013, 2016, 2018; Mancosu 2008; Povich 2019, 2020; Steiner 1978). In this discussion note, I present three desiderata for any account of extra-mathematical explanation and argue that Baron’s (2020) U-Counterfactual Theory fails to meet each of them. I conclude with some reasons for pessimism that a successful account will be forthcoming.
Lange’s collection of expanded, mostly previously published essays, packed with numerous, beautiful examples of putatively non-causal explanations from biology, physics, and mathematics, challenges the increasingly ossified causal consensus about scientific explanation, and, in so doing, launches a new field of philosophic investigation. However, those who embraced causal monism about explanation have done so because appeal to causal factors sorts good from bad scientific explanations and because the explanatory force of good explanations seems to derive from revealing the relevant causal (or ontic) structures. The taxonomic project of collecting examples and sorting their types is an essential starting place for a theory of non-causal explanation. But the title of Lange’s book requires something further: showing that the putative explanations are, in fact, explanatory and revealing the non-causal source of their explanatory power. This project is incomplete if there are examples of putative non-causal explanations that fit the form but that nobody would accept as explanatory (absent a radical revision of intuitions). Here we provide some reasons for thinking that there are such examples.
Using path-breaking discoveries of cognitive science, Mark Johnson argues that humans are fundamentally imaginative moral animals, challenging the view that morality is simply a system of universal laws dictated by reason. According to the Western moral tradition, we make ethical decisions by applying universal laws to concrete situations. But Johnson shows how research in cognitive science undermines this view and reveals that imagination has an essential role in ethical deliberation. Expanding his innovative studies of human reason in Metaphors We Live By and The Body in the Mind, Johnson provides the tools for more practical, realistic, and constructive moral reflection.
Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.
In “What Makes a Scientific Explanation Distinctively Mathematical?” (2013b), Lange uses several compelling examples to argue that certain explanations for natural phenomena appeal primarily to mathematical, rather than natural, facts. In such explanations, the core explanatory facts are modally stronger than facts about causation, regularity, and other natural relations. We show that Lange's account of distinctively mathematical explanation is flawed in that it fails to account for the implicit directionality in each of his examples. This inadequacy is remediable in each case by appeal to ontic facts that account for why the explanation is acceptable in one direction and unacceptable in the other direction. The mathematics involved in these examples cannot play this crucial normative role. While Lange's examples fail to demonstrate the existence of distinctively mathematical explanations, they help to emphasize that many superficially natural scientific explanations rely for their explanatory force on relations of stronger-than-natural necessity. These are not opposing kinds of scientific explanations; they are different aspects of scientific explanation.
Bird’s (Essays in Collective Epistemology, Oxford University Press, Oxford, 2014) account of social knowledge (SK) denies that scientific social knowledge supervenes solely on the mental states of individuals. Lackey objects that SK cannot accommodate a knowledge-action principle and the role of group defeaters. I argue that Lackey’s knowledge-action principle is ambiguous. On one disambiguation, it is false; on the other, it is true but poses no threat to SK. Regarding group defeaters, I argue that there are at least two options available to the defender of SK, both taken from literature on individual defeaters and applied to group defeaters. Finally, I argue that Lackey’s description of the case of Dr. N.—as a case in which the scientific community does not know but is merely in a position to know—is mistaken. It assumes that Dr. N.’s publication is not scientific knowledge. An analogy to the individual case shows that it is plausible that the scientific community is not merely in a position to know, although its members are. This leaves intact a conception of social knowledge on which it does not supervene on the mental states of individuals.
An important strand in philosophy of science takes scientific explanation to consist in the conveyance of some kind of information. Here I argue that this idea is also implicit in some core arguments of mechanists, some of whom are proponents of an ontic conception of explanation that might be thought inconsistent with it. However, informational accounts seem to conflict with some lay and scientific commonsense judgments and a central goal of the theory of explanation, because information is relative to the background knowledge of agents. Sometimes we make lay judgments about whether a model is an explanation simpliciter, not just an explanation relative to some particular agent. And as philosophers of explanation, we would like a philosophical account to tell us when a model is an explanation simpliciter, not just when a model is an explanation relative to some particular agent. Thus, even if one’s account of explanation is not concerned with explanation qua communicative or speech act, the account’s reliance on the concept of information generates a prima facie conflict between the claims that 1) explanation is the conveyance of information, 2) information is relative to the background knowledge of an agent, and 3) some models are explanations not relative to the background knowledge of any particular agent. I sketch a solution to this puzzle by distinguishing informationally what I call “explanation simpliciter” from what I call “explanation-to,” relativizing the latter to an individual’s background knowledge and the former to what I call “total scientific background knowledge”.
Beliefs are concrete particulars containing ideas of properties and notions of things, which also are concrete. The claim made in a belief report is that the agent has a belief (i) whose content is a specific singular proposition, and (ii) which involves certain of the agent's notions and ideas in a certain way. No words in the report stand for the notions and ideas, so they are unarticulated constituents of the report's content (like the relevant place in "it's raining"). The belief puzzles (Hesperus, Cicero, Pierre) involve reports about two different notions. So the analysis gets the puzzling truth values right.
Mark Eli Kalderon presents an original study of perception, taking as its starting point a puzzle in Empedocles' theory of vision: if perception is a mode of material assimilation, how can we perceive colors at a distance? Kalderon argues that the theory of perception offered by Aristotle in answer to the puzzle is both attractive and defensible.
According to a naïve view sometimes apparent in the writings of moral philosophers, ‘ought’ often expresses a relation between agents and actions – the relation that obtains between an agent and an action when that action is what that agent ought to do. It is not part of this naïve view that ‘ought’ always expresses this relation – on the contrary, adherents of the naïve view are happy to allow that ‘ought’ also has an epistemic sense, on which it means, roughly, that some proposition is likely to be the case, and adherents of the naïve view are also typically happy to allow that ‘ought’ also has an evaluative sense, on which it means, roughly, that were things ideal, some proposition would be the case. What is important to the naïve view is not that these other senses of ‘ought’ do not exist, but rather that they are not exhaustive – for what they leave out is the important deliberative sense of ‘ought’, which is the central subject of moral inquiry about what we ought to do and why – and it is this deliberative sense of ‘ought’ which the naïve view understands to express a relation between agents and actions. In contrast, logically and linguistically sophisticated philosophers – with a few notable exceptions – have rejected this naïve view. According to a dominant perspective in the interpretation of deontic logic and in linguistic semantics, for example, articulated by Roderick Chisholm (1964) and Bernard Williams (1981) in philosophy and in the dominant paradigm in linguistic semantics as articulated in particular by...
Negative facts get a bad press. One reason for this is that it is not clear what negative facts are. We provide a theory of negative facts on which they are no stranger than positive atomic facts. We show that none of the usual arguments hold water against this account. Negative facts exist in the usual sense of existence and conform to an acceptable Eleatic principle. Furthermore, there are good reasons to want them around, including their roles in causation, chance-making and truth-making, and in constituting holes and edges.
Causation is one of philosophy's most venerable and thoroughly-analyzed concepts. However, the study of how ordinary people make causal judgments is a much more recent addition to the philosophical arsenal. One of the most prominent views of causal explanation, especially in the realm of harmful or potentially harmful behavior, is that unusual or counternormative events are accorded privileged status in ordinary causal explanations. This is a fundamental assumption in psychological theories of counterfactual reasoning, and has been transported to philosophy by Hitchcock and Knobe (2009). A different view--the basis of the culpable control model of blame (CCM)--is that primary causal status is accorded to behaviors that arouse negative evaluative reactions, including behaviors that stem from nefarious motives, negligence or recklessness, a faulty character, or behaviors that lead to harmful or potentially harmful consequences. This paper describes four empirical studies that show consistent support for the CCM.
Fitting Attitudes accounts of value analogize or equate being good with being desirable, on the premise that ‘desirable’ means not, ‘able to be desired’, as Mill has been accused of mistakenly assuming, but ‘ought to be desired’, or something similar. The appeal of this idea is visible in the critical reaction to Mill, which generally goes along with his equation of ‘good’ with ‘desirable’ and only balks at the second step, and it crosses broad boundaries in terms of philosophers’ other commitments. For example, Fitting Attitudes accounts play a central role both in T.M. Scanlon’s [1998] case against teleology, and in Michael Smith [2003], [unpublished] and Doug Portmore’s [2007] cases for it. And of course they have a long and distinguished history.
YouTube has been implicated in the transformation of users into extremists and conspiracy theorists. The alleged mechanism for this radicalizing process is YouTube’s recommender system, which is optimized to amplify and promote clips that users are likely to watch through to the end. YouTube optimizes for watch-through for economic reasons: people who watch a video through to the end are likely to then watch the next recommended video as well, which means that more advertisements can be served to them. This is a seemingly innocuous design choice, but it has a troubling side-effect. Critics of YouTube have alleged that the recommender system tends to recommend extremist content and conspiracy theories, as such videos are especially likely to capture and keep users’ attention. To date, the problem of radicalization via the YouTube recommender system has been a matter of speculation. The current study represents the first systematic, pre-registered attempt to establish whether and to what extent the recommender system tends to promote such content. We begin by contextualizing our study in the framework of technological seduction. Next, we explain our methodology. After that, we present our results, which are consistent with the radicalization hypothesis. Finally, we discuss our findings, as well as directions for future research and recommendations for users, industry, and policy-makers.
Each truth has a truthmaker: an entity in virtue of whose existence that truth is true. So say truthmaker maximalists. Arguments for maximalism are hard to find, whereas those against are legion. Most accept that maximalism comes at a significant cost, which many judge to be too high. The scales would seem to be balanced against maximalism. Yet, as I show here, maximalism can be derived from an acceptable premise which many will pre-theoretically accept.
In his contribution, Mark Alfano lays out a new (to virtue theory) naturalistic way of determining what the virtues are, what it would take for them to be realized, and what it would take for them to be at least possible. This method is derived in large part from David Lewis’s development of Frank Ramsey’s method of implicit definition. The basic idea is to define a set of terms not individually but in tandem. This is accomplished by assembling all and only the common sense platitudes that involve them (e.g., typically, people want to be virtuous), conjoining those platitudes, and replacing the terms in question by existentially quantified variables. If the resulting sentence is satisfied, then whatever satisfies it are the virtues. If it isn’t satisfied, there are a couple of options. First, one could just admit defeat by saying that people can’t be virtuous. More plausibly, one could weaken the conjunction by dropping a small number of the platitudes from it (and potentially adding some others). Alfano suggests that the most attractive way to do this is by dropping the platitudes that deal with cross-situational consistency and replacing them with platitudes that involve social construction: basically, people are virtuous (when they are) at least in part because other people signal their expectations of virtuous conduct, which induces virtuous conduct, which in turn induces further signals of expected virtuous conduct, and so on.
Many scholars agree that the Internet plays a pivotal role in self-radicalization, which can lead to behaviours ranging from lone-wolf terrorism to participation in white nationalist rallies to mundane bigotry and voting for extremist candidates. However, the mechanisms by which the Internet facilitates self-radicalization are disputed; some fault the individuals who end up self-radicalized, while others lay the blame on the technology itself. In this paper, we explore the role played by technological design decisions in online self-radicalization in its myriad guises, encompassing extreme as well as more mundane forms. We begin by characterizing the phenomenon of technological seduction. Next, we distinguish between top-down seduction and bottom-up seduction. We then situate both forms of technological seduction within the theoretical model of dynamical systems theory. We conclude by articulating strategies for combatting online self-radicalization.
I develop and defend a truthmaker semantics for the relevant logic R. The approach begins with a simple philosophical idea and develops it in various directions, so as to build a technically adequate relevant semantics. The central philosophical idea is that truths are true in virtue of specific states. Developing the idea formally results in a semantics on which truthmakers are relevant to what they make true. A very natural notion of conditionality is added, giving us relevant implication. I then investigate ways to add conjunction, disjunction, and negation; and I discuss how to justify contraposition and excluded middle within a truthmaker semantics.
In “On Sense and Reference,” surrounding his discussion of how we describe what people say and think, identity is Frege’s first stop and his last. We will follow Frege’s plan here, but we will stop also in the land of make-believe.
Johnston presents an argument for a form of immortality that divests the notion of any supernatural elements. The book is packed with illuminating philosophical reflection on the question of what we are, and what it is for us to persist over time.
This paper brings together two erstwhile distinct strands of philosophical inquiry: the extended mind hypothesis and the situationist challenge to virtue theory. According to proponents of the extended mind hypothesis, the vehicles of at least some mental states (beliefs, desires, emotions) are not located solely within the confines of the nervous system (central or peripheral) or even the skin of the agent whose states they are. When external props, tools, and other systems are suitably integrated into the functional apparatus of the agent, they are partial bearers of her cognitions, motivations, memories, and so on. According to proponents of the situationist challenge to virtue theory, dispositions located solely within the confines of the nervous system (central or peripheral) or even the skin of the agent to whom they are attributed typically do not meet the normative standards associated with either virtue or vice (moral, epistemic, or otherwise) because they are too susceptible to moderating external variables, such as mood modulators, ambient sensibilia, and social expectation signaling. We here draw on both of these literatures to formulate two novel views – the embedded and extended character hypotheses – according to which the vehicles of not just mental states but longer-lasting, wider-ranging, and normatively-evaluable agentic dispositions are sometimes located partially beyond the confines of the agent’s skin.
In this paper, I describe some of what I take to be the more interesting features of friendship, then explore the extent to which other virtues can be reconstructed as sharing those features. I use trustworthiness as my example throughout, but I think that other virtues such as generosity & gratitude, pride & respect, and the producer’s & consumer’s sense of humor can also be analyzed with this model. The aim of the paper is not to demonstrate that all moral virtues are exactly like friendship in all important respects, but rather to articulate a fruitful model in which to explore the virtues. Section 2 explores the relational nature of friendship, drawing on Aristotle’s discussion of friendship in the Nicomachean Ethics. Section 3 catalogues four motivations for taking seriously the friendship model of virtue. Section 4 applies the friendship model in depth to the virtue of trustworthiness.
Impossible worlds are representations of impossible things and impossible happenings. They earn their keep in a semantic or metaphysical theory if they do the right theoretical work for us. As it happens, a worlds-based account provides the best philosophical story about semantic content, knowledge and belief states, cognitive significance and cognitive information, and informative deductive reasoning. A worlds-based story may also provide the best semantics for counterfactuals. But to function well, all these accounts need use of impossible as well as possible worlds. So what are impossible worlds? Graham Priest claims that any of the usual stories about possible worlds can be told about impossible worlds, too. But far from it. I'll argue that impossible worlds cannot be genuine worlds, of the kind proposed by Lewis, McDaniel or Yagisawa. Nor can they be ersatz worlds on the model proposed by Melia or Sider. Constructing impossible worlds, it turns out, requires novel metaphysical resources.
The basic idea of expressivism is that for some sentences ‘P’, believing that P is not just a matter of having an ordinary descriptive belief. This is a way of capturing the idea that the meaning of some sentences either exceeds their factual/descriptive content or doesn’t consist in any particular factual/descriptive content at all, even in context. The paradigmatic application for expressivism is within metaethics, and holds that believing that stealing is wrong involves having some kind of desire-like attitude, with world-to-mind direction of fit, either in place of, or in addition to, being in a representational state of mind with mind-to-world direction of fit. Because expressivists refer to the state of believing that P as the state of mind ‘expressed’ by ‘P’, this view can also be described as the view that ‘stealing is wrong’ expresses a state of mind that involves a desire-like attitude instead of, or in addition to, a representational state of mind. According to some expressivists - unrestrained expressivists, as I’ll call them - there need be no special relationship among the different kinds of state of mind that can be expressed by sentences. Pick your favorite state of mind, the unrestrained expressivist allows, and there could, at least in principle, be a sentence that expressed it. Expressivists who seem to have been unrestrained plausibly include Ayer in Language, Truth, and Logic, and Simon Blackburn in many of his writings, including his [1984], [1993], and …
The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two types of Type I error probability: the Neyman–Pearson long run Type I error rate and the Fisherian sample-specific Type I error probability. It is argued that the Neyman–Pearson Type I error rate is inapplicable in social science because it refers to a long run of exact replications, and social science deals with irreversible units that make exact replications impossible. Instead, the Fisherian sample-specific Type I error probability is recommended as a more meaningful way to conceptualize false positive results in social science because it can be applied to each sample-specific decision about rejecting the same substantive null hypothesis in a series of direct replications. It is concluded that the replication crisis may be partly due to researchers’ unrealistic expectations about replicability based on their consideration of the Neyman–Pearson Type I error rate across a long run of exact replications.
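The distinction the abstract draws can be illustrated with a small simulation. The sketch below is not from the article; all function names are illustrative. It contrasts a single Fisherian sample-specific p-value for one dataset with the Neyman–Pearson long-run rejection rate obtained by simulating many exact replications under a true null hypothesis (something only possible in simulation, which is part of the article's point).

```python
import math
import random
import statistics

def one_sample_p_value(sample, mu0=0.0):
    """Two-sided one-sample test p-value, using a normal approximation
    (adequate for the sample sizes simulated here)."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (mean - mu0) / se
    # Two-sided p-value under the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def long_run_type_i_rate(alpha=0.05, replications=2000, n=50, seed=1):
    """Neyman-Pearson view: the Type I error rate over a long run of
    exact replications drawn from the same null population."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(replications):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]  # null is true
        if one_sample_p_value(sample) < alpha:
            rejections += 1
    return rejections / replications

# Fisherian view: one sample-specific p-value for one observed dataset.
rng = random.Random(42)
p = one_sample_p_value([rng.gauss(0.0, 1.0) for _ in range(50)])
rate = long_run_type_i_rate()
print(f"sample-specific p-value: {p:.3f}")
print(f"long-run rejection rate at alpha=0.05: {rate:.3f}")
```

The long-run rate hovers near the nominal alpha only because the simulated replications are exact; with real social-science samples, no such long run exists, which is the asymmetry the article exploits.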
This paper will examine the nature of mechanisms and the distinction between the relevant and irrelevant parts involved in a mechanism’s operation. I first consider Craver’s account of this distinction in his book on the nature of mechanisms, and explain some problems. I then offer a novel account of the distinction that appeals to some resources from Mackie’s theory of causation. I end by explaining how this account enables us to better understand what mechanisms are and their various features.
The concepts of placebos and placebo effects refer to extremely diverse phenomena. I recommend dissolving the concepts of placebos and placebo effects into loosely related groups of specific mechanisms, including expectation-fulfillment, classical conditioning, and attentional-somatic feedback loops. If this approach is on the right track, it has three main implications for the ethics of informed consent. First, because of the expectation-fulfillment mechanism, the process of informing cannot be considered independently from the potential effects of treatment. Obtaining informed consent influences the effects of treatment. This provides support for the authorized concealment and authorized deception paradigms, and perhaps even for outright deceptive placebo use. Second, doctors may easily fail to consider the potential benefits of conditioning, leading them to misjudge the trade-off between beneficence and autonomy. Third, how attentional-somatic feedback loops play out depends not only on the content of the informing process but also on its framing. This suggests a role for libertarian paternalism in clinical practice.
We argue that the interaction of biased media coverage and widespread employment of the recognition heuristic can produce epistemic injustices. First, we explain the recognition heuristic as studied by Gerd Gigerenzer and colleagues, highlighting how some of its components are largely external to, and outside the control of, the cognitive agent. We then connect the recognition heuristic with recent work on the hypotheses of embedded, extended, and scaffolded cognition, arguing that the recognition heuristic is best understood as an instance of scaffolded cognition. In section three, we consider the double-edged sword of cognitive scaffolding. On the one hand, scaffolds can reduce the internal processing demands on cognitive agents while increasing their access to information. On the other hand, the use of scaffolding leaves cognitive agents increasingly vulnerable to forming false beliefs or failing to form beliefs at all about particular topics. With respect to the recognition heuristic, agents rely on third parties (such as the media) to report not just what’s true but also what’s important or valuable. This makes cognitive agents relying on these third parties vulnerable to two erroneous influences: 1) because they don’t recognize something, it isn’t important or valuable, and 2) because they do recognize something, it is important or valuable. Call the latter the Kardashian Inference and the former the Darfur Inference. In section four, we use Fricker’s (2007) concept of epistemic injustice to characterize the nature and harm of these false inferences, with special emphasis on the Darfur Inference. In section five, we use data-mining and an empirical study to show how Gigerenzer’s population estimation task is liable to produce Darfur Inferences. We conclude with some speculative remarks on more important Darfur Inferences, and how to avoid them by scaffolding better. One primary way to accomplish this is to shift the burden of embodying the virtue of epistemic justice from the hearer or consumer of media to the media themselves.
Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
I know that I could have been where you are right now and that you could have been where I am right now, but that neither of us could have been turnips or natural numbers. This knowledge of metaphysical modality stands in need of explanation. I will offer an account based on our knowledge of the natures, or essences, of things. I will argue that essences need not be viewed as metaphysically bizarre entities; that we can conceptualise and refer to essences; and that we can gain knowledge of them. We can know about which properties are, and which properties are not, essential to a given entity. This knowledge of essence offers a route to knowledge of the ways those entities must be or could be.
Protests and counter-protests seek to draw and direct attention and concern with confronting images and slogans. In recent years, as protests and counter-protests have partially migrated to the digital space, such images and slogans have also gone online. Two main ways in which these images and slogans are translated to the online space are through the use of emoji and hashtags. Despite sustained academic interest in online protests, hashtag activism and the use of emoji across social media platforms, little is known about the specific functional role that emoji and hashtags play in online social movements. In an effort to fill this gap, the current paper studies both hashtags and emoji in the context of the Twitter discourse around the Black Lives Matter movement.
Hobbes emphasized that the state of nature is a state of war because it is characterized by fundamental and generalized distrust. Exiting the state of nature and the conflicts it inevitably fosters is therefore a matter of establishing trust. Extant discussions of trust in the philosophical literature, however, focus either on isolated dyads of trusting individuals or trust in large, faceless institutions. In this paper, I begin to fill the gap between these extremes by analyzing what I call the topology of communities of trust. Such communities are best understood in terms of interlocking dyadic relationships that approximate the ideal of being symmetric, Euclidean, reflexive, and transitive. Few communities of trust live up to this demanding ideal, and those that do tend to be small (between three and fifteen individuals). Nevertheless, such communities of trust serve as the conditions for the possibility of various important prudential, epistemic, cultural, and mental health goods. However, communities of trust also make possible various problematic phenomena. They can become insular and walled off from the surrounding community, leading to distrust of out-groups. And they can lead their members to abandon public goods for tribal or parochial goods. These drawbacks of communities of trust arise from some of the same mechanisms that give them positive prudential, epistemic, cultural, and mental health value – and so can at most be mitigated, not eliminated.
This book investigates what change is, according to Aristotle, and how it affects his conception of being. Mark Sentesy argues that change leads Aristotle to develop first-order metaphysical concepts such as matter, potency, actuality, sources of being, and the teleology of emerging things. He shows that Aristotle’s distinctive ontological claim—that being is inescapably diverse in kind—is anchored in his argument for the existence of change.

Aristotle may be the only thinker to have given a noncircular definition of change. When he gave this definition, arguing that change is real was a losing proposition. To show that it exists, he had to rework the way philosophers understood reality. His groundbreaking analysis of change has long been interpreted through a Platonist lens, however, in which being is conceived as unchanging. Offering a comprehensive reexamination of the relationship between change and being in Aristotle, Sentesy makes an important contribution to scholarship on Aristotle, ancient philosophy, the history and philosophy of science, and metaphysics.