Alongside existing research into the social, political and economic impacts of the Web, there is a need to study the Web from a cognitive and epistemic perspective. This is particularly so as new and emerging technologies alter the nature of our interactive engagements with the Web, transforming the extent to which our thoughts and actions are shaped by the online environment. Situated and ecological approaches to cognition are relevant to understanding the cognitive significance of the Web because of the emphasis they place on forces and factors that reside at the level of agent–world interactions. In particular, by adopting a situated or ecological approach to cognition, we are able to assess the significance of the Web from the perspective of research into embodied, extended, embedded, social and collective cognition. The results of this analysis help to reshape the interdisciplinary configuration of Web Science, expanding its theoretical and empirical remit to include the disciplines of both cognitive science and the philosophy of mind.
Proponents of the extended mind have suggested that phenomenal transparency may be important to the way we evaluate putative cases of cognitive extension. In particular, it has been suggested that in order for a bio-external resource to count as part of the machinery of the mind, it must qualify as a form of transparent equipment or transparent technology. The present paper challenges this claim. It also challenges the idea that phenomenological properties can be used to settle disputes regarding the constitutional status of bio-external resources in episodes of extended cognizing. Rather than regard phenomenal transparency as a criterion for cognitive extension, we suggest that transparency is a feature of situations that support the ascription of certain cognitive/mental dispositional properties to both ourselves and others. By directing attention to the forces and factors that motivate disposition ascriptions, we arrive at a clearer picture of the role of transparency in arguments for extended cognition and the extended mind. As it turns out, transparency is neither necessary nor sufficient for cognitive extension, but this does not mean that it is entirely irrelevant to our understanding of the circumstances in which episodes of extended cognizing are apt to arise.
Suppose that disgust can provide evidence of moral wrongdoing. What account of disgust might make sense of this? A recent and promising theory is the social contagion view, proposed by Alexandra Plakias. After criticizing both its descriptive and normative claims, I draw two conclusions. First, we should question the wisdom of drawing so straight a line from biological poisons and pathogens to social counterparts. Second, we don’t need to explain the evidential value of disgust by appealing to what the response tracks. These lessons point toward an alternative: namely, that disgust is a moral heuristic. On the heuristic view, disgust is a trigger for the subconscious use of a particular rule. I show how this view fits with a plausible hypothesis about the social function of disgust, and then apply it to Leon Kass’s famous use of repugnance to criticize cloning.
In this paper, we argue that several recent ‘wide’ perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives towards building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. The claim is substantiated with reference to recent developments in the study of “mindreading” and debates on emotions. We claim that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals towards the integration of wide perspectives with the rest of the cognitive (neuro)sciences.
In this chapter, we analyze the relationships between the Internet and its users in terms of situated cognition theory. We first argue that the Internet is a new kind of cognitive ecology, providing almost constant access to a vast amount of digital information that is increasingly integrated into our cognitive routines. We then briefly introduce situated cognition theory and its species of embedded, embodied, extended, distributed and collective cognition. Having thus set the stage, we begin by taking an embedded cognition view and analyzing how the Internet aids certain cognitive tasks. After that, we conceptualize how the Internet enables new kinds of embodied interaction and extends certain aspects of our embodiment, and examine how wearable technologies that monitor physiological, behavioral and contextual states transform the embodied self. On the basis of the degree of cognitive integration between a user and an Internet resource, we then look at how and when the Internet extends our cognitive processes. We end this chapter with a discussion of distributed and collective cognition as facilitated by the Internet.
Is the mind flat? Chater (2018) has recently argued that it is and that, contrary to traditional psychology and the standard folk image, depth of mind is just an illusory confabulation. In this paper, we argue that while there is a kernel of something correct in Chater’s thesis, this does not in itself add up to a critique of mental depth per se. We use Chater’s ideas as a springboard for creating a new understanding of mental depth which builds upon findings in contemporary cognitive science. First, we rely on the predictive processing framework in order to determine a proposed neural contribution to mental depth, specifically in hierarchical predictive knowledge. Second, drawing from an embodied approach to cognition, we argue that mental depth results from the depth of our embodied skills and the situations in which we are embedded. This allows us to introduce a new realist notion of mental depth, one which can only be explained once we attend to the dense patterns of skillful interaction within a rich artefactual and social environment.
Decisions are made under uncertainty when there are distinct outcomes of a given action, and one is uncertain to which the act will lead. Decisions are made under indeterminacy when there are distinct outcomes of a given action, and it is indeterminate to which the act will lead. This paper develops a theory of (synchronic and diachronic) decision-making under indeterminacy that portrays the rational response to such situations as inconstant. Rational agents have to capriciously and randomly choose how to resolve the indeterminacy relevant to a given choice-situation, but such capricious choices once made constrain how they will choose in the future. The account is illustrated by the case of self-interested action in situations where it is indeterminate whether you yourself will survive to benefit or suffer the consequences. The conclusion emphasizes some distinctive anti-hedging predictions of the account.
Inscrutability arguments threaten to reduce interpretationist metasemantic theories to absurdity. Can we find some way to block the arguments? A highly influential proposal in this regard is David Lewis’ ‘eligibility’ response: some theories are better than others, not because they fit the data better, but because they are framed in terms of more natural properties. The purposes of this paper are to outline the nature of the eligibility proposal, making the case that it is not ad hoc, but instead flows naturally from three independently motivated elements; and to show that severe limitations afflict the proposal. In conclusion, I pick out the element of the eligibility response that is responsible for the limitations: future work in this area should therefore concentrate on amending this aspect of the overall theory.
Might it be that the world itself, independently of what we know about it or how we represent it, is metaphysically indeterminate? This article tackles in turn a series of questions: In what sorts of cases might we posit metaphysical indeterminacy? What is it for a given case of indefiniteness to be 'metaphysical'? How does the phenomenon relate to 'ontic vagueness', the existence of 'vague objects', 'de re indeterminacy' and the like? How might the logic work? Are there reasons for postulating this distinctive sort of indefiniteness? Conversely, are there reasons for denying that there is indefiniteness of this sort?
A review of Robert B. Stewart's edited volume concerning a discussion between William Dembski and Michael Ruse. Further contributions are included from William Lane Craig and others.
Jeff Paris proves a generalized Dutch Book theorem. If a belief state is not a generalized probability, then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem. We see that every point of accuracy-domination defines a Dutch book, but we only have a partial converse.
Although the Evans argument against vague identity has been much discussed, proposals for blocking it have not so far satisfied general conditions which any solution ought to meet. Moreover, the relation between ontically vague identity and ontic vagueness more generally has not yet been satisfactorily addressed. I advocate a way of resisting the Evans argument which satisfies the conditions. To show how this approach can vindicate particular cases of ontically vague identity, I develop a framework for describing ontic vagueness in general in terms of multiple actualities. This provides a principled approach to ontically vague identity which is unaffected by the Evans argument.
Worlds where things divide forever ("gunk" worlds) are apparently conceivable. The conceivability of such scenarios has been used as an argument against "nihilist" or "near-nihilist" answers to the special composition question. I argue that the mereological nihilist has the resources to explain away the illusion that gunk is possible.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of ‘might’ counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on ‘might’ counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett’s Hypothesis. I argue that, independently of issues to do with the proper analysis of ‘might’ counterfactuals, Bennett’s Hypothesis is inconsistent with CEM. But Bennett’s Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
Many accounts of structural rationality give a special role to logic. This paper reviews the problem case of clear-eyed logical uncertainty. An account of rational norms on belief that does not give a special role to logic is developed: doxastic probabilism.
Joyce (1998) gives an argument for probabilism: the doctrine that rational credences should conform to the axioms of probability. In doing so, he provides a distinctive take on how the normative force of probabilism relates to the injunction to believe what is true. But Joyce presupposes that the truth values of the propositions over which credences are defined are classical. I generalize the core of Joyce’s argument to remove this presupposition. On the same assumptions as Joyce uses, the credences of a rational agent should always be weighted averages of truth value assignments. In the special case where the truth values are classical, the weighted averages of truth value assignments are exactly the probability functions. In the more general case, probabilistic axioms formulated in terms of classical logic are violated; generalized versions of the axioms formulated in terms of non-classical logics, however, are satisfied.
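To fix ideas about the claim in the preceding abstract, here is a minimal notational sketch; the symbols Cr, W, v_w and λ are illustrative and are not taken from the paper itself. A credence function Cr counts as a weighted average of truth-value assignments when

% Illustrative notation only: Cr, W, v_w, \lambda are not drawn from the paper.
\[
  Cr(A) \;=\; \sum_{w \in W} \lambda(w)\, v_w(A),
  \qquad \lambda(w) \ge 0, \quad \sum_{w \in W} \lambda(w) = 1,
\]

where W is a (finite) set of worlds, v_w(A) is the possibly non-classical truth value of A at w, and λ is a normalized weighting. When each v_w is a classical {0,1}-valuation, such convex combinations are exactly the probability functions; with non-classical truth values, the same form yields the generalized, non-classically formulated axioms the abstract describes.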
Revisionary theories of logic or truth require revisionary theories of mind. This essay outlines nonclassically based theories of rational belief, desire, and decision making, singling out the supervaluational family for special attention. To see these nonclassical theories of mind in action, this essay examines a debate between David Lewis and Derek Parfit over what matters in survival. Lewis argued that indeterminacy in personal identity allows caring about psychological connectedness and caring about personal identity to amount to the same thing. The essay argues that Lewis's treatment of two of Parfit's puzzle cases, degreed survival and fission, presupposes different nonclassical treatments of belief and desire.
John Hawthorne, in a recent paper, takes issue with Lewisian accounts of counterfactuals when relevant laws of nature are chancy. I respond to his arguments on behalf of the Lewisian, and conclude that while some can be rebutted, the case against the original Lewisian account is strong. I develop a neo-Lewisian account of what makes for closeness of worlds, and argue that my revised version avoids Hawthorne’s challenges. I argue that this is closer to the spirit of Lewis’s first (non-chancy) proposal than is Lewis’s own suggested modification.
I outline and motivate a way of implementing a closest world theory of indicatives, appealing to Stalnaker's framework of open conversational possibilities. Stalnakerian conversational dynamics helps us resolve two outstanding puzzles for such a theory of indicative conditionals. The first puzzle, concerning so-called 'reverse Sobel sequences', can be resolved by conversational dynamics in a theory-neutral way: the explanation works as much for Lewisian counterfactuals as for the account of indicatives developed here. Resolving the second puzzle, by contrast, relies on the interplay between the particular theory of indicative conditionals developed here and Stalnakerian dynamics. The upshot is an attractive resolution of the so-called "Gibbard phenomenon" for indicative conditionals.
Some argue that theories of universals should incorporate structural universals, in order to allow for the metaphysical possibility of worlds of 'infinite descending complexity' ('onion worlds'). I argue that the possibility of such worlds does not establish the need for structural universals. So long as we admit the metaphysical possibility of emergent universals, there is an attractive alternative description of such cases.
There are advantages to thrift over honest toil. If we can make do without numbers, we avoid challenging questions over the metaphysics and epistemology of such entities; and we have a good idea, I think, of what a nominalistic metaphysics should look like. But minimizing ontology brings its own problems, for it seems to lead to error theory: saying that large swathes of common sense and best science are false. Should recherché philosophical arguments really convince us to give all this up? Such Moorean considerations are explicitly part of the motivation for the recent resurgence of structured metaphysics, which allows a minimal (perhaps nominalistic) fundamental ontology while avoiding error theory by adopting a permissive stance towards ontology that can be argued to be grounded in the fundamental. This paper evaluates the Moorean arguments, identifying key epistemological assumptions. On the assumption that Moorean arguments can be used to rule out error theory, I examine deflationary ‘representationalist’ rivals to the structured-metaphysics reaction. Quinean paraphrase and fictionalist claims about syntax and semantics are considered and criticized. In the final section, a ‘direct’ deflationary strategy is outlined and the theoretical obligations that it faces are articulated. The position advocated may have us talking a lot like a friend of structured metaphysics, but with a very different conception of what we’re up to.
Supervaluationism is often described as the most popular semantic treatment of indeterminacy. There's little consensus, however, about how to fill out the bare-bones idea to include a characterization of logical consequence. The paper explores one methodology for choosing between the logics: pick a logic that norms belief as classical consequence is standardly thought to do. The main focus of the paper is a variant of standard supervaluationism on which we can characterize degrees of determinacy. It applies the methodology above to focus on degree logic. This is developed first in a basic, single-premise case, and then extended to the multi-premise case and to allow degrees of consequence. The metatheoretic properties of degree logic are set out. On the positive side, the logic is supraclassical: all classically valid sequents are degree-logic valid. Strikingly, metarules such as cut and conjunction introduction fail.
We can use radically different reference-schemes to generate the same truth-conditions for the sentences of a language. In this paper, we do three things. (1) Distinguish two arguments that deploy this observation to derive different conclusions. The first argues that reference is radically indeterminate: there is no fact of the matter what ordinary terms refer to. This threat is taken seriously, and most contemporary metasemantic theories come with resources intended to rebut it. The second argues for radical parochialism about reference: it’s a reflection of our parochial interests, rather than the nature of the subject matter, that our theorizing about language appeals to reference rather than another relation that generates the same truth-conditions. Rebuttals of the first argument cut no ice against the second, because radical parochialism is compatible with reference being determinate. (2) Argue that radical parochialism, like radical indeterminacy, would be shocking if true. (3) Argue that the case for radical parochialism turns on the explanatory purposes of “reference”-talk: on relatively “thin” conceptions, the argument goes through, and radical parochialism is (shockingly!) true; on richer conceptions, the argument can be blocked. We conclude that non-revisionists must endorse, and justify, a relatively rich conception of the explanatory purposes of “reference”-talk.
If one believes that vagueness is an exclusively representational phenomenon, one faces the problem of the many. In the vicinity of Kilimanjaro, there are many, many ‘mountain candidates’, all, apparently, with more-or-less equal claim to be mountains. David Lewis has defended a radical claim: that all the billions of mountain candidates are mountains. This paper argues that the supervaluationist about vagueness should adopt Lewis’ proposal, on pain of losing their best explanation of the seductiveness of the sorites.
I formulate a counterfactual version of the notorious ‘Ramsey Test’. Even in a weak form, this makes counterfactuals subject to the very argument that Lewis used to persuade the majority of the philosophical community that indicative conditionals were in hot water. I outline two reactions: to indicativize the debate on counterfactuals; or to counterfactualize the debate on indicatives.
If the world itself is metaphysically indeterminate in a specified respect, what follows? In this paper, we develop a theory of metaphysical indeterminacy answering this question.
In some sense, survival seems to be an intrinsic matter. Whether or not you survive some event seems to depend on what goes on with you yourself; what happens in the environment shouldn’t make a difference. Likewise, being a person at a time seems intrinsic. The intuition that survival is intrinsic is one factor that makes personal fission puzzles so awkward. Fission scenarios present cases where, if survival is an intrinsic matter, it appears that an individual could survive twice over. But it’s well known that standard notions of “intrinsicality” won’t do to articulate the sense in which survival is intrinsic, since ‘personhood’ appears to be a maximal property. We formulate a sense in which survival and personhood (and perhaps other maximal properties) may be almost intrinsic, a sense that would suffice, for example, to ground fission arguments. It turns out that this notion of almost-intrinsicality allows us to formulate a new version of the problem of the many.
This essay explores the thesis that, for vague predicates, uncertainty over whether a borderline instance x is red/large/tall/good is to be understood as practical uncertainty over whether to treat x as red/large/tall/good. Expressivist and quasi-realist treatments of vague predicates due to John MacFarlane and Daniel Elstein provide the stalking-horse. The essay examines the notion of treating/counting a thing as F, and links a central question about our attitudes to vague predications to normative evaluation of plans to treat a thing as F. It then examines how the account applies to normatively defective or contested terms. The final section raises a puzzle about the mechanics of MacFarlane’s detailed implementation for the case of gradable adjectives.
How are permutation arguments for the inscrutability of reference to be formulated in the context of a Davidsonian truth-theoretic semantics? Davidson takes these arguments to establish that there are no grounds for favouring a reference scheme that assigns London to “Londres”, rather than one that assigns Sydney to that name. We shall see, however, that it is far from clear whether permutation arguments work when set out in the context of the kind of truth-theoretic semantics which Davidson favours. The principle required to make the argument work allows us to resurrect Foster problems against the Davidsonian position. The Foster problems and the permutation inscrutability problems stand or fall together: they are one puzzle, not two.
Two kinds of explanation might be put forward. The first goes like this: the necessary connection between the location of a whole and the location of its parts holds because the location of the whole is nothing but the collective location of its parts. The second style of explanation goes like this: the connection holds because what it is for a material whole to have something as a part is (perhaps among other things) for the whole to contain the part.
Suppose that you're certain that a certain sentence, e.g. "Frida is tall", lacks a determinate truth value. What cognitive attitude should you take towards it: reject it, suspend judgment, or what else? We show that, by adopting a seemingly plausible principle connecting credence in A and credence in Determinately A, we can prove a very implausible answer to this question: namely, that all indeterminate claims should be assigned credence zero. The result is strikingly similar to so-called triviality results in the literature on modals and conditionals.
This paper explores the interaction of well-motivated (if controversial) principles governing the probability of conditionals with accounts of what it is for a sentence to be indefinite. The conclusion can be played in a variety of ways. It could be regarded as a new reason to be suspicious of the intuitive data about the probability of conditionals; or, holding fixed the data, it could be used to gain traction on the philosophical analysis of a contentious notion: indefiniteness. The paper outlines the various options, and shows that ‘rejectionist’ theories of indefiniteness are incompatible with the results. Rejectionist theories include popular accounts such as supervaluationism, non-classical truth-value gap theories, and accounts of indeterminacy that centre on rejecting the law of excluded middle. An appendix compares the results obtained here with the ‘impossibility’ results descending from Lewis (1976).
Information can be public among a group. Whether or not information is public matters, for example, for accounts of interdependent rational choice, of communication, and of joint intention. A standard analysis of public information identifies it with (some variant of) common belief. The latter notion is stipulatively defined as an infinite conjunction: for p to be commonly believed is for it to be believed by all members of a group, for all members to believe that all members believe it, and so forth. This analysis is often presupposed without much argument in philosophy. Theoretical entrenchment or intuitions about cases might give some traction on the question, but give little insight about why the identification holds, if it does. The strategy of this paper is to characterize a practical-normative role for information being public, and to show that the only things that play that role are (variants of) common belief as stipulatively characterized. In more detail: a functional role for "taking a proposition for granted" in non-isolated decision making is characterized. I then present some minimal conditions under which such an attitude is correctly held. The key assumption links this attitude to beliefs about what is public. From minimal a priori principles, we can argue that a proposition being public among a group entails common commitment to believe among that group. Later sections explore partial converses to this result, the factivity of publicity and publicity from the perspective of outsiders to the group, and objections to the apriority of the result deriving from a posteriori existential presuppositions.
In his famous 1982 paper, Allen Newell [22, 23] introduced the notion of the knowledge level to indicate a level of analysis, and prediction, of the rational behavior of a cognitive artificial agent. This analysis concerns the investigation of the availability of the agent's knowledge, in order to pursue its own goals, and is based on the so-called Rationality Principle (an assumption according to which "an agent will use the knowledge it has of its environment to achieve its goals" [22, p. 17]). In Newell's own words: "To treat a system at the knowledge level is to treat it as having some knowledge, some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates" [22, p. 13]. In the last decades, the importance of the knowledge level has been historically and systematically downsized by the research area in cognitive architectures (CAs), whose interests have been mainly focused on the analysis and development of the mechanisms and processes governing human and (artificial) cognition. The knowledge level in CAs, however, represents a crucial level of analysis for the development of such artificial general systems and therefore deserves greater research attention [17]. In the following, we will discuss areas of broad agreement and outline the main problematic aspects that should be faced within a Common Model of Cognition [12]. Such aspects, departing from an analysis at the knowledge level, also clearly impact both lower (e.g. representational) and higher (e.g. social) levels.
I explore the thesis that the future is open, in the sense that future contingents are neither true nor false. The paper is divided into three sections. In the first, I survey how the thesis arises on a variety of contemporary views on the metaphysics of time. In the second, I explore the consequences for rational belief of the ‘Aristotelian’ view that indeterminacy is characterized by truth-value gaps. In the third, I outline one line of defence for the Aristotelian against the puzzles this induces: treating opinion about future contingents as a matter of fictional belief rather than simple belief.
Are counterfactuals with true antecedents and consequents automatically true? That is, is Conjunction Conditionalization (if (X & Y), then (X > Y)) valid? Stalnaker and Lewis think so, but many others disagree. We note here that the extant arguments for Conjunction Conditionalization are unpersuasive, before presenting a family of more compelling arguments. These arguments rely on some standard theorems of the logic of counterfactuals as well as a plausible and popular semantic claim about certain semifactuals. Denying Conjunction Conditionalization, then, requires rejecting other aspects of the standard logic of counterfactuals, or else our intuitive picture of semifactuals.
The metaphysics of representation poses questions such as: in virtue of what does a sentence, picture, or mental state represent that the world is a certain way? In the first instance, I have focused on the semantic properties of language: for example, what is it for a name such as ‘London’ to refer to something? Interpretationism concerning what it is for linguistic expressions to have meaning says that, constitutively, semantic facts are fixed by best semantic theory. As here developed, it promises to give a reductive, universal and non-revisionary account of the nature of linguistic representation.

Interpretationism in general, however, is threatened by severe internal tension, due to arguments for radical inscrutability. These contend that, given the interpretationist setting, there can be no fact of the matter what object an individual word refers to: for example, that there is no fact of the matter as to whether “London” refers to London or to Sydney.

A series of challenges emerges, forming the basis for this thesis. 1. What sort of properties is the interpretationist trying to reduce, and what kind of reductive story is she offering? 2. How are inscrutability theses best formulated? Are arguments for inscrutability effective in their own terms? What kinds of inscrutability arise? 3. Is endorsing radical inscrutability a stable position? 4. Are there theoretical virtues, such as simplicity, that can be appealed to in discrediting the rival (empirically equivalent) theories that underpin inscrutability arguments?

In addressing these questions, I concentrate on diagnosing the source of inscrutability, mapping the space of ways of resisting the arguments for radical inscrutability, and examining the challenges faced in developing a principled account of linguistic content that avoids radical inscrutability.

The effect is not to close down the original puzzles, but rather to sharpen them into a set of new and deeper challenges.