Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing: it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing, but rather that blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
In this paper, I argue that it is open to semicompatibilists to maintain that no ability to do otherwise is required for moral responsibility. This is significant for two reasons. First, it undermines Christopher Evan Franklin’s recent claim that everyone thinks that an ability to do otherwise is necessary for free will and moral responsibility. Second, it reveals an important difference between John Martin Fischer’s semicompatibilism and Kadri Vihvelin’s version of classical compatibilism, which shows that the dispute between them is not merely a verbal dispute. Along the way, I give special attention to the notion of general abilities, and, though I defend the distinctiveness of Fischer’s semicompatibilism against the verbal dispute charge, I also use the discussion of the nature of general abilities to argue for the falsity of a certain claim that Fischer and coauthor Mark Ravizza have made about their account.
In The Varieties of Reference, Gareth Evans describes the acquisition of beliefs about one’s beliefs in the following way: ‘I get myself in a position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p.’ In this paper I argue that Evans’s remark can be used to explain first person authority if it is supplemented with the following consideration: Holding on to the content of a belief and ‘prefixing’ it with ‘I believe that’ is as easy as holding on to the contents of one’s thoughts when making an inference. We do not, usually, have the problem, in going, for example, from ‘p’ and ‘q’ to ‘p and q’, that one of our thought contents gets corrupted. Self-ascription of belief by way of Evans’s procedure is based on the same capacity to retain and re-deploy thought contents and therefore should enjoy a similar degree of authority. However, is Evans’s description exhaustive of all authoritative self-ascription of belief? Christopher Peacocke has suggested that in addition to Evans’s procedure there are two more relevant ways of self-ascribing belief. I argue that both methods can be subsumed under Evans’s procedure.
Intellectual attention, like perceptual attention, is a special mode of mental engagement with the world. When we attend intellectually, rather than making use of sensory information we make use of the kind of information that shows up in occurrent thought, memory, and the imagination (Chun, Golomb, & Turk-Browne, 2011). In this paper, I argue that reflecting on what it is like to comprehend memory demonstratives speaks in favour of the view that intellectual attention is required to understand memory demonstratives. Moreover, I argue that this is a line of thought endorsed by Gareth Evans in his Varieties of Reference (1982). In so doing, I improve on interpretations of Evans that have been offered by Christopher Peacocke (1984), and Christoph Hoerl & Theresa McCormack (a coauthored piece, 2005). I thereby also improve on McDowell’s (1990) criticism of Peacocke’s interpretation of Evans. Like McDowell, I believe that Peacocke might overemphasize the role that “memory-images” play in Evans’ account of comprehending memory demonstratives. But unlike McDowell, I provide a positive characterization of how Evans described the phenomenology of comprehending memory demonstratives.
What would the Merleau-Ponty of Phenomenology of Perception have thought of the use of his phenomenology in the cognitive sciences? This question raises the issue of Merleau-Ponty’s conception of the relationship between the sciences and philosophy, and of what he took the philosophical significance of his phenomenology to be. In this article I suggest an answer to this question through a discussion of certain claims made in connection with the “post-cognitivist” approach to cognitive science by Hubert Dreyfus, Shaun Gallagher and Francisco Varela, Evan Thompson and Eleanor Rosch. I suggest that these claims are indicative of an appropriation of Merleau-Ponty’s thought that he would have welcomed as innovative science. Despite this, I argue that he would have viewed this use of his work as potentially occluding the full philosophical significance that he believed his phenomenological investigations to contain.
Christopher Franklin argues that, despite appearances, everyone thinks that the ability to do otherwise is required for free will and moral responsibility. Moreover, he says that the way to decide which ability to do otherwise is required will involve settling the nature of moral responsibility. In this paper I highlight one point on which those usually called leeway theorists - i.e. those who accept the need for alternatives - agree, in contradistinction to those who deny that the ability to do otherwise is needed for free will. And I explain why it falsifies both of Franklin’s claims.
This is a transcript of a conversation between P F Strawson and Gareth Evans in 1973, filmed for The Open University. Under the title 'Truth', Strawson and Evans discuss whether the distinction between genuinely fact-stating uses of language and other uses can be grounded on a theory of truth, especially a 'thin' notion of truth in the tradition of F P Ramsey.
The representationist maintains that an experience represents a state of affairs. To elaborate, a stimulus of one’s sensorium produces, according to her, a “phenomenal composite” made up of “phenomenal properties” that are the typical effects of certain mind-independent features of the world, which are thereby represented. It is such features, via their phenomenal representatives, of which the subject of an experience would become aware were she to engage in introspection. So, one might ask, what state of affairs would be represented by an illusory experience, that is, one to which no state of affairs in the vicinity of its subject corresponds? The answer, according to the standard defense of representationism (SD), is the same state of affairs that would obtain in its subject’s surroundings if it were veridical.
• It would be a moral disgrace for God (if he existed) to allow the many evils in the world, in the same way it would be for a parent to allow a nursery to be infested with criminals who abused the children. • There is a contradiction in asserting all three of the propositions: God is perfectly good; God is perfectly powerful; evil exists (since if God wanted to remove the evils and could, he would). • The religious believer has no hope of getting away with excuses that evil is not as bad as it seems, or that it is all a result of free will, and so on. Piper avoids mentioning the best solution so far put forward to the problem of evil. It is Leibniz’s theory that God does not create a better world because there isn’t one — that is, that (contrary to appearances) if one part of the world were improved, the ramifications would result in it being worse elsewhere, and worse overall. It is a “bump in the carpet” theory: push evil down here, and it pops up over there. Leibniz put it by saying this is the “Best of All Possible Worlds”. That phrase was a public relations disaster for his theory, suggesting as it does that everything is perfectly fine as it is. He does not mean that, but only that designing worlds is a lot harder than it looks, and determining the amount of evil in the best one is no easy matter. Though humour is hardly appropriate to the subject matter, the point of Leibniz’s idea is contained in the old joke, “An optimist is someone who thinks this is the best of all possible worlds, and a pessimist thinks...
Benjamin Franklin's social and political thought was shaped by contacts with and knowledge of ancient aboriginal traditions. Indeed, a strong case can be made that key features of the social structure eventually outlined in the United States Constitution arose not from European sources, and not full-grown from the foreheads of European-American "founding fathers", but from aboriginal sources, communicated to the authors of the Constitution to a significant extent through Franklin. A brief sketch of the main argument to this effect is offered in this essay.
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. The book explains how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts, and interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
In chapter 7 of The Varieties of Reference, Gareth Evans claimed to have an argument that would present "an antidote" to the Cartesian conception of the self as a purely mental entity. On the basis of considerations drawn from philosophy of language and thought, Evans claimed to be able to show that bodily awareness is a form of self-awareness. The apparent basis for this claim is the datum that sometimes judgements about one’s position based on body sense are immune to errors of misidentification relative to the first-person pronoun 'I'. However, Evans’s argument suffers from a crucial ambiguity. 'I' sometimes refers to the subject's mind, sometimes to the person, and sometimes to the subject's body. Once disambiguated, it turns out that Evans’s argument either begs the question against the Cartesian or fails to be plausible at all. Nonetheless, the argument is important for drawing our attention to the idea that bodily modes of awareness should be taken seriously as possible forms of self-awareness.
James Franklin is Professor of Mathematics at the University of New South Wales. He is a prolific author on philosophical topics, who has written a controversial history of Australian philosophy, as well as a book about Catholic values in the Australian context, among an impressively broad range of topics. In his latest book, What Science Knows And How It Knows It, Franklin seeks to defend the rationality of science against those he describes as the enemies of science. The enemies include the usual suspects: Kuhn, Feyerabend, the strong programme, and the French post-modernists. But there are unusual suspects as well. No doubt, many will be surprised to find Popper and Lakatos ranking high up the list of enemies.
This paper is largely exegetical/interpretive. My goal is to demonstrate that some criticisms that have been leveled against the program Gareth Evans constructs in The Varieties of Reference (Evans 1982, henceforth VR) misfire because they are based on misunderstandings of Evans’ position. First I will be discussing three criticisms raised by Tyler Burge (Burge, 2010). The first has to do with Evans’ arguments to the effect that a causal connection between a belief and an object is insufficient for that belief to be about that object. A key part of Evans’ argument is to carefully distinguish considerations relevant to the semantics of language from considerations relevant to the semantics (so to speak) of thought or belief (to make the subsequent discussion easier, I will henceforth use ‘thought’ as a blanket term for the relevant mental states, including belief). I will argue that Burge’s criticisms depend on largely not taking account of Evans’ distinctions. Second, Burge criticizes Evans’ account of ‘informational content’, taking it to be inconsistent. I will show that the inconsistency Burge finds depends entirely on a misreading of the doctrine. Finally, Burge takes Evans to task for a perceived over-intellectualization in a key aspect of his doctrine. Burge incorrectly reads Evans as requiring that the subject holding a belief be engaged in certain overly intellectual endeavors, when in fact Evans is only attributing these endeavors to theorists of such a subject. Next, I turn to two criticisms leveled by John Campbell (Campbell, 1999). I will argue that Campbell’s criticisms are based on misunderstandings – though they do hit at deeper elements of Evans’ doctrine. First, Campbell reads Evans’ account of demonstrative thought as requiring that the subject’s information link to an object allows her to directly locate that object in space.
Campbell constructs a case in which one tomato (a) is, because of an angled mirror, incorrectly seen as being at a location that happens to be occupied by an identical tomato (b). Campbell claims that Evans’ doctrines require us to conclude that the subject cannot have a demonstrative thought about the seen tomato (a), though it seems intuitively that such a subject would be able to have a demonstrative thought about that tomato, despite its location being inaccurately seen. I show that Evans’ position in fact allows that the subject can have a demonstrative thought about the causal-source tomato in this case because his account does not require that the location of demonstratively identified objects be immediately accurately assessed. What is crucial is that the subject have the ability to accurately discover the location. Second, Campbell criticizes Evans’ notion of a fundamental level of thought. I show that this criticism hinges on a view of the nature and role of the fundamental level of thought that mischaracterizes Evans’ treatment of the notion.
The essential idea of Leibniz’s Theodicy was little understood in his time but has become one of the organizing themes of modern mathematics. There are many phenomena that are possible locally but for purely mathematical reasons impossible globally. For example, it is possible to build a spiral staircase that is rising at any given point, but it is impossible to build one that is rising at all points and comes back to where it started. The necessity is mathematically provable, so not subject to exception by divine power. Leibniz’s Theodicy argues that God could improve the universe locally in many ways, but not globally. This paper defends Leibniz, giving positive reasons for believing that there are so many necessary interconnections between goods and evils that God is faced with a choice like the classic Trolley case, where all of the scenarios that could be chosen upfront contain evils, but some more than others. Local changes for the better seem easy to imagine, but a proper understanding of global constraints undermines the initial impression that they can be done without global cost. The paper concludes by explaining how the context of the Leibnizian argument makes it reasonable to pursue the issue of whether there are no worlds better than this one.
This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.
Both mindreading and stereotyping are forms of social cognition that play a pervasive role in our everyday lives, yet too little attention has been paid to the question of how these two processes are related. This paper offers a theory of the influence of stereotyping on mental-state attribution that draws on hierarchical predictive coding accounts of action prediction. It is argued that the key to understanding the relation between stereotyping and mindreading lies in the fact that stereotypes centrally involve character-trait attributions, which play a systematic role in the action–prediction hierarchy. On this view, when we apply a stereotype to an individual, we rapidly attribute to her a cluster of generic character traits on the basis of her perceived social group membership. These traits are then used to make inferences about that individual’s likely beliefs and desires, which in turn inform inferences about her behavior.
Explains Aristotle's views on the possibility of continuous variation between biological species. While the Porphyrean/Linnean classification of species by a tree suggests species are distributed discretely, Aristotle admitted continuous variation between species among lower life forms.
Comparative valuation of different policy interventions often requires interpersonal comparability of benefit. In the field of health economics, the metric commonly used for such comparison, quality adjusted life years (QALYs) gained, has been criticized for failing to respect the equality of all persons’ intrinsic worth, including particularly those with disabilities. A methodology is proposed that interprets ‘full quality of life’ as the best health prospect that is achievable for the particular individual within the relevant budget constraint. This calibration is challenging both conceptually and operationally as it shifts dramatically when technology or budget developments alter what can be achieved for incapacitated individuals. The proposal nevertheless ensures that the maximal achievable satisfaction of one person’s preferences can carry no more intrinsic value than that of another. This approach, which can be applied to other domains of social valuation, thus prevents implicit discrimination against the elderly and those with irremediable incapacities.
How is human social intelligence engaged in the course of ordinary conversation? Standard models of conversation hold that language production and comprehension are guided by constant, rapid inferences about what other agents have in mind. However, the idea that mindreading is a pervasive feature of conversation is challenged by a large body of evidence suggesting that mental state attribution is slow and taxing, at least when it deals with propositional attitudes such as beliefs. Belief attributions involve contents that are decoupled from our own primary representation of reality; handling these contents has come to be seen as the signature of full-blown human mindreading. However, mindreading in cooperative communication does not necessarily demand decoupling. We argue here for a theoretical and empirical turn towards “factive” forms of mentalizing. In factive mentalizing, we monitor what others do or do not know, without generating decoupled representations. We propose a model of the representational, cognitive, and interactive components of factive mentalizing, a model that aims to explain efficient real-time monitoring of epistemic states in conversation. After laying out this account, we articulate a more limited set of conversational functions for nonfactive forms of mentalizing, including contexts of meta-linguistic repair, deception, and argumentation. We conclude with suggestions for further research into the roles played by factive versus nonfactive forms of mentalizing in conversation.
The problem of the many threatens to show that, in general, there are far more ordinary objects than you might have thought. I present and motivate a solution to this problem using many-one identity. According to this solution, the many things that seem to have what it takes to be, say, a cat, are collectively identical to that single cat.
Character judgments play an important role in our everyday lives. However, decades of empirical research on trait attribution suggest that the cognitive processes that generate these judgments are prone to a number of biases and cognitive distortions. This gives rise to a skeptical worry about the epistemic foundations of everyday characterological beliefs that has deeply disturbing and alienating consequences. In this paper, I argue that this skeptical worry is misplaced: under the appropriate informational conditions, our everyday character-trait judgments are in fact quite trustworthy. I then propose a mindreading-based model of the socio-cognitive processes underlying trait attribution that explains both why these judgments are initially unreliable, and how they eventually become more accurate.
According to the two-systems account of mindreading, our mature perspective-taking abilities are subserved by two distinct mindreading systems: a fast but inflexible, “implicit” system, and a flexible but slow “explicit” one. However, the currently available evidence on adult perspective-taking does not support this account. Specifically, both Level-1 and Level-2 perspective-taking show a combination of efficiency and flexibility that is deeply inconsistent with the two-systems architecture. This inconsistency also turns out to have serious consequences for the two-systems framework as a whole, both as an account of our mature mindreading abilities and of the development of those abilities. What emerges from this critique is a conception of context-sensitive, spontaneous mindreading that may provide insight into how mindreading functions in complex social environments. This in turn offers a bulwark against skepticism about the role of mindreading in everyday social cognition.
Nativists about theory of mind have typically explained why children below the age of four fail the false belief task by appealing to the demands that these tasks place on children’s developing executive abilities. However, this appeal to executive functioning cannot explain a wide range of evidence showing that social and linguistic factors also affect when children pass this task. In this paper, I present a revised nativist proposal about theory of mind development that is able to accommodate these findings, which I call the pragmatic development account. According to this proposal, we can gain a better understanding of the shift in children’s performance on standard false-belief tasks around four years of age by considering how children’s experiences with the pragmatics of belief discourse affect the way they interpret the task.
In the small but growing literature on the philosophy of country music, the question of how we ought to understand the genre’s notion of authenticity has emerged as one of the central questions. Many country music scholars argue that authenticity claims track attributions of cultural standing or artistic self-expression. However, careful attention to the history of the genre reveals that these claims are simply factually wrong. On the basis of this, we have grounds for dismissing these attributions. Here, I argue for an alternative model of authenticity in which we take claims about the relative authenticity of country music to be evidence of ‘country’ being a dual character concept in the same way that it has been suggested of punk rock and hip-hop. Authentic country music is country music that embodies the core value commitments of the genre. These values form the basis of country artists’ and audiences’ practical identities. Part of country music’s aesthetic practice is that audiences reconnect with, reify, and revise this common practical identity through identification with artists and works that manifest these values. We should then think of authenticity discourse within country music as a kind of game within the genre’s practice of shaping and maintaining this practical identity.
ABSTRACT Although in recent years Christine Ladd-Franklin has received recognition for her contributions to logic and psychology, her role in late nineteenth- and early twentieth-century philosophy, as well as her relationship with American pragmatism, has yet to be fully appreciated. My goal here is to attempt to better understand Ladd-Franklin’s place in the pragmatist tradition by drawing attention to her work on the nature and unity of the proposition. The question concerning the unity of the proposition – namely, the problem of how to determine what differentiates a mere collection of terms from a unified and meaningful proposition – received substantial attention in Ladd-Franklin’s time, and would continue to interest analytic philosophers well into the twentieth century. I argue that Ladd-Franklin had a distinct theory of the proposition and solution to the problem of the unity of the proposition that she developed over the course of her writings on logic and philosophy. In spelling out her views, I will also show how her work interacted with and influenced that of the pragmatist who was her greatest influence, C.S. Peirce.
How does one inquire into the truth of first principles? Where does one begin when deciding where to begin? Aristotle recognizes a series of difficulties when it comes to understanding the starting points of a scientific or philosophical system, and contemporary scholars have encountered their own difficulties in understanding his response. I will argue that Aristotle was aware of a Platonic solution that can help us uncover his own attitude toward the problem. Aristotle's central problem with first principles arises from the fact that they cannot be demonstrated in the same way as other propositions. Since demonstrations proceed from prior and better-known principles, if the principles themselves were in need of...
‘Virtue signaling’ is the practice of using moral talk in order to enhance one’s moral reputation. Many find this kind of behavior irritating. However, some philosophers have gone further, arguing that virtue signaling actively undermines the proper functioning of public moral discourse and impedes moral progress. Against this view, I argue that widespread virtue signaling is not a social ill, and that it can actually serve as an invaluable instrument for moral change, especially in cases where moral argument alone does not suffice. Specifically, virtue signaling can change the broader public’s social expectations, which can in turn motivate the adoption of new, positive social norms. I also argue that the reputation-seeking motives underlying virtue signaling impose important constraints on virtue signalers’ behavior, which serve to keep the worst excesses of virtue signaling in check.
Genre discourse is widespread in appreciative practice, whether that is about hip-hop music, romance novels, or film noir. It should be no surprise, then, that philosophers of art have also been interested in genres. Whether they are giving accounts of genres as such or of particular genres, genre talk abounds in philosophy as much as it does in popular discourse. As a result, theories of genre proliferate as well. However, in their accounts, philosophers have so far focused on capturing all of the categories of art that we think of as genres and have focused less on ensuring that only the categories we think are genres are captured by those theories. Each of these theories populates the world with far too many genres because they call a wide class of mere categories of art genres. I call this the problem of genre explosion. In this paper, I survey the existing accounts of genre and describe the kinds of considerations they employ in determining whether a work is a work of a given genre. After this, I demonstrate the ways in which the problem of genre explosion arises for all of these theories and discuss some solutions those theories could adopt that will ultimately not work. Finally, I argue that the problem of genre explosion is best solved by adopting a social view of genres, which can capture the difference between genres and mere categories of art.
In How We Understand Others: Philosophy and Social Cognition, Shannon Spaulding develops a novel account of social cognition with pessimistic implications for mindreading accuracy: according to Spaulding, mistakes in mentalizing are much more common than traditional theories of mindreading commonly assume. In this commentary, I push against Spaulding’s pessimism from two directions. First, I argue that a number of the heuristic mindreading strategies that Spaulding views as especially error prone might be quite reliable in practice. Second, I argue that current methods for measuring mindreading performance are not well-suited for the task of determining whether our mental-state attributions are generally accurate. I conclude that any claims about the accuracy or inaccuracy of mindreading are currently unjustified.
The current industrial revolution is said to be driven by digitization that exploits connected information across all aspects of manufacturing. Standards have been recognized as an important enabler. Ontology-based information standards may provide benefits not offered by current information standards. Although ontologies have been developed in the industrial manufacturing domain, they have been fragmented and inconsistent, and few have attained standard status. With successes in developing coherent ontologies in the biological, biomedical, and financial domains, an effort called the Industrial Ontologies Foundry (IOF) has been formed to pursue the same goal for the industrial manufacturing domain. However, developing a coherent ontology covering the entire industrial manufacturing domain is known to be a mountainous challenge because of the multidisciplinary nature of manufacturing. To manage the scope and expectations, the IOF community kicked off its effort with a proof-of-concept (POC) project. This paper describes the developments within the project. It also provides a brief update on the IOF organizational setup.
In recent years, there has been a heated debate about how to interpret findings that seem to show that humans rapidly and automatically calculate the visual perspectives of others. In the current study, we investigated the question of whether automatic interference effects found in the dot-perspective task (Samson, Apperly, Braithwaite, Andrews, & Bodley Scott, 2010) are the product of domain-specific perspective-taking processes or of domain-general “submentalizing” processes (Heyes, 2014). Previous attempts to address this question have done so by implementing inanimate controls, such as arrows, as stimuli. The rationale for this is that submentalizing processes that respond to directionality should be engaged by such stimuli, whereas domain-specific perspective-taking mechanisms, if they exist, should not. These previous attempts have been limited, however, by the implied intentionality of the stimuli they have used (e.g. arrows), which may have invited participants to imbue them with perspectival agency. Drawing inspiration from “novel entity” paradigms from infant gaze-following research, we designed a version of the dot-perspective task that allowed us to precisely control whether a central stimulus was viewed as animate or inanimate. Across four experiments, we found no evidence that automatic “perspective-taking” effects in the dot-perspective task are modulated by beliefs about the animacy of the central stimulus. Our results also suggest that these effects may be due to the task-switching elements of the dot-perspective paradigm, rather than automatic directional orienting. Together, these results indicate that neither the perspective-taking nor the standard submentalizing interpretations of the dot-perspective task are fully correct.
Plato's Theaetetus discusses and ultimately rejects Protagoras's famous claim that "man is the measure of all things." The most famous of Plato's arguments is the Self-Refutation Argument. But he offers a number of other arguments as well, including one that I call the 'Future Argument.' This argument, which appears at Theaetetus 178a−179b, is quite different from the earlier Self-Refutation Argument. I argue that it is directed mainly at a part of the Protagorean view not addressed before, namely, that all beliefs concerning one's own future sensible qualities are true. This part of the view is found to be inconsistent with Protagoras's own conception of wisdom as expertise and with his own pretenses at expertise in teaching.
This paper is a test case for the claim, made famous by Myles Burnyeat, that the ancient Greeks did not recognize subjective truth or knowledge. After a brief discussion of the issue in Sextus Empiricus, I then turn to Plato's discussion of Protagorean views in the Theaetetus. In at least two passages, it seems that Plato attributes to Protagoras the view that our subjective experiences constitute truth and knowledge, without reference to any outside world of objects. I argue that these passages have been misunderstood and that on the correct reading, they do not say anything about subjective knowledge. I then try out what I take to be the correct reading of the passages. The paper concludes with a brief discussion of the importance of causes in Greek epistemology.
At a crucial juncture in Plato’s Sophist, when the interlocutors have reached their deepest confusion about being and not-being, the Eleatic Visitor proclaims that there is yet hope. Insofar as they clarify one, he maintains, they will equally clarify the other. But what justifies the Visitor’s seemingly oracular prediction? A new interpretation explains how the Visitor’s hope is in fact warranted by the peculiar aporia they find themselves in. The passage describes a broader pattern of ‘exploring both sides’ that lends insight into Plato’s aporetic method.
A brief review of the life and spiritual autobiography of the unique American mystic Adi Da (Franklin Jones). The sticker on the cover of some editions says 'The most profound spiritual autobiography of all time,' and this may well be true. I am in my 70s and have read many books on spiritual teachers and spirituality, and this is one of the greatest. Certainly, it is by far the fullest and clearest account of the process of enlightenment I have ever seen. Even if you have no interest at all in this most fascinating of all human psychological processes, it is an amazing document that reveals a great deal about religion, yoga, and human psychology, and probes the depths and limits of human possibilities. I describe it in some detail and compare his teaching with that of the contemporary Indian mystic Osho. Those wanting a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019) and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).
What has happened with the problem of the cardinality of the continuum after Gödel (1938) and Cohen (1964)? Attempts to answer this question can be found in the articles by José Alfredo Amor (1946-2011), "El problema del continuo después de Cohen (1964-2004)"; by Carlos Di Prisco, "Are we closer to a solution of the continuum problem?"; and by Joan Bagaria, "Natural axioms of set theory and the continuum problem", which can be found in the digital library of my blog on Mathematical Logic and Foundations of Mathematics (see link). Important and up-to-date information on the subject is also available in the entry "The Continuum Hypothesis" of the Stanford Encyclopedia of Philosophy. This brief note discusses the topic in an expository manner.
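For reference, the problem the note surveys can be stated in one line; the formulation below is the standard one, not quoted from the note itself:

```latex
% The continuum hypothesis (CH), whose status after Godel and Cohen
% the note surveys.
\[ \mathrm{CH}:\qquad 2^{\aleph_0} = \aleph_1 \]
% Godel (1938) showed CH cannot be refuted from ZFC; Cohen (1964)
% showed it cannot be proved from ZFC: CH is independent of ZFC.
```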
Social norms are commonly understood as rules that dictate which behaviors are appropriate, permissible, or obligatory in different situations for members of a given community. Many researchers have sought to explain the ubiquity of social norms in human life in terms of the psychological mechanisms underlying their acquisition, conformity, and enforcement. Existing theories of the psychology of social norms appeal to a variety of constructs, from prediction-error minimization, to reinforcement learning, to shared intentionality, to domain-specific adaptations for norm acquisition. In this paper, we propose a novel methodological and conceptual framework for the cognitive science of social norms that we call normative pluralism. We begin with an analysis of the explanatory aims of the cognitive science of social norms. From this analysis, we derive a recommendation for a reformed conception of its explanandum: a minimally psychological construct that we call normative regularities. Our central empirical proposal is that the psychological underpinnings of social norms are most likely realized by a heterogeneous set of cognitive, motivational, and ecological mechanisms that vary between norms and between individuals, rather than by a single type of process or distinctive norm system. This pluralistic approach, we suggest, offers a methodologically sound point of departure for a fruitful and rigorous science of social norms.
The objective of this paper is to present a proof of a classical theorem on Boolean algebras and partial orders that is of current relevance in set theory, for example for applications of the model-construction method called "forcing" (with complete Boolean algebras or with partial orders). The theorem proved is the following: "Every partial order can be extended to a unique complete Boolean algebra (up to isomorphism)", where "to extend" means "to embed densely". The proof uses Dedekind cuts, following Jech's "Set Theory", together with other ideas of the author of this article. In addition, some weak versions of the axiom of choice related to Boolean algebras are formulated, which are also of great importance for research in set theory and model theory, since they yield powerful model-construction techniques, such as the compactness theorem (which allows the construction of non-standard models, etc.) and the ultrafilter theorem, which allows the construction of ultraproducts (these can be used to investigate problems about large cardinals, etc.). Some references to open problems on the subject are presented.
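The theorem described in the abstract can be stated in standard notation as follows; this is a sketch following the usual textbook presentation (as in Jech), not a quotation from the paper:

```latex
% Boolean completion of a partial order. Here 'e[P] is dense in
% B \ {0}' means: every nonzero b in B lies above some e(p).
\begin{theorem}[Boolean completion]
For every partial order $(P,\leq)$ there is a complete Boolean algebra
$B$ and an order-preserving map $e\colon P\to B\setminus\{0\}$ whose
image is dense in $B\setminus\{0\}$; moreover, $B$ is unique up to
isomorphism with this property. (When $P$ is separative, $e$ is an
embedding; in general it factors through the separative quotient of $P$.)
\end{theorem}
```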
Groove, as a musical quality, is an important part of jazz and pop music appreciative practices. Groove talk is widespread among musicians and audiences, and considerable importance is placed on generating and appreciating grooves in music. However, musicians, musicologists, and audiences use groove attributions in a variety of ways that do not track one consistent underlying concept. I argue that there are at least two distinct concepts of groove. On one account, groove is ‘the feel of the music’ and, on the other, groove is the psychological feeling (induced by music) of wanting to move one’s body. Further, I argue that recent work in music psychology shows that these two concepts do not converge on a unified set of musical features. Finally, I also argue that these two concepts play different functional roles in the appreciative practices of jazz and popular music. This should cause us to further consider the mediating role genre plays for aesthetic concepts and provides us with reason for adopting a more communitarian approach to aesthetics which is attentive to the ways in which aesthetic discourse serves the practices of different audiences.
Moral character judgments pervade our everyday social interactions. But are these judgments epistemically reliable? In this paper, I discuss a challenge to the reliability of ordinary virtue and vice attribution that emerges from Christian Miller’s Mixed Traits theory of moral character, which entails that the majority of our ordinary moral character judgments are false. In response to this challenge, I argue that a key prediction of this theory is not borne out by the available evidence; this evidence further suggests that our moral character judgments do converge upon real psychological properties of individuals. I go on to argue that this is because the evidence for the Mixed Traits theory does not capture the kind of compassionate behaviors that ordinary folk really care about. Ultimately, I suggest that our ordinary standards for virtue and vice have a restricted social scope, which reflects the parochial nature of our characterological moral psychology.
For mathematicians interested in foundational problems, mathematical logicians, and philosophers of mathematics, the axiom of choice is an obligatory focus of reflection, since it has been considered essential in the debate among the positions regarded as classical in the philosophy of mathematics (intuitionism, formalism, logicism, platonism), but it has also been fundamental to the development of contemporary mathematics and metamathematics. From a position that privileges mathematical practice, we aim to show the contributions the axiom has made in several fundamental areas of mathematics, its application in first-order logic, as well as a brief description of the relative consistency proofs due to Gödel and Cohen, which established its independence from the Zermelo-Fraenkel (ZF) axiom system. With all of the above we show how contemporary mathematical practice subscribes to mathematical platonism in the terms of Bernays and Ferreirós. We also review the arguments of Zermelo and Cantor for permitting the use of assumptions in mathematics, which come close to the methods of scientific inquiry and sketch connections with the philosophy of mathematical practice. Finally, we justify the contemporary use of the axiom of choice, advocating a relation of parity between mathematics and philosophy, and we exhibit its continuing relevance by reference to some currently open problems linking the axiom of choice with Ramsey theory.
In mathematical logic there is an open problem about the logical relationship between two weak versions of the Axiom of Choice (AC) that has remained unsolved since roughly the year 2000. These versions concern non-principal ultrafilters and Ramsey properties (Bernstein, polarized, sublattice, Ramsey, floating ordinals, etc.). The first weak version of AC is (A): "There exist non-principal ultrafilters over the set of natural numbers (ℕ)". The second weak version of AC is (B): "There exist ultrafilters over ℕ". It is known that A implies B, but it is unknown whether B implies A. Di Prisco and Henle conjecture in the articles ([1], [2]) that it does not, that is, that B does not imply A; in other words, that A is strictly stronger than, and independent of, B. This has not yet been proved, although attempts have been made for approximately 21 years. A detailed description of this open problem can be found in this talk (given as part of World Logic Day, 14-01-2022) and in the article [3]. [1] C. Di Prisco and J. Henle. "Doughnuts, Floating Ordinals, Square Brackets, and Ultrafilters". Journal of Symbolic Logic 65 (2000) 462-473. [2] C. Di Prisco and J. Henle. "Partitions of the reals and choice". In "Models, algebras and proofs". X. Caicedo and C. M. Montenegro, Eds. Lecture Notes in Pure and Appl. Math, 203, Marcel Dekker, 1999. [3] F. Galindo. "Tópicos de ultrafiltros". Divulgaciones Matemáticas. Vol. 21, No 1-2, 2020.
Plato’s Parmenides and Lysis have a surprising amount in common from a methodological standpoint. Both systematically employ a method that I call ‘exploring both sides’, a philosophical method for encouraging further inquiry and comprehensively understanding the truth. Both have also been held in suspicion by interpreters for containing what looks uncomfortably similar to sophistic methodology. I argue that the methodological connections across these and other dialogues relieve those suspicions and push back against a standard developmentalist story about Plato’s method. This allows for a better understanding of why exploring both sides is explicitly recommended in the Parmenides and its role within Plato’s broader methodological repertoire.
The main objective of this paper is to present the direct proof of the compactness theorem for first-order logic (Γ has a model if and only if every finite subset of Γ has a model) carried out using the model-construction method called "ultraproducts", which in turn uses "ultrafilters". Nowadays it is more common to prove the compactness theorem as a corollary of Gödel's completeness theorem, using proof by contradiction. However, it is also worth studying this direct proof via ultraproducts, because that technique has important applications in contemporary mathematical research, for example in set theory and in analysis. The paper ends with a brief comment on compactness, ultraproducts, large cardinals, and non-standard models.
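The direct proof the abstract describes can be sketched in a few lines; this is the standard ultraproduct argument (as in Chang and Keisler), not a quotation from the paper:

```latex
% Direct ultraproduct proof of compactness (standard sketch).
Let $I=\{\Delta\subseteq\Gamma : \Delta \text{ finite}\}$ and, for each
$\Delta\in I$, choose a model $M_\Delta\models\Delta$. For each
$\varphi\in\Gamma$ put
\[ X_\varphi=\{\Delta\in I : \varphi\in\Delta\}. \]
The family $\{X_\varphi : \varphi\in\Gamma\}$ has the finite
intersection property, so it extends to an ultrafilter $U$ on $I$.
By \L o\'s's theorem,
\[ \textstyle\prod_{\Delta\in I} M_\Delta \,/\, U \models \varphi
   \iff \{\Delta\in I : M_\Delta\models\varphi\}\in U, \]
and since $X_\varphi\subseteq\{\Delta\in I : M_\Delta\models\varphi\}$,
the ultraproduct is a model of every $\varphi\in\Gamma$.
```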
In both Metaphysics Γ 4 and 5 Aristotle argues that Protagoras is committed to the view that all contradictions are true. Yet Aristotle’s arguments are not transparent, and later, in Γ 6, he provides Protagoras with a way to escape contradictions. In this paper I try to understand Aristotle’s arguments. After examining a number of possible solutions, I conclude that the best way of explaining them is to (a) recognize that Aristotle is discussing a number of Protagorean opponents, and (b) import another of Protagoras’ views, namely the claim that there are always two logoi opposed to one another.
The self-worth of political communities is often understood to be an expression of their position in a hierarchy of power; if so, then the desire for self-worth is a source of competition and conflict in international relations. In early modern German natural law theories, one finds the alternative view, according to which duties of esteem toward political communities should reflect the degree to which they fulfill the functions of civil government. The present article offers a case study, examining the views concerning confederation rights and the resulting duties of esteem in diplomatic relations developed by Christoph Besold (1577–1638). Besold defends the view that confederations including dependent communities—such as the Hanseatic League—could fulfill a stabilizing political function. He also uses sixteenth-century conceptions concerning the acquisition of sovereignty rights through prescription of immemorial time. Both strands of argument lead to the conclusion that the envoys of dependent communities can have the right to be recognized as ambassadors, with all the duties of esteem that follow from this recognition.
The Parmenides has been unduly overlooked in discussions of hypothesis in Plato. It contains a unique method for testing first principles, a method I call ‘exploring both sides’. The dialogue recommends exploring the consequences of both a hypothesis and its contradictory and thematizes this structure throughout. I challenge the view of Plato’s so-called ‘method of hypothesis’ as an isolated stage in Plato’s development; instead, the evidence of the Parmenides suggests a family of distinct hypothetical methods, each with its own peculiar aim. Exploring both sides is unique both in its structure and in its aim of testing candidate principles.