Abraham J. Malherbe was one of the most influential New Testament scholars of the past half century. He is especially known for his use of Hellenistic moral philosophy in the interpretation of New Testament texts, in particular the Pauline literature. Whilst the comparative study of New Testament and Greco-Roman material remains a contentious approach in scholarship, Malherbe’s work provides important pointers on how to make such comparisons in a meaningful and reasoned manner, by paying due respect to the integrity of the texts being compared and to the function textual elements have within their own contexts. I discuss the salient features of Malherbe’s approach, focusing in particular on his study of topoi. One of the most significant findings is Malherbe’s emphasis on the dialectical combination of common and individual elements in such topoi, which enabled ancient authors to embed their own texts within the cultural discourse of their time. His approach opens the way to further research on the New Testament within its philosophical context without requiring proof of a genealogical relationship between the texts or authors concerned.
Ever since Chomsky, language has been the paradigmatic example of an innate capacity. Infants only a few months old are aware of the phonetic structure of their mother tongue, such as its stress patterns and phonemes. They can already discriminate words from non-words and acquire a feel for grammatical structure months before they voice their first word. Language reliably develops not only in the face of poor linguistic input, but even without it. In recent years, several scholars have extended this uncontroversial view into the stronger claim that natural language is a human-specific adaptation. As I shall point out, this position is more problematic because of a lack of conceptual clarity over what human-specific cognitive adaptations are, and how they relate to modularity, the notion that mental phenomena arise from several domain-specific cognitive structures. The main aim of this paper is not to discuss whether or not language is an adaptation, but rather to examine the concept of modularity with respect to the evolution and development of natural language.
The force law of Maxwell’s classical electrodynamics does not agree with Newton’s third law of motion (N3LM) in the case of open-circuit magnetostatics. First, a generalized magnetostatics theory is presented that includes two additional physical fields, B_Φ and B_l, defined by scalar functions. The scalar magnetic field B_l mediates a longitudinal Ampère force that balances the transverse Ampère force (also known as the magnetic field force), such that the sum of the two forces agrees with N3LM for all stationary current distributions. Secondary field induction laws are derived; a secondary curl-free electric field E_l is induced by a time-varying scalar magnetic field B_l, which is not described by Maxwell’s electrodynamics. The Helmholtz decomposition is applied to exclude E_l from the total electric field E, resulting in a simpler Maxwell theory. Decoupled inhomogeneous potential equations and their solutions follow directly from this theory, without having to apply a gauge condition. Field expressions are derived from the potential functions that are simpler than, and consistent in the far field with, the Jefimenko fields. However, our simple version of Maxwell’s theory does not satisfy N3LM. Therefore we combine the generalized magnetostatics with the simple version of Maxwell’s electrodynamics, via a generalization of Maxwell’s speculative displacement current. The resulting electrodynamics describes three types of vacuum waves: the Φ wave, the longitudinal electromagnetic (LEM) wave and the transverse electromagnetic (TEM) wave, with phase velocities a, b and c respectively. Power and force theorems are derived, and the force law agrees with Newton’s third law only if the phase velocities satisfy the condition a >> b and b = c. The retarded potential functions can be found without gauge conditions, and four retarded field expressions are derived that have three near-field terms and six far-field terms.
All six far-field terms are explained as the mutual induction of two free fields. Our theory supports Rutherford’s solution of the 4/3 problem of electromagnetic mass, which requires an extra longitudinal electromagnetic momentum. Our generalized classical electrodynamics might spawn new physics experiments and electrical engineering applications, such as new photoelectric effects based on Φ or LEM radiation, and the conversion of natural Φ or LEM radiation into useful electricity, in the footsteps of N. Tesla and T. H. Moray.
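For background (our gloss, not part of the abstract above): the N3LM tension at issue is the standard contrast between the Grassmann (Lorentz) force between current elements, which is not antisymmetric under exchange of the elements, and Ampère’s original force law, which is central and antisymmetric:

```latex
% Force on current element 2 due to element 1; \hat{r} points from 1 to 2.
% Grassmann (Lorentz) form: not antisymmetric under 1 <-> 2.
d^2\mathbf{F}^{\mathrm{G}}_{21}
  = \frac{\mu_0 I_1 I_2}{4\pi r^2}\,
    d\mathbf{l}_2 \times \left( d\mathbf{l}_1 \times \hat{\mathbf{r}} \right)
% Ampère's original form: central and antisymmetric, satisfies N3LM.
d^2\mathbf{F}^{\mathrm{A}}_{21}
  = -\frac{\mu_0 I_1 I_2}{4\pi r^2}\,
    \hat{\mathbf{r}} \left[ 2\,(d\mathbf{l}_1 \cdot d\mathbf{l}_2)
      - 3\,(d\mathbf{l}_1 \cdot \hat{\mathbf{r}})(d\mathbf{l}_2 \cdot \hat{\mathbf{r}}) \right]
```

The two laws yield the same total force once integrated around closed circuits, but they differ for open-circuit elements, which is where the abstract’s longitudinal force term does its work.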
Maxwell’s Classical Electrodynamics (MCED) suffers from several inconsistencies: (1) the Lorentz force law of MCED violates Newton’s Third Law of Motion (N3LM) in the case of stationary and divergent or convergent current distributions; (2) the general Jefimenko electric field solution of MCED shows two longitudinal far fields that are not waves; (3) the electrodynamic energy–momentum ratio of a charged sphere in uniform motion has an incorrect factor of 4/3. A consistent General Classical Electrodynamics (GCED) is presented that is based on Whittaker’s reciprocal force law, which satisfies N3LM. The Whittaker force is expressed as a scalar magnetic field force, added to the Lorentz force. GCED is consistent only if it is assumed that the electric potential velocity in vacuum, a, is much greater than c (a ≫ c); GCED reduces to MCED if we assume a = c. Longitudinal electromagnetic waves and superluminal longitudinal electric potential waves are predicted. This theory has been verified by seemingly unrelated experiments, such as the detection of superluminal Coulomb fields and longitudinal Ampère forces, and has a wide range of electrical engineering applications.
A series of papers on different aspects of practical knowledge by Roderick Chisholm, Rudolf Haller, J. C. Nyiri, Eva Picardi, Joachim Schulte, Roger Scruton, Barry Smith and Johan Wrede.
This research note is penned in honour of Johan Vander Hoeven on his retirement as Editor-in-Chief of Philosophia Reformata. It acknowledges his helpful contribution to the critical exposition of phenomenology. I first read his work almost 30 years ago, and it challenged me to develop a sympathetic Christian critique of this philosophical movement. This note offers some reflection upon the Christian interpretation of phenomenology. In particular, it raises questions about how some famous phrases, one by Dilthey, the other by Husserl, have been construed.
The Catastrophe Theory of René Thom and E. C. Zeeman suggests a mathematical interpretation of certain aspects of Hegelian and Marxist dialectics. Specifically, the three ‘classical’ dialectical principles, (1) the transformation of quantity into quality, (2) the unity and struggle of opposites, and (3) the negation of negation, can be modeled with the seven ‘elementary catastrophes’ given by Thom, especially the catastrophes known as the ‘cusp’ and the ‘butterfly’. Far from being empty metaphysics or scholasticism, as critics have argued, the dialectical principles embody genuine insights into a class of phenomena, insights which can now be expressed within a precise mathematical formalism. This fact does not, however, support the claim that these principles, possibly modified or supplemented, constitute the laws of motion for human thought and for natural and social processes, or even just the last of these.
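For reference (a standard gloss, not drawn from the abstract itself): the two elementary catastrophes named here have the normal forms

```latex
% Cusp: one state variable x, two control parameters (a, b).
V_{\mathrm{cusp}}(x) = x^4 + a x^2 + b x
% Butterfly: one state variable x, four control parameters (a, b, c, d).
V_{\mathrm{butterfly}}(x) = x^6 + a x^4 + b x^3 + c x^2 + d x
% Cusp bifurcation set (boundary of the bistable region, for the form above):
8 a^3 + 27 b^2 = 0
```

For the cusp, two stable equilibria coexist inside the region bounded by the bifurcation set; a smooth (‘quantitative’) change of the controls (a, b) across that boundary forces a discontinuous (‘qualitative’) jump of the equilibrium state, which is the shape of the quantity-into-quality modeling the abstract describes.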
Thom Brooks's Hegel's Political Philosophy: A Systematic Reading of the Philosophy of Right presents a very clear and methodologically self-conscious series of discussions of key topics within Hegel's classic text. As one might expect for a ‘systematic’ reading, the main body of Brooks's text commences with an opening chapter on Hegel's system. Then follow seven chapters, the topics of which are encountered sequentially as one reads through the Philosophy of Right. Brooks's central claim is that too often Hegel's theories or views on any of these topics are misunderstood because of a tendency to isolate the relevant passages from the encompassing structure of the Philosophy of Right itself, and, in turn, from Hegel's system of philosophy as a whole, with its logical underpinnings. Brooks is clearly right in holding that Hegel had intended the Philosophy of Right to be read against the background of ‘the system’ and the ‘logic’ articulating it — nobody doubts that — but there is a further substantive issue here. Should contemporary readers heed Hegel's advice? Brooks's answer is emphatically in the affirmative, and what results is a series of illuminating discussions in which he makes a case for his own interpretations on the basis of systematic considerations, presented against a range of alternatives taken from the contemporary secondary literature, which is amply covered, often in the extensive endnotes to the book.
Derek Parfit’s argument against the platitude that identity is what matters in survival does not work given his intended reading of the platitude, namely, that what matters in survival to some future time is being identical with someone who is alive at that time. I develop Parfit’s argument so that it works against the platitude on this intended reading.
Standard definitions of causal closure focus on where the causes in question are. In this paper, the focus is changed to where they are not. Causal closure is linked to the principle that no cause belonging to another universe causes an event in a particular universe. This view permits one universe to be affected by another via an interface. An interface between universes can be seen as a domain that violates the suggested account of causal closure, suggesting a view in which universes are causally closed whereas interfaces are not. On this basis, universes are not affected by other universes directly but rather indirectly.
Whether or not capitalism is compatible with ethics is a long-standing dispute. We take up an approach to virtue ethics inspired by Adam Smith and consider how market competition influences the virtues most associated with modern commercial society. Up to a point, competition nurtures and supports such virtues as prudence, temperance, civility, industriousness and honesty. But there are also various mechanisms by which competition can have deleterious effects on the institutions and incentives necessary for sustaining even these most commercially friendly of virtues. It is often supposed that if competitive markets are good, more competition must always be better. However, in the long run, competition-enhancing policies that neglect the nurturing and support of the bourgeois virtues may undermine the continued flourishing of modern commercial society.
Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.
In this paper we shed new light on the Argument from Disagreement by putting it to the test in a computer simulation. According to this argument, widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by any moral facts, either because no such facts exist or because they are epistemically inaccessible or inefficacious for some other reason. Our simulation shows that if our moral opinions were influenced at least a little bit by moral facts, we would quickly have reached consensus, even if our moral opinions were affected by factors such as false authorities, external political shifts, and random processes. Therefore, since no such consensus has been reached, the simulation gives us increased reason to take the Argument from Disagreement seriously. Our conclusion is, however, not decisive; the simulation also indicates what assumptions one has to make in order to reject the Argument from Disagreement. The simulation algorithm we use builds on the work of Hegselmann and Krause (J Artif Soc Soc Simul 5(3), 2002; J Artif Soc Soc Simul 9(3), 2006).
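The Hegselmann–Krause dynamics the abstract refers to can be sketched in a few lines. The version below is a minimal illustration under our own assumptions (uniform random initial opinions, a single truth value tau, identical truth-attraction strength alpha for all agents); it omits the paper's perturbing factors such as false authorities, political shifts and noise, and all names and parameter values are ours, not the authors'.

```python
import random

def hk_truth_step(opinions, tau, alpha, eps):
    """One synchronous update of a Hegselmann-Krause model with truth attraction.

    Each agent moves toward the mean of the opinions within its confidence
    interval eps, blended with the truth value tau at strength alpha.
    """
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        local_mean = sum(peers) / len(peers)  # peers always includes the agent itself
        new.append(alpha * tau + (1 - alpha) * local_mean)
    return new

def simulate(n=50, tau=0.7, alpha=0.1, eps=0.2, steps=100, seed=1):
    """Run the toy dynamics from random initial opinions in [0, 1]."""
    random.seed(seed)
    opinions = [random.random() for _ in range(n)]
    for _ in range(steps):
        opinions = hk_truth_step(opinions, tau, alpha, eps)
    return opinions
```

With any alpha > 0, each update contracts every agent's distance from tau by at least the factor (1 − alpha), so consensus on the truth emerges quickly, which is the behavior the abstract contrasts with observed persistent moral disagreement.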
Epistemic peer disagreement raises interesting questions, both in epistemology and in philosophy of science. When is it reasonable to defer to the opinion of others, and when should we hold fast to our original beliefs? What can we learn from the fact that an epistemic peer disagrees with us? A question that has received relatively little attention in these debates is the value of epistemic peer disagreement—can it help us to further epistemic goals, and, if so, how? We investigate this through a recent case in paleoanthropology: the debate on the taxonomic status of Homo floresiensis remains unresolved, with some authors arguing that the fossils represent a novel hominin species, and others claiming that they are Homo sapiens with congenital growth disorders. Our examination of this case in the recent history of science provides insights into the value of peer disagreement, indicating that it is especially valuable if one does not straightaway defer to a peer’s conclusions, but nevertheless remains open to a peer’s evidence and arguments.
Thom Brooks criticizes utilitarian and retributive theories of punishment but argues that utilitarian and retributive goals can be incorporated into a coherent and unified theory of punitive restoration, according to which punishment is a means of reintegrating criminals into society and restoring rights. I point to some difficulties with Brooks’s criticisms of retributive and utilitarian theories, and argue that his theory of punitive restoration is not unified or coherent. I argue further that a theory attempting to capture the complex set of rules and behaviors that constitute the practice of legal punishment cannot persuasively be unified and coherent: legitimate features of the practice advance goals and promote values that in some cases conflict.
This paper takes a cognitive perspective to assess the significance of some Late Palaeolithic artefacts (sculptures and engraved objects) for philosophical concepts of art. We examine cognitive capacities that are necessary to produce and recognize objects that are denoted as art. These include the ability to attribute and infer design (design stance), the ability to distinguish between the materiality of an object and its meaning (symbol-mindedness), and an aesthetic sensitivity to some perceptual stimuli. We investigate to what extent these cognitive processes played a role in the production and appreciation of some recently discovered Palaeolithic artefacts.
The present paper describes an ‘ontological square’ mapping possible ways of combining the domains and converse domains of the relations of inherence and denomination. In the context of expounding and extending medieval appropriations of elements drawn from Aristotle’s Categories for theological purposes, the paper uses this square to examine different ways of defining Substance-terms and Accident-terms by reference to inherence and denomination within the constraints imposed by the doctrine of the Trinity. These different approaches are related to particular texts of thinkers including Bonaventure and Gilbert of Poitiers.
Severe poverty is a major global problem about risk and inequality. What, if any, is the relationship between equality, fairness and responsibility in an unequal world? I argue for four conclusions. The first is the moral urgency of severe poverty. We have too many global neighbours who exist in a state of emergency and whose suffering is intolerable. The second is that severe poverty is a problem concerning global injustice that is relevant, but not restricted, to questions about responsibility. Even if none were responsible, this would not eliminate all compelling claims to provide assistance. The third is that severe poverty represents an inequality too far; it is a condition of extremity with denial of basic needs. The fourth is that there is a need for an approach that captures all relevant cases – and the capabilities approach and the connection theory of remedial responsibilities are highlighted as having special promise.
Speculative fiction, such as science fiction and fantasy, has a unique epistemic value. We examine similarities and differences between speculative fiction and philosophical thought experiments in terms of how they are cognitively processed. They are similar in their reliance on mental prospection, but dissimilar in that fiction is better able to draw in readers (transportation) and elicit emotional responses. By its use of longer, emotionally poignant narratives and seemingly irrelevant details, speculative fiction allows for a better appraisal of the consequences of philosophical ideas than thought experiments.
Given reductionism about people, personal persistence must fundamentally consist in some kind of impersonal continuity relation. Typically, these continuity relations can hold from one to many. And, if they can, the analysis of personal persistence must include a non-branching clause to avoid non-transitive identities or multiple occupancy. It is far from obvious, however, what form this clause should take. This paper argues that previous accounts are inadequate and develops a new proposal.
[from the publisher's website] Questions about the existence and attributes of God form the subject matter of natural theology, which seeks to gain knowledge of the divine by relying on reason and experience of the world. Arguments in natural theology rely largely on intuitions and inferences that seem natural to us, occurring spontaneously—at the sight of a beautiful landscape, perhaps, or in wonderment at the complexity of the cosmos—even to a nonphilosopher. In this book, Helen De Cruz and Johan De Smedt examine the cognitive origins of arguments in natural theology. They find that although natural theological arguments can be very sophisticated, they are rooted in everyday intuitions about purpose, causation, agency, and morality. Using evidence and theories from disciplines including the cognitive science of religion, evolutionary ethics, evolutionary aesthetics, and the cognitive science of testimony, they show that these intuitions emerge early in development and are a stable part of human cognition.

De Cruz and De Smedt analyze the cognitive underpinnings of five well-known arguments for the existence of God: the argument from design, the cosmological argument, the moral argument, the argument from beauty, and the argument from miracles. Finally, they consider whether the cognitive origins of these natural theological arguments should affect their rationality.
Despite their success in describing and predicting cognitive behavior, the plausibility of so-called ‘rational explanations’ is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem of intractability is solved by recourse to ‘as if’ explanations critically depends, inter alia, on the semantics of the ‘as if’ connective. We examine the five most sensible explications in the literature, and conclude that none of them circumvents the problem. As a result, rational ‘as if’ explanations must obey the minimal computational constraint of tractability.
To what extent can humorism be a legitimate disposition toward the Absurd? The Absurd is born from the insurmountable contradiction between one’s ceaseless striving and the absence of an ultimate resolution – or, as I prefer to call it, the ‘dissolution of resolution’. Humoristic Absurdism is the commitment to a pattern of humorous responses to the Absurd, which regard this absurd condition, as well as its manifestation in absurd situations, as a comical phenomenon. Although the humoristic disposition seems promising, by virtue of humor’s recognition of incongruity and its denial of an ultimate resolution, and succeeds in countering three major objections, it falls prey to the claim that comprehensive humorism portrays as frivolous what is earnest, thereby renouncing the gravity of the desire for ultimate resolution, which is fundamental to the notion of the Absurd. In an attempt to explore alternative roles for humor in a legitimate disposition toward the Absurd, Metamodern Absurdism is suggested, which revolves around a post-ironic oscillation between humor and earnestness. In short, only in a role limited to one pole of an oscillatory pattern of responses can humor be integrated in a legitimate disposition toward the Absurd.
Benefit/cost analysis is a technique for evaluating programs, procedures, and actions; it is not a moral theory. There is significant controversy over the moral justification of benefit/cost analysis. When a procedure for evaluating social policy is challenged on moral grounds, defenders frequently seek a justification by construing the procedure as the practical embodiment of a correct moral theory. This has the apparent advantage of avoiding difficult empirical questions concerning such matters as the consequences of using the procedure. So, for example, defenders of benefit/cost analysis are frequently tempted to argue that this procedure just is the calculation of moral rightness – perhaps that what it means for an action to be morally right is just for it to have the best benefit-to-cost ratio given the accounts of “benefit” and “cost” that BCA employs. They suggest, in defense of BCA, that they have found the moral calculus – Bentham's “unabashed arithmetic of morals.” To defend BCA in this manner is to commit oneself to one member of a family of moral theories and, also, to the view that if a procedure is the direct implementation of a correct moral theory, then it is a justified procedure. Neither of these commitments is desirable, and so the temptation to justify BCA by direct appeal to a B/C moral theory should be resisted; it constitutes an unwarranted shortcut to moral foundations – in this case, an unsound foundation. Critics of BCA are quick to point out the flaws of B/C moral theories, and to conclude that these undermine the justification of BCA. But the failure to justify BCA by a direct appeal to B/C moral theory does not show that the technique is unjustified. There is hope for BCA, even if it does not lie with B/C moral theory.
Despite their divergent metaphysical assumptions, Reformed and evolutionary epistemologists have converged on the notion of proper basicality. Where Reformed epistemologists appeal to God, who has designed the mind in such a way that it successfully aims at the truth, evolutionary epistemologists appeal to natural selection as a mechanism that favors truth-preserving cognitive capacities. This paper investigates whether Reformed and evolutionary epistemological accounts of theistic belief are compatible. We will argue that their chief incompatibility lies in the noetic effects of sin and what may be termed the noetic effects of evolution, systematic tendencies wherein human cognitive faculties go awry. We propose a reconceptualization of the noetic effects of sin to mitigate this tension.
We have all been told of the death of grand narratives. We have been told that the days of asking eternal metaphysical questions in philosophy are long since over. When Wittgenstein’s (1953/2009, p. 174) famous spade hit bedrock, it reminded us that we had better stop wasting our time on lofty questions without answers. Foucault (1970) prompted us to recall Borges’ story of a certain Chinese encyclopedia, showing us that there are many ways of ordering the world and that each way changes the rules of the game a little bit. We found that history was contingent and that hierarchies, however firmly built, would all crumble in the end. In its place was the slightly disorienting feeling following the postmodernist’s proclamation of ‘the elusiveness of meaning and knowledge’ (Kirby, 2017, p. 5).
According to the Psychological-Continuity Account of What Matters, you are justified in having special concern for the well-being of a person at a future time if and only if that person will be psychologically continuous with you as you are now. On some versions of the account, the psychological continuity is required to be temporally ordered, whereas, on other versions, it is allowed to be temporally unordered. In this paper, I argue that the account is implausible if the psychological continuity is allowed to be temporally unordered. I also argue that, if the psychological continuity is required to be temporally ordered, it cannot plausibly be purely psychological (in the sense that the psychological continuity is not required to be caused through spatio-temporal continuity of a brain). The upshot is that no plausible version of the Psychological-Continuity Account of What Matters is purely psychological. So psychological continuity is not what matters in survival.
The Levelling-Down Objection is a standard objection to monistic egalitarian theories where equality is the only thing that has intrinsic value. Most egalitarians, however, are value pluralists; they hold that, in addition to equality being intrinsically valuable, the egalitarian currency in which we are equal or unequal is also intrinsically valuable. In this paper, I shall argue that the Levelling-Down Objection still minimizes the weight that the intrinsic badness of inequality could have in the overall intrinsic evaluation of outcomes, given a certain way of measuring the badness of inequality, namely, the Additive Individual-Complaints Measure.
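A toy illustration (our own numbers, and an illustrative gap-to-the-best-off version of a complaints measure, not necessarily the exact measure the paper discusses): let two people be at welfare levels (10, 4), and let the badness of inequality be the sum of each worse-off person’s gap to the best-off person.

```latex
I(10, 4) = 10 - 4 = 6, \qquad I(4, 4) = 0
```

Levelling down from (10, 4) to (4, 4) eliminates the inequality complaint entirely while benefiting no one; the Levelling-Down Objection presses that such a change cannot make the outcome better in any respect, and the paper asks how much weight the badness measured this way can then carry in the overall evaluation of outcomes.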
In this paper we begin categorizing a plurality of possible worlds on the basis of whether or not they permit ontologically different things to be causally connected. We build this work on the dual principle that all universes are causally closed, either because no universe causes anything outside itself or because no universe has anything in it that is caused by another universe.
Moral obligation and permissibility are usually thought to be interdefinable. Following the pattern of the duality definitions of necessity and possibility, we have that something’s being permissible could be defined as its not being obligatory to not do it. And that something’s being obligatory could be defined as its not being permissible to not do it. In this paper, I argue that neither direction of this alleged interdefinability works. Roughly, the problem is that a claim that some act is obligatory or permissible entails that there is a moral law, whereas a negative claim that some act is not obligatory or not permissible does not. Nevertheless, one direction of the interdefinability can potentially be salvaged. I argue that, if we do not require the conceptual possibility of moral dilemmas, then there is a way to plausibly define obligation in terms of permissibility. I conclude that permissibility is the only feasible deontic primitive.
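The duality pattern the abstract appeals to is the standard one from modal logic, with permissibility and obligation mirroring possibility and necessity:

```latex
P\varphi := \lnot O \lnot \varphi, \qquad O\varphi := \lnot P \lnot \varphi
% on the model of the alethic dualities:
\Diamond\varphi := \lnot \Box \lnot \varphi, \qquad \Box\varphi := \lnot \Diamond \lnot \varphi
```

The paper’s claim is that, unlike in the alethic case, neither defining direction is innocent here, because positive deontic claims carry a commitment to a moral law that their negations lack.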
The impact of science on ethics has long been the subject of intense debate. Although there is a growing consensus that science can describe morality and explain its evolutionary origins, there is less consensus about the ability of science to provide input to the normative domain of ethics. Whereas defenders of a scientific normative ethics appeal to naturalism, its critics either see the naturalistic fallacy committed or argue that the relevance of science to normative ethics remains undemonstrated. In this paper, we argue that current scientific normative ethicists commit no fallacy, that criticisms of scientific ethics contradict each other, and that scientific insights are relevant to normative inquiries by informing ethics about the options open to the ethical debate. Moreover, when normative ethics is conceived as nonfoundational, science can be used to evaluate every possible norm. This stands in contrast to foundational ethics, in which some norms remain beyond scientific inquiry. Finally, we state that a difference in conception of normative ethics underlies the disagreement between proponents and opponents of a scientific ethics. Our argument is based on and preceded by a reconsideration of the notions of the naturalistic fallacy and foundational ethics. This argument differs from previous work in scientific ethics: whereas earlier work stressed the philosophical project of naturalizing the normative, here we focus on concrete consequences of biological findings for normative decisions, that is, on the day-to-day normative relevance of these scientific insights.
Persons think. Bodies, time-slices of persons, and brains might also think. They have the necessary neural equipment. Thus, there seems to be more than one thinker in your chair. Critics assert that this is too many thinkers and that we should reject ontologies that allow more than one thinker in your chair. I argue that cases of multiple thinkers are innocuous and that there is not too much thinking. Rather, the thinking shared between, for example, persons and their bodies is exactly what we should expect at the intersection of part sharing and the supervenience of the mental on the physical. I end by responding to the overcrowding objection, the personhood objection, the personal-pronoun reference problem and the epistemic objection.
Empirical research in the psychology of nature appreciation suggests that humans across cultures tend to evaluate nature in positive aesthetic terms, including a sense of beauty and awe. They also frequently engage in joint attention with other persons, whereby they are jointly aware of sharing attention to the same event or object. This paper examines how, from a natural theological perspective, delight in natural beauty can be conceptualized as a way of joining attention to creation. Drawing on an analogy between art and creation, we propose that aesthetic appreciation of nature may provide theists with a unique phenomenological insight into God's creative intentions, which are embodied in the physical beauty of creation. We suggest two directions in which this way of looking at the natural world can be fleshed out: in a spontaneous way that does not take into account background information, and with the help of science.
A philosophical exchange broadly inspired by the characters of Berkeley’s Three Dialogues. Hylas is the realist philosopher: the view he stands up for reflects a robust metaphysic that is reassuringly close to common sense, grounded on the twofold persuasion that the world comes structured into entities of various kinds and at various levels and that it is the task of philosophy, if not of science generally, to “bring to light” that structure. Philonous, by contrast, is the anti-realist philosopher (though not necessarily an idealist): his metaphysic is stark, arid, dishearteningly bone-dry, and stems from the conviction that a great deal of the structure that we are used to attributing to the world out there lies, on closer inspection, in our head, in our “organizing practices”, in the complex system of concepts and categories that underlie our representation of experience and our need to represent it that way.
Should the existence of moral disagreement reduce one’s confidence in one’s moral judgments? Many have claimed that it should not. They claim that we should be morally self-sufficient: that one’s moral judgment and moral confidence ought to be determined entirely by one’s own reasoning. Others’ moral beliefs ought not impact one’s own in any way. I claim that moral self-sufficiency is wrong. Moral self-sufficiency ignores the degree to which moral judgment is a fallible cognitive process like all the rest. In this paper, I take up two possible routes to moral self-sufficiency. First, I consider Robert Paul Wolff’s argument that an autonomous being is required to act from his own reasoning. Does Wolff’s argument yield moral self-sufficiency? Wolff’s argument does forbid unthinking obedience. But it does not forbid guidance: the use of moral testimony to glean evidence about nonmoral states of affairs. An agent can use the existence of agreement or disagreement as evidence concerning the reliability of their own cognitive abilities, which is entirely nonmoral information. Corroboration and discorroboration yield nonmoral evidence, and no reasonable theory of autonomy can forbid the use of nonmoral evidence. In fact, by using others to check on their own cognitive functionality, an agent is reasoning better and is thereby more autonomous. Second, I consider Philip Nickel’s requirement that moral judgment proceed from personal understanding. I argue that the requirement of understanding does forbid unthinking obedience, but not discorroboration. When an agent reasons morally, and then reduces confidence in their judgments through discorroboration, they are in full contact with the moral reasons, and with the epistemic reasons. Discorroboration yields more understanding, not less.
The current debate over aesthetic testimony typically focuses on cases of doxastic repetition: cases where an agent, on receiving aesthetic testimony that p, acquires the belief that p without qualification. I suggest that we broaden the set of cases under consideration. I consider a number of cases of action from testimony, including reconsidering a disliked album based on testimony, and choosing an artistic educational institution from testimony. But these cases cannot simply be explained by supposing that testimony is usable for action, but unusable for doxastic repetition. I consider a new asymmetry in the usability of aesthetic testimony. Consider the following cases: we seem unwilling to accept somebody hanging a painting in their bedroom based merely on testimony, but entirely willing to accept hanging a painting in a museum based merely on testimony. The switch in intuitive acceptability seems to track, in some complicated way, the line between public life and private life. These new cases weigh against a number of standing theories of aesthetic testimony. I suggest that we look further afield, and that something like a sensibility theory, in the style of John McDowell and David Wiggins, will prove to be the best fit for our intuitions for the usability of aesthetic testimony. I propose the following explanation for the new asymmetry: we are willing to accept testimony about whether a work merits being found beautiful; but we are unwilling to accept testimony about whether something actually is beautiful.
Some hold that musical works are fusions of, or coincide with, their performances. But if performances contain wrong notes, won't works inherit that property? We say ‘no’.
Intersectionality has become the primary analytic tool that feminist and anti-racist scholars deploy for theorizing identity and oppression. This paper exposes and critically interrogates the assumptions underpinning intersectionality by focusing on four tensions within intersectionality scholarship: the lack of a defined intersectional methodology; the use of black women as quintessential intersectional subjects; the vague definition of intersectionality; and the empirical validity of intersectionality. Ultimately, my project does not seek to undermine intersectionality; instead, I encourage both feminist and anti-racist scholars to grapple with intersectionality's theoretical, political, and methodological murkiness to construct a more complex way of theorizing identity and oppression.
Sleep onset is associated with marked changes in behavioral, physiological, and subjective phenomena. In daily life, though, subjective experience is the main criterion by which we identify it. Yet very few studies have focused on these experiences. This study seeks to identify the subjective variables that reflect sleep onset. Twenty young subjects took an afternoon nap in the laboratory while polysomnographic recordings were made. They were awakened four times in order to assess subjective experiences that correlate with (1) the appearance of slow eye movements, (2) the initiation of stage 1 sleep, (3) the initiation of stage 2 sleep, and (4) 5 min after the start of stage 2 sleep. A logistic regression identified control over and logic of thought as the two variables that predict the perception of having fallen asleep. For sleep perception, these two variables accurately classified 91.7% of the cases; for the waking state, 84.1%.
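The classification reported above can be illustrated with a minimal logistic-regression sketch. The ratings below are hypothetical numbers invented for illustration (the abstract does not give the study's scales or data), and the fitting routine is a plain batch-gradient-descent implementation, not the authors' analysis.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the log-loss; returns weights [bias, w1, w2]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if sigmoid(z) >= 0.5 else 0

# Hypothetical self-ratings (0-10) of control over thought and logic of thought;
# label 1 = subject reported having fallen asleep (low control/logic).
X = [[8, 9], [7, 8], [9, 7], [8, 8], [2, 3], [1, 2], [3, 1], [2, 2]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

w = fit_logistic(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On this toy, separable data the fitted model recovers the pattern that low control and logic of thought predict the perception of having fallen asleep.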
Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
The conditional interpretation of general categorical statements like ‘All men are animals’ as universally quantified material conditionals ‘For all x, if x is F, then x is G’ suggests that the logical structure of law statements is conditional rather than categorical. Disregarding the problem that the universally quantified material conditional is trivially true whenever there are no xs that are F, there are some reasons to be sceptical of Frege’s equivalence between categorical and conditional expressions.

Now many philosophers will claim that the material conditional interpretation of law statements, disposition ascriptions, or any causal claim is generally accepted as wrong and outdated. Still, there seem to be some basic logical assumptions that are shared by most of the participants in the debate on causal matters, which at least stem from the traditional truth-functional interpretation of conditionals. This is indicated by the vocabulary of the philosophical debate on causation, where one often speaks of ‘counterfactuals’, ‘possible worlds’ and ‘necessity’ without being explicit about whether or to what extent one accepts the logical-technical definition of these notions. To guarantee a non-Humean and non-extensional approach to causal relations, it is therefore important to be aware of the logical and metaphysical implications of the technical vocabulary.

In this paper we want to show why extensional logic cannot deal with causal relations. Via a logical analysis of law-like statements ‘All Fs are Gs’ we hope to throw some new light on interrelated notions like causation, laws, induction, hypotheticality and modality. If successful, our analysis should be of relevance for a deeper understanding of any type of causal relation, whether we understand them to be laws, dispositions, singulars or categoricals.
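The vacuous-truth problem noted in the first paragraph (the universally quantified material conditional comes out trivially true when nothing is F) can be made concrete in a few lines of Python; the unicorn example is my own throwaway illustration, not material from the paper:

```python
def all_F_are_G(domain, F, G):
    """Universally quantified material conditional: for every x, F(x) -> G(x)."""
    return all((not F(x)) or G(x) for x in domain)

is_unicorn = lambda x: x["kind"] == "unicorn"
has_horn = lambda x: x["horned"]

# A domain containing no unicorns at all:
domain = [{"kind": "horse", "horned": False}, {"kind": "goat", "horned": True}]

# 'All unicorns have horns' comes out true here, but only vacuously,
# because the antecedent F(x) is false for every member of the domain.
vacuously_true = all_F_are_G(domain, is_unicorn, has_horn)
```

The same check would also count ‘All unicorns lack horns’ as true over this domain, which is exactly why the material-conditional reading sits poorly with law statements.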
This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. The relevant type of conditionality is found in some well-defined group of conditional statements. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading of both conditionality and conditional probability in general: an ontological account of conditionals that includes an irreducible dispositional connection between the antecedent and consequent conditions, and on which the conditional has to be treated as an indivisible whole rather than as compositional. It is not offered as a fully developed theory of conditionality but can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
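For reference, the RATIO analysis the paper targets is the standard definition P(A|B) = P(A & B) / P(B). A minimal sketch, with a fair-die example of my own choosing:

```python
from fractions import Fraction

def ratio_conditional_probability(p_a_and_b, p_b):
    """The RATIO analysis: P(A|B) = P(A & B) / P(B), undefined when P(B) = 0."""
    if p_b == 0:
        raise ZeroDivisionError("P(A|B) is undefined on the ratio analysis when P(B) = 0")
    return p_a_and_b / p_b

# Fair die: A = 'even', B = 'greater than 3'.
# P(A & B) = 2/6 (outcomes 4 and 6), P(B) = 3/6 (outcomes 4, 5, 6).
p = ratio_conditional_probability(Fraction(2, 6), Fraction(3, 6))  # 2/3
```

Note that the definition is purely arithmetical: it imposes no connection between A and B, which is the feature the paper's ontological reading pushes against.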
According to cognitive science of religion (CSR) people naturally veer toward beliefs that are quite divergent from Anselmian monotheism or Christian theism. Some authors have taken this view as a starting point for a debunking argument against religion, while others have tried to vindicate Christian theism by appeal to the noetic effects of sin or the Fall. In this paper, we ask what theologians can learn from CSR about the nature of the divine, by looking at the CSR literature and what it identifies as commonalities across religions. We use a pluralist, non-confessional approach to outline properties of the divine. We connect our approach to Hick’s religious pluralism, Ramakrishna’s realization of God through multiple spiritual paths, and Gellman’s inexhaustible plenitude.
Humans have a tendency to reason teleologically. This tendency is more pronounced under time pressure, in people with little formal schooling and in patients with Alzheimer’s. This has led some cognitive scientists of religion, notably Kelemen, to call intuitive teleological reasoning promiscuous, by which they mean teleology is applied to domains where it is unwarranted. We examine these claims using Kant’s idea of the transcendental illusion in the first Critique and his views on the regulative function of teleological reasoning in the third Critique. We examine whether a Kantian framework can help resolve the tension between the apparent promiscuity of intuitive teleology and its role in human reasoning about biological organisms and natural kinds.
This Element focuses on three challenges of evolution to religion: teleology, human origins, and the evolution of religion itself. First, religious worldviews tend to presuppose a teleological understanding of the origins of living things, but scientists mostly understand evolution as non-teleological. Second, religious and scientific accounts of human origins do not align in a straightforward sense. Third, evolutionary explanations of religion, including religious beliefs and practices, may cast doubt on their justification. We show how these tensions arise and offer potential responses for religion. Individual religions can meet these challenges, if some of their metaphysical assumptions are adapted or abandoned.
We advocate an analysis of meaning that departs from the pragmatic slogan that “meaning is use”. However, in order to avoid common missteps, this claim is in dire need of qualification. We argue that linguistic meaning does not originate from language use as such; therefore we cannot base a theory of meaning only on use. It is important not to neglect the fact that language is ultimately reliant on non-linguistic factors. This might seem to oppose the aforementioned slogan, but it will be made clear how this opposition is chimerical. We propose that meaning traces back to the relation between subjectivity and intersubjectivity, which is at the heart of the matter by inducing strong interdependency between intention and interpretation. But to base a full-fledged analysis of meaning in communicative dyads alone is also insufficient. What needs to be further acknowledged is that meaning sharing becomes regulated by the interactions of a community. This we call consensus, and it is at play in framing all communicative acts. Hence arises a triadic structure: we call this the meaning-sharing network. The main motivation behind this model is to capture that meaning is not “in the head”, nor is it autonomous of the individual members that constitute it.
What sort of logic do we get if we adopt a supervaluational semantics for vagueness? As it turns out, the answer depends crucially on how the standard notion of validity as truth preservation is recast. There are several ways of doing that within a supervaluational framework, the main alternative being between “global” construals (e.g., an argument is valid iff it preserves truth-under-all-precisifications) and “local” construals (an argument is valid iff, under all precisifications, it preserves truth). The former alternative is by far the more popular, but I argue in favor of the latter, for (i) it does not suffer from a number of serious objections, and (ii) it makes it possible to restore global validity as a defined notion.
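The global/local contrast can be made concrete in a toy model. The ‘determinately’ operator D and the two precisifications below are my own illustration of a standard observation (the inference from p to Dp is globally but not locally valid), not material from the abstract:

```python
# Two admissible precisifications of a borderline sentence p:
precisifications = [{"p": True}, {"p": False}]

def supertrue(phi):
    """Supervaluationist truth: true under every admissible precisification."""
    return all(phi(v) for v in precisifications)

p = lambda v: v["p"]
Dp = lambda v: supertrue(p)   # 'Determinately p' ignores the local precisification

# Global validity: supertruth of the premise guarantees supertruth of the conclusion.
global_valid = (not supertrue(p)) or supertrue(Dp)

# Local validity: under EACH precisification, premise-truth yields conclusion-truth.
local_valid = all((not p(v)) or Dp(v) for v in precisifications)

# p-to-Dp comes out globally valid (p is not supertrue, so the test is passed
# vacuously) but locally invalid (p holds under the first precisification, Dp does not).
```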