In this paper we consider conditional random quantities (c.r.q.'s) in the setting of coherence. Based on the betting scheme, a c.r.q. X|H is not looked at as a restriction but, in a more extended way, as \(XH + \mathbb{P}(X|H)H^c\); in particular (the indicator of) a conditional event E|H is looked at as EH + P(E|H)H^c. This extended notion of c.r.q. allows algebraic developments among c.r.q.'s even if the conditioning events are different; then, for instance, we can give a meaning to the sum X|H + Y|K and we can define the iterated c.r.q. (X|H)|K. We analyze the conjunction of two conditional events, introduced by the authors in a recent work, in the setting of coherence. We show that the conjoined conditional is a conditional random quantity, which may be a conditional event when there are logical dependencies. Moreover, we introduce the negation of the conjunction and, by applying De Morgan's Law, we obtain the disjoined conditional. Finally, we give the lower and upper bounds for the conjunction and disjunction of two conditional events, showing that the usual probabilistic properties continue to hold.
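The betting-scheme identity above lends itself to a quick numerical check. The sketch below is illustrative only (not from the paper; the probability values are arbitrary assumptions): it simulates the extended indicator EH + P(E|H)H^c of a conditional event and confirms that its long-run average equals the assessed conditional probability, which is exactly why the bet is fair: when H fails, the bet is called off and the price is returned.

```python
import random

# Sketch of de Finetti's betting interpretation of a conditional event E|H.
# The (indicator of the) conditional event is taken to be EH + p*H^c, where
# p = P(E|H) is the announced price. Coherence forces p = P(EH)/P(H).
# The probabilities below are arbitrary illustrative choices.

random.seed(0)

def simulate(p_h=0.5, p_e_given_h=0.3, trials=200_000):
    p = p_e_given_h                      # announced price for E|H
    total = 0.0
    for _ in range(trials):
        h = random.random() < p_h        # does the conditioning event H occur?
        e = h and (random.random() < p_e_given_h)
        # value of the extended indicator: E*H + p*(1 - H)
        total += (1.0 if e else 0.0) if h else p
    return total / trials

# The long-run average of EH + p*H^c is p itself: the bet is "called off"
# (the price p is returned) whenever H fails.
avg = simulate()
assert abs(avg - 0.3) < 0.01
```

The key point the simulation makes concrete: because the stake is refunded on H^c, any price other than P(EH)/P(H) would expose the bettor to a sure loss, so the fair price is the conditional probability.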
We generalize, by a progressive procedure, the notions of conjunction and disjunction of two conditional events to the case of n conditional events. In our coherence-based approach, conjunctions and disjunctions are suitable conditional random quantities. We define the notion of negation, by verifying De Morgan's Laws. We also show that conjunction and disjunction satisfy the associative and commutative properties, and a monotonicity property. Then, we give some results on coherence of prevision assessments for some families of compounded conditionals; in particular we examine the Fréchet-Hoeffding bounds. Moreover, we study the reverse probabilistic inference from the conjunction $\mathcal{C}_{n+1}$ of $n+1$ conditional events to the family $\{\mathcal{C}_{n}, E_{n+1}|H_{n+1}\}$. We consider the relation with the notion of quasi-conjunction and we examine in detail the coherence of the prevision assessments related with the conjunction of three conditional events. Based on conjunction, we also give a characterization of p-consistency and of p-entailment, with applications to several inference rules in probabilistic nonmonotonic reasoning. Finally, we examine some non p-valid inference rules; then, we illustrate by an example two methods which allow one to suitably modify non p-valid inference rules in order to get inferences which are p-valid.
Starting from a recent paper by S. Kaufmann, we introduce a notion of conjunction of two conditional events and then we analyze it in the setting of coherence. We give a representation of the conjoined conditional and we show that this new object is a conditional random quantity, whose set of possible values normally contains the probabilities assessed for the two conditional events. We examine some cases of logical dependencies, where the conjunction is a conditional event; moreover, we give the lower and upper bounds on the conjunction. We also examine an apparent paradox concerning stochastic independence which can actually be explained in terms of uncorrelation. We briefly introduce the notions of disjunction and iterated conditioning and we show that the usual probabilistic properties still hold.
We deepen the study of conjoined and disjoined conditional events in the setting of coherence. These objects, unlike in other approaches, are defined in the framework of conditional random quantities. We show that some well known properties, valid in the case of unconditional events, still hold in our approach to logical operations among conditional events. In particular we prove a decomposition formula and a related additive property. Then, we introduce the set of conditional constituents generated by $n$ conditional events and we show that they satisfy the basic properties valid in the case of unconditional events. We obtain a generalized inclusion-exclusion formula, which can be interpreted by introducing a suitable distributive property. Moreover, under logical independence of basic unconditional events, we give two necessary and sufficient coherence conditions. The first condition gives a geometrical characterization for the coherence of prevision assessments on a family $F$ constituted by $n$ conditional events and all possible conjunctions among them. The second condition characterizes the coherence of prevision assessments defined on $F\cup K$, where $K$ is the set of conditional constituents associated with the conditional events in $F$. Then, we give some further theoretical results and we examine some examples and counterexamples. Finally, we make a comparison with other approaches and we illustrate some theoretical aspects and applications.
There is wide support in logic, philosophy, and psychology for the hypothesis that the probability of the indicative conditional of natural language, $P(\textit{if } A \textit{ then } B)$, is the conditional probability of $B$ given $A$, $P(B|A)$. We identify a conditional which is such that $P(\textit{if } A \textit{ then } B)= P(B|A)$ with de Finetti's conditional event, $B|A$. An objection to making this identification in the past was that it appeared unclear how to form compounds and iterations of conditional events. In this paper, we illustrate how to overcome this objection with a probabilistic analysis, based on coherence, of these compounds and iterations. We interpret the compounds and iterations as conditional random quantities which, given some logical dependencies, may reduce to conditional events. We show how the inference to $B|A$ from $A$ and $B$ can be extended to compounds and iterations of both conditional events and biconditional events. Moreover, we determine the respective uncertainty propagation rules. Finally, we make some comments on extending our analysis to counterfactuals.
A study is reported testing two hypotheses about a close parallel relation between indicative conditionals, if A then B, and conditional bets, I bet you that if A then B. The first is that both the indicative conditional and the conditional bet are related to the conditional probability, P(B|A). The second is that de Finetti's three-valued truth table has psychological reality for both types of conditional – true, false, or void for indicative conditionals and win, lose, or void for conditional bets. The participants were presented with an array of chips in two different colours and two different shapes, and an indicative conditional or a conditional bet about a random chip. They had to make judgments in two conditions: either about the chances of making the indicative conditional true or false or about the chances of winning or losing the conditional bet. The observed distributions of responses in the two conditions were generally related to the conditional probability, supporting the first hypothesis. In addition, a majority of participants in further conditions chose the third option, "void", when the antecedent of the conditional was false, supporting the second hypothesis.
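The three-valued evaluation described above can be made concrete with a toy version of the chip task (a hedged sketch; the particular counts are invented, not the study's materials). Each chip makes the conditional "if the chip is square, then it is red" true, false, or void, and the proportion of true cases among the non-void ones recovers the conditional probability P(red | square):

```python
from fractions import Fraction

# Illustrative chip array (counts are invented for this sketch):
# shape x colour, evaluated against "if square then red" on
# de Finetti's three-valued table: true, false, or void.
chips = (
    [("square", "red")] * 4 +    # antecedent true, consequent true  -> true
    [("square", "blue")] * 2 +   # antecedent true, consequent false -> false
    [("round", "red")] * 3 +     # antecedent false                  -> void
    [("round", "blue")] * 3      # antecedent false                  -> void
)

def verdict(shape, colour):
    if shape != "square":
        return "void"            # bet called off: antecedent fails
    return "true" if colour == "red" else "false"

counts = {"true": 0, "false": 0, "void": 0}
for shape, colour in chips:
    counts[verdict(shape, colour)] += 1

# Conditional probability P(red | square) = true / (true + false)
p_cond = Fraction(counts["true"], counts["true"] + counts["false"])
assert p_cond == Fraction(2, 3)  # 4 / (4 + 2)
assert counts["void"] == 6       # every round chip yields "void"
```

On de Finetti's table the void cases drop out of the evaluation, which is why judged chances track P(B|A) rather than the probability of the material conditional (which would count all the void cases as true).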
Non-commuting quantities and hidden parameters – Wave-corpuscular dualism and hidden parameters – Local or nonlocal hidden parameters – Phase space in quantum mechanics – Weyl, Wigner, and Moyal – Von Neumann's theorem about the absence of hidden parameters in quantum mechanics and Hermann – Bell's objection – Quantum-mechanical and mathematical incommeasurability – Kochen and Specker's idea about their equivalence – The notion of partial algebra – Embeddability of a qubit into a bit – Quantum computer is not a Turing machine – Is continuality universal? – Diffeomorphism and velocity – Einstein's general principle of relativity – „Mach's principle“ – The Skolemian relativity of the discrete and the continuous – The counterexample in § 6 of their paper – About the classical tautology which is untrue being replaced by the statements about commeasurable quantum-mechanical quantities – Logical hidden parameters – The undecidability of the hypothesis about hidden parameters – Wigner's work and Weyl's previous one – Lie groups, representations, and psi-function – From a qualitative to a quantitative expression of relativity – psi-function, or the discrete by the random – Bartlett's approach – psi-function as the characteristic function of a random quantity – Discrete and/or continual description – Quantity and its "digitalized projection" – The idea of „velocity-probability“ – The notion of probability and the light speed postulate – Generalized probability and its physical interpretation – A quantum description of macro-world – The period of the associated de Broglie wave and the length of now – Causality equivalently replaced by chance – The philosophy of quantum information and religion – Einstein's thesis about "the consubstantiality of inertia and weight" – Again about the interpretation of complex velocity – The speed of time – Newton's law of inertia and Lagrange's formulation of mechanics – Force and effect – The theory of tachyons and general relativity – Riesz's representation theorem – The notion of covariant world line – Encoding a world line by psi-function – Spacetime and qubit – psi-function by qubits – About the physical interpretation of both the complex axes of a qubit – The interpretation of the self-adjoint operators components – The world line of an arbitrary quantity – The invariance of the physical laws towards quantum object and apparatus – Hilbert space and that of Minkowski – The relationship between the coefficients of the psi-function and the qubits – World line = psi-function + self-adjoint operator – Reality and description – Does a „curved“ Hilbert space exist? – The axiom of choice, or when is a flattening of Hilbert space possible? – But why not flatten also pseudo-Riemannian space? – The commutator of conjugate quantities – Relative mass – The strokes of self-movement and its philosophical interpretation – The self-perfection of the universe – The generalization of quantity in quantum physics – An analogy of the Feynman formalism – Feynman and many-world interpretation – The psi-function of various objects – Countable and uncountable basis – Generalized continuum and arithmetization – Field and entanglement – Function as coding – The idea of a „curved“ Descartes product – The environment of a function – Another view of the notion of velocity-probability – Reality and description – Hilbert space as a model both of object and description – The notion of holistic logic – Physical quantity as the information about it – Cross-temporal correlations – The forecasting of the future – Description in separable and inseparable Hilbert space – „Forces“ or „miracles“ – Velocity or time – The notion of non-finite set – Dasein or Dazeit – The trajectory of the whole – Ontological and onto-theological difference – An analogy of the Feynman and many-world interpretation – psi-function as physical quantity – Things in the world and instances in time – The generation of the physical by the mathematical – The generalized notion of observer – Subjective or objective probability – Energy as the change of probability per unit of time – The generalized principle of least action from a new viewpoint – The exception of two dimensions and Fermat's last theorem.
A large number of essays address the Sleeping Beauty problem, which is claimed to undermine the validity of Bayesian inference and Bas Van Fraassen's 'Reflection Principle'. In this study a straightforward analysis of the problem based on probability theory is presented. The key difference from previous works is that, apart from the random experiment imposed by the problem's description, a different one is also considered, in order to dispel the confusion about the involved conditional probabilities. The results of the analysis indicate that no inconsistency takes place, and that both Bayesian inference and the 'Reflection Principle' are valid.
Van Fraassen's Judy Benjamin problem asks how one ought to update one's credence in A upon receiving evidence of the sort "A may or may not obtain, but B is k times likelier than C", where {A, B, C} is a partition. Van Fraassen's solution, in the limiting case of increasing k, recommends a posterior converging to P(A|A∪B), where P is one's prior probability function. Grove and Halpern, and more recently Douven and Romeijn, have argued that one ought to leave credence in A unchanged, i.e. fixed at P(A). We argue that while the former approach is superior, it brings about a Reflection violation due in part to neglect of a "regression to the mean" phenomenon, whereby when C is eliminated by random evidence that leaves A and B alive, the ratio P(A):P(B) ought to drift in the direction of 1:1.
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey's classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
In this paper, I will discuss the various ways in which intentions can be said to be conditional, with particular attention to the internal conditions on the intentions' content. I will first consider what it takes to carry out a conditional intention. I will then discuss how the distinctive norms of intention apply to conditional intentions and whether conditional intentions are a weaker sort of commitment than unconditional ones. This discussion will lead to the idea of what I call the 'deep structure' of intentions. Roughly, this is the idea that the conditional nature of our intentions is only partially made explicit in the expressions we use to communicate our intentions and in the explicit form of our thinking about and reasoning with them. Most conditions that qualify our intentions are part of a deep functional structure that can be evinced by observing the actual psychological functioning of intentions and by considering the rational requirements that they engage. I will argue that the deep structure of intentions is characteristically conditional. Genuinely unconditional intentions are only limiting instances of conditional intentions and their contribution to agency can only be understood in light of this fact. I will conclude by showing that the characteristic conditional structure of intentions is intimately related to distinctive features of human agency, especially to its unity over time.
Resemblances obtain not only between objects but between properties. Resemblances of the latter sort - in particular resemblances between quantitative properties - prove to be the downfall of a well-known theory of universals, namely the one presented by David Armstrong. This paper examines Armstrong's efforts to account for such resemblances within the framework of his theory and also explores several extensions of that theory. All of them fail.
I argue that taking the Practical Conditionals Thesis seriously demands a new understanding of the semantics of such conditionals. Practical Conditionals Thesis: A practical conditional [if A][ought B] expresses B's conditional preferability given A. Paul Weirich has argued that the conditional utility of a state of affairs B on A is to be identified with the degree to which it is desired under indicative supposition that A. Similarly, exploiting the PCT, I will argue that the proper analysis of indicative practical conditionals is in terms of what is planned, desired, or preferred, given suppositional changes to an agent's information. Implementing such a conception of conditional preference in a semantic analysis of indicative practical conditionals turns out to be incompatible with any approach which treats the indicative conditional as expressing non-vacuous universal quantification over some domain of relevant antecedent-possibilities. Such analyses, I argue, encode a fundamental misunderstanding of what it is to be best, given some condition. The analysis that does the best vis-à-vis the PCT is, instead, one that blends a Context-Shifty account of indicative antecedents with an Expressivistic, or non-propositional, treatment of their practical consequents.
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of 'might' counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on 'might' counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett's Hypothesis. I argue that independently of issues to do with the proper analysis of might-counterfactuals, Bennett's Hypothesis is inconsistent with CEM. But Bennett's Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
Children approach counterfactual questions about stories with a reasoning strategy that falls short of adults' Counterfactual Reasoning (CFR). It was dubbed "Basic Conditional Reasoning" (BCR) in Rafetseder et al. (Child Dev 81(1):376–389, 2010). In this paper we provide a characterisation of the differences between BCR and CFR using a distinction between permanent and nonpermanent features of stories and Lewis/Stalnaker counterfactual logic. The critical difference pertains to how consistency between a story and a conditional antecedent incompatible with a nonpermanent feature of the story is achieved. Basic conditional reasoners simply drop all nonpermanent features of the story. Counterfactual reasoners preserve as much of the story as possible while accommodating the antecedent.
The conditional analysis of dispositions is widely rejected, mainly due to counterexamples in which dispositions are either "finkish" or "masked." David Lewis proposed a reformed conditional analysis. This view avoids the problem of finkish dispositions, but it fails to solve the problem of masking. I will propose a reformulation of Lewis' analysis, and I will argue that this reformulation can easily be modified so that it avoids the problem of masking. In the final section, I will address the challenge that some dispositions appear to lack any stimulus condition, and I will briefly turn to the issue of reductionism.
Alexander Rosenberg (1994) claims that the omniscient viewpoint of the evolutionary process would have no need for the concept of random drift. However, his argument fails to take into account all of the processes which are considered to be instances of random drift. A consideration of these processes shows that random drift is not eliminable even given a position of omniscience. Furthermore, Rosenberg must take these processes into account in order to support his claims that evolution is deterministic and that evolutionary biology is an instrumental science.
This paper outlines an account of conditionals, the evidential account, which rests on the idea that a conditional is true just in case its antecedent supports its consequent. As we will show, the evidential account exhibits some distinctive logical features that deserve careful consideration. On the one hand, it departs from the material reading of 'if ... then' exactly in the way we would like it to depart from that reading. On the other, it significantly differs from the non-material accounts which hinge on the Ramsey Test, advocated by Adams, Stalnaker, Lewis, and others.
Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's ranking theoretic approach to conditionals.
Conditional excluded middle (CEM) is the following principle of counterfactual logic: either, if it were the case that φ, it would be the case that ψ, or, if it were the case that φ, it would be the case that not-ψ. I will first show that CEM entails the identity of indiscernibles, the falsity of physicalism, and the failure of the modal to supervene on the categorical and of the vague to supervene on the precise. I will then argue that we should accept these startling conclusions, since CEM is valid.
The main thesis of this paper is that, whereas an intention simpliciter is a commitment to a plan of action, a conditional intention is a commitment to a contingency plan, a commitment about what to do upon (learning of) a certain contingency relevant to one's interests obtaining. In unconditional intending, our commitment to acting is not contingent on finding out that some condition obtains. In conditional intending, we intend to undertake an action on some condition, impinging on our interests, which is as yet unsettled for us, but about which we can find out without undue cost.
Michael Fara's ‘habitual analysis’ of disposition ascriptions is equivalent to a kind of ceteris paribus conditional analysis which has no evident advantage over Martin's well known and simpler analysis. I describe an unsatisfactory hypothetical response to Martin's challenge, which is lacking in just the same respect as the analysis considered by Martin; Fara's habitual analysis is equivalent to this hypothetical analysis. The feature of the habitual analysis that is responsible for this cannot be harmlessly excised, for the resulting analysis would be subject to familiar counter-examples.
When does it make sense to act randomly? A persuasive argument from Bayesian decision theory legitimizes randomization essentially only in tie-breaking situations. Rational behaviour in humans, non-human animals, and artificial agents, however, often seems indeterminate, even random. Moreover, rationales for randomized acts have been offered in a number of disciplines, including game theory, experimental design, and machine learning. A common way of accommodating some of these observations is by appeal to a decision-maker's bounded computational resources. Making this suggestion both precise and compelling is surprisingly difficult. Toward this end, I propose two fundamental rationales for randomization, drawing upon diverse ideas and results from the wider theory of computation. The first unifies common intuitions in favour of randomization from the aforementioned disciplines. The second introduces a deep connection between randomization and memory: access to a randomizing device is provably helpful for an agent burdened with a finite memory. Aside from fit with ordinary intuitions about rational action, the two rationales also make sense of empirical observations in the biological world. Indeed, random behaviour emerges more or less where it should, according to the proposal.
von Fintel and Gillies (2007: 329–360) have proposed a dynamic strict conditional account of counterfactuals as an alternative to the standard variably strict account due to Stalnaker and Lewis. Von Fintel's view is motivated largely by so-called reverse Sobel sequences, about which the standard view seems to make the wrong predictions. More recently Moss (2012: 561–586) has offered a pragmatic/epistemic explanation that purports to explain the data without requiring abandonment of the standard view. So far the small amount of subsequent literature has focused primarily on the original class of cases motivating the strict conditional view. What is needed in the debate is an examination of the predictions of the dynamic strict conditional account for a broader range of data. I undertake this task here, presenting a slew of cases that are problematic for the strict conditional view but not for Moss's view, and considering some possible responses. Ultimately I take my contribution to constitute a significant blow to the dynamic strict conditional view, though not a decisive verdict against it.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak–Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
This paper makes two essential claims about the nature of shame and shame punishment. I argue that, if we properly understand the nature of shame, it is sometimes justifiable to shame others in the context of a pluralistic multicultural society. I begin by assessing the accounts of shame provided by Cheshire Calhoun (2004) and Julien Deonna, Raffaele Rodogno, & Fabrice Teroni (2012). I argue that both views have problems. I defend a theory of shame and embarrassment that connects both emotions to "whole-self" properties. Shame and embarrassment, I claim, are products of the same underlying emotion. I distinguish between moralized and nonmoralized shame in order to show when, and how, moral and non-moral shame may be justly deployed. Shame is appropriate, I argue, if and only if it targets malleable moral or non-moral normative imperfections of a person's "whole-self." Shame is unjustifiable when it targets durable aspects of a person's "whole-self." I conclude by distinguishing shame punishments from guilt punishments and show that my account can explain why it is wrong to shame individuals on account of their race, sex, gender, or body while permitting us to sometimes levy shame and shame punishment against others, even those otherwise immune to moral reasons.
Newton published his deduction of universal gravity in Principia (first ed., 1687). To establish the universality (the particle-to-particle nature) of gravity, Newton must establish the additivity of mass. I call ‘additivity’ the property a body's quantity of matter has just in case, if gravitational force is proportional to that quantity, the force can be taken to be the sum of forces proportional to each particle's quantity of matter. Newton's argument for additivity is obscure. I analyze and assess manuscript versions of Newton's initial argument within his initial deduction, dating from early 1685. Newton's strategy depends on distinguishing two quantities of matter, which I call ‘active’ and ‘passive’, by how they are measured. These measurement procedures frame conditions on the additivity of each quantity so measured. While Newton has direct evidence for the additivity of passive quantity of matter, he does not for that of the active quantity. Instead, he tries to infer the latter from the former via conceptual analyses of the third law of motion grounded largely on analogies to magnetic attractions. The conditions needed to establish passive additivity frustrate Newton's attempted inference to active additivity.
Explaining the behaviour of ecosystems is one of the key challenges for the biological sciences. Since 2000, new-mechanicism has been the main model to account for the nature of scientific explanation in biology. The universality of the new-mechanist view in biology has, however, been put into question due to the existence of explanations that account for some biological phenomena in terms of their mathematical properties (mathematical explanations). Supporters of mathematical explanation have argued that the explanation of the behaviour of ecosystems is usually provided in terms of their mathematical properties, and not in mechanistic terms. They have intensively studied the explanation of the properties of ecosystems that behave following the rules of a non-random network. However, no attention has been devoted to the study of the nature of the explanation in those that form a random network. In this paper, we cover that gap by analysing the explanation of the stability behaviour of the microbiome recently elaborated by Coyte and colleagues, to determine whether it fits with the model of explanation suggested by the new-mechanists or by the defenders of mathematical explanation. Our analysis of this case study supports three theses: (1) that the explanation is not given solely in terms of mechanisms, as the new-mechanists understand the concept; (2) that the mathematical properties that describe the system play an essential explanatory role, but they do not exhaust the explanation; (3) that a non-previously identified appeal to the type of interactions that the entities in the network can exhibit, as well as their abundance, is also necessary for Coyte and colleagues' account to be fully explanatory. From the combination of these three theses we argue for the necessity of an integrative pluralist view of the nature of behaviour explanation when this is given by appealing to the existence of a random network.
Our aim in the present paper is to investigate, from the standpoint of truth-theoretic semantics, English tense, temporal designators and quantifiers, and other expressions we use to relate ourselves and other things to the temporal order. Truth-theoretic semantics provides a particularly illuminating standpoint from which to discuss issues about the semantics of tense, and their relation to thoughts at, and about, times. Tense, and temporal modifiers, contribute systematically to conditions under which sentences we utter are true or false. A Tarski-style truth-theoretic semantics, by requiring explicitly represented truth conditions, helps to sharpen questions about the function of tense, and to deepen our insight into the contribution the tenses and temporal modifiers make to what we say by using them.
The article evaluates the Domain Postulate of the Classical Model of Science and the related Aristotelian prohibition rule on kind-crossing as interpretative tools in the history of the development of mathematics into a general science of quantities. Special reference is made to Proclus' commentary on the first book of Euclid's Elements, to the sixteenth-century translations of Euclid's work into Latin, and to the works of Stevin, Wallis, Viète and Descartes. The prohibition rule on kind-crossing formulated by Aristotle in the Posterior Analytics is used to distinguish between conceptions that share the same name but are substantively different: for example the search for a broader genus including all mathematical objects; the search for a common character of different species of mathematical objects; and the effort to treat magnitudes as numbers.
In this paper, I discuss Ludwig's systematic and illuminating account of conditional intentions, with particular reference to my own view (presented in "Conditional Intentions", Noûs, 2009). In contrast to Ludwig, I argue that we should prefer a formal characterization of conditional intentions rather than a more substantial one in terms of reasons for action (although the conditions that qualify an intention bear on the reasonableness and justifiability of the intention). I then defend a partially different taxonomy of the conditions that might qualify an intention and discuss how the difference bears on the application of the rational pressures of intention. I go on to acknowledge that Ludwig is correct in insisting on the centrality of the *epistemic* element in the antecedent of conditional intentions. But I argue that even when a condition has been settled (that is, when the agent has ascertained that it holds), the intention remains genuinely conditional. In my view, conditions that have been settled are not just part of the background of planning: they continue to qualify the content of the intention (although they come to play a different role when settled). I then discuss how the settling of a condition does not interrupt the *continuity* of the content and structure of the intention---in contrast to Ludwig's account, where the conditional intention appears to give rise, when the conditions are taken as settled, to a distinct *unconditional* intention. I close by discussing the serious concern that my way of characterizing conditional intentions threatens to swallow most intentions, given that it is unlikely that we have intentions that do not rest on our accepting the obtaining of relevant conditions.
Most contractualist ethical theories have a subjunctivist structure. This means that they attempt to make sense of right and wrong in terms of a set of principles which would be accepted in some idealized, non-actual circumstances. This makes these views vulnerable to the so-called conditional fallacy objection: the moral principles that are appropriate for the idealized circumstances fail to give a correct account of what is right and wrong in ordinary situations. This chapter uses two versions of contractualism to illustrate this problem: Nicholas Southwood's, and a standard contractualist theory inspired by T.M. Scanlon's contractualism. It then develops a version of Scanlon's view that can avoid the problem. This solution is based on the idea that we also need to compare different inculcation elements of moral codes in the contractualist framework. This idea also provides a new solution to the problem of the level of social acceptance at which principles should be compared.
In a series of pre-registered studies, we explored (a) the difference between people's intuitions about indeterministic scenarios and their intuitions about deterministic scenarios, (b) the difference between people's intuitions about indeterministic scenarios and their intuitions about neurodeterministic scenarios (that is, scenarios where the determinism is described at the neurological level), (c) the difference between people's intuitions about neutral scenarios (e.g., walking a dog in the park) and their intuitions about negatively valenced scenarios (e.g., murdering a stranger), and (d) the difference between people's intuitions about free will and responsibility in response to first-person scenarios and third-person scenarios. We predicted that once we focused participants' attention on the two different abilities to do otherwise available to agents in indeterministic and deterministic scenarios, their intuitions would support natural incompatibilism—the view that laypersons judge that free will and moral responsibility are incompatible with determinism. This prediction was borne out by our findings.
In this paper I try to show that semantics can explain word-to-world relations and that sentences can have meanings that determine truth-conditions. Critics like Chomsky typically maintain that only speakers denote, i.e., only speakers, by using words in one way or another, represent entities or events in the world. However, according to their view, individual acts of denotation are not explained just by virtue of speakers' semantic knowledge (since, according to them, semantic knowledge is very scarce: see Pietroski, 2018). Against this view, I will hold that, in the typical cases considered, semantic knowledge can account for the denotational uses of words of individual speakers.
Expressing a widely-held view, David Hitchcock claims that "an enthymematic argument ... assumes at least the truth of the argument's associated conditional ... whose antecedent is the conjunction of the argument's explicit premises and whose consequent is the argument's conclusion." But even definitionally, this view is problematic, since an argument's being "enthymematic" or incomplete with respect to its explicit premises means that the conclusion is not implied by these premises alone. The paper attempts to specify the ways in which the view is incorrect, as well as seemingly correct (e.g., the case of a Modus Ponens wherein the major premise is implicit).
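The notion of an associated conditional at issue here can be made concrete with a small propositional sketch (an illustration of my own, not taken from the paper): for an argument with explicit premise p and conclusion q, the associated conditional is p → q, and adding it as an implicit premise turns the enthymeme into a valid Modus Ponens.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    # An argument is valid iff every valuation making all premises true
    # also makes the conclusion true.
    return all(conclusion(*v)
               for v in product([False, True], repeat=n_vars)
               if all(prem(*v) for prem in premises))

# Enthymeme: explicit premise p, conclusion q.
p = lambda p_, q_: p_
q = lambda p_, q_: q_
assoc = lambda p_, q_: (not p_) or q_   # associated conditional: p -> q

print(valid([p], q, 2))         # False: q does not follow from p alone
print(valid([p, assoc], q, 2))  # True: Modus Ponens once p -> q is added
```

This is exactly the sense in which the associated conditional "completes" the argument, and why its assumption by the arguer is the contested point.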
Since it was presented in 1963, Chisholm's paradox has attracted constant attention in the deontic logic literature, but without the emergence of any definitive solution. We claim this is due to its having no single solution. The paradox actually presents many challenges to the formalization of deontic statements, including (1) context sensitivity of unconditional oughts, (2) formalizing conditional oughts, and (3) distinguishing generic from nongeneric oughts. Using the practical interpretation of 'ought' as a guideline, we propose a linguistically motivated logical solution to each of these problems, and explain the relation of the solution to the problem of contrary-to-duty obligations.
The standard treatment of conditional probability leaves conditional probability undefined when the conditioning proposition has zero probability. Nonetheless, some find the option of extending the scope of conditional probability to include zero-probability conditions attractive or even compelling. This article reviews some of the pitfalls associated with this move, and concludes that, for the most part, probabilities conditional on zero-probability propositions are more trouble than they are worth.
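The ratio analysis the article discusses, P(A|B) = P(A ∩ B) / P(B), can be sketched in a few lines to show exactly where the zero-probability gap arises (a minimal illustration of my own, not the article's; the sample space, the `conditional_probability` helper, and the zero-probability outcome 7 are all assumptions for the example):

```python
from fractions import Fraction

def conditional_probability(prob, a, b):
    """Ratio analysis: P(A|B) = P(A and B) / P(B).

    `prob` maps each outcome to its probability; `a` and `b` are sets of
    outcomes (events). Returns None when P(B) = 0, mirroring the standard
    treatment on which P(A|B) is left undefined.
    """
    p_b = sum(prob[w] for w in b)
    if p_b == 0:
        return None  # conditioning on a zero-probability event: undefined
    p_ab = sum(prob[w] for w in a & b)
    return p_ab / p_b

# A fair die, with a zero-probability outcome 7 added for illustration.
prob = {w: Fraction(1, 6) for w in range(1, 7)}
prob[7] = Fraction(0)

evens = {2, 4, 6}
high = {4, 5, 6, 7}
print(conditional_probability(prob, evens, high))  # prints 2/3
print(conditional_probability(prob, evens, {7}))   # prints None
```

Extending the scope of conditional probability to zero-probability conditions, as the article notes, amounts to assigning a value in the `None` branch — and the pitfalls it reviews concern how any such assignment interacts with the rest of the probability calculus.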
Kant's obscure essay entitled An Attempt to Introduce the Concept of Negative Quantities into Philosophy has received virtually no attention in the Kant literature. The essay has been available in English translation for over twenty years, though not widely so. In his original 1983 translation, Gordon Treash argues that the Negative Quantities essay should be understood as part of an ongoing response to the philosophy of Christian Wolff. Like Hoffmann and Crusius before him, the Kant of 1763 is at odds with the Leibnizian-Wolffian tradition of deductive metaphysics. He joins his predecessors in rejecting the assumption that the law of contradiction alone can provide proof of the principle of sufficient reason: "In his rejection of the possibility of deducing all philosophic truth from the law of contradiction, however, and in the clear recognition that this impossibility has immediate consequences for defense of the law of sufficient reason, Kant's work most definitely and positively constitutes a line of succession from Hoffmann and Crusius" (Treash, 1983, p. 25). The recognition that Kant's Negative Quantities essay is part of a response to the tradition of deductive metaphysics is, without a doubt, an important contribution to the Kant literature. However, there is still more to be said about this neglected essay. The full significance of the paper becomes known through its ties to a second, empiricist line of succession. Clues to this second line of succession can be found in Kant's prefatory remarks concerning Euler's 1748 Reflections on Space and Time and Crusius' 1749 Guidance in the Orderly and Careful Consideration of Natural Events. As I will show, these prefatory remarks suggest a reading of Kant's Negative Quantities paper that reaches beyond German deductive metaphysics to engage a debate regarding the application of mathematics in philosophy initiated by George Berkeley.
This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading of both conditionality and conditional probability in general. This is an ontological account of conditionals that includes an irreducible dispositional connection between the antecedent and consequent conditions, and on which the conditional has to be treated as an indivisible whole rather than compositionally. The relevant type of conditionality is found in some well-defined groups of conditional statements. The account is not offered as a fully developed theory of conditionality, but it can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
The conventional wisdom about conditionals claims that (1) conditionals that have non-assertive acts in their consequents, such as commands and promises, are not plausibly interpreted as material implications; (2) the most promising hypothesis about these sentences is conditional-assertion theory, which explains a conditional as a conditional speech act, i.e., a performance of a speech act given the assumption of the antecedent. This hypothesis has far-reaching and revisionist consequences, because conditional speech acts are not synonymous with a proposition with truth conditions. This paper argues against this prevalent view in two steps. First, it presents a battery of objections against conditional-assertion theory. Second, it argues that these examples can be convincingly interpreted as categorical assertions of material implications.
Priest has provided a simple tableau calculus for Chellas's conditional logic Ck. We provide rules which, when added to Priest's system, result in tableau calculi for Chellas's CK and Lewis's VC. Completeness of these tableaux, however, relies on the cut rule.
Polysemy seems to be a relatively neglected phenomenon within philosophy of language as well as in many quarters in linguistic semantics. Not all variations in a word's contribution to truth-conditional contents are to be thought of as expressions of the phenomenon of polysemy, but it can be argued that many are. Polysemous terms are said to contribute senses or aspects to truth-conditional contents. In this paper, I will make use of the notion of aspect to argue that some apparently wild variations in an utterance's truth conditions are instead quite systematic. In particular, I will focus on Travis' much debated green leaves case and explain it in terms of the polysemy of the noun, and in particular, in terms of the as-it-is and the as-it-looks aspects associated with kind words.
The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. 'Truth-conditional pragmatics' is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically 'freeing up' the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts fall into this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression e can enter into the compositional processes that determine the meaning of more complex expressions which contain e. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework to pursue the project of truth-conditional pragmatics.
Studies of several languages, including Swahili [swa], suggest that realis (actual, realizable) and irrealis (unlikely, counterfactual) meanings vary along a scale (e.g., 0.0–1.0). T-values (True, False) and P-values (probability) account for this pattern. However, logic cannot describe or explain (a) epistemic stances toward beliefs, (b) deontic and dynamic stances toward states-of-being and actions, and (c) context-sensitivity in conditional interpretations. (a)–(b) are deictic properties (positions, distance) of 'embodied' Frames of Reference (FoRs)—space-time loci in which agents perceive and from which they contextually act (Rohrer 2007a, b). I argue that the embodied FoR describes and explains (a)–(c) better than T-values and P-values alone. In this cognitive-functional-descriptive study, I represent these embodied FoRs using Unified Modeling Language (UML) mental spaces in analyzing Swahili conditional constructions to show how necessary, sufficient, and contributing conditions obtain on the embodied FoR networks level.
A puzzle of an unmarked clock, used by Timothy Williamson to question the KK principle, was separately adapted by David Christensen and Adam Elga to critique a principle of Rational Reflection. Both authors, we argue, flout the received relationship between ideal agency and the classical distinction between systematic and random error, namely that ideal agents are subject only to the latter. As a result, these criticisms miss their mark.