A hard choice is a situation in which an agent is unable to make a justifiable choice from a given menu of alternatives. Our objective is to present a systematic treatment of the axiomatic structure of such situations. To do so, we draw on and contribute to the study of choice functions that can be indecisive, i.e., that may fail to select a non-empty set for some menus. In this more general framework, we present new characterizations of two well-known choice rules, the maximally dominant choice rule and the top-cycle choice rule. Together with existing results, this yields an understanding of the circumstances in which hard choices arise.
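The maximally dominant rule mentioned above can come up empty, which is exactly how an indecisive choice function models a hard choice. A minimal sketch, assuming finite menus and a weak preference relation encoded as a set of ordered pairs (the function name is illustrative, not from the paper):

```python
def maximally_dominant(menu, weak_pref):
    # weak_pref: pairs (x, y) meaning x is at least as good as y.
    # Select the alternatives weakly preferred to everything in the
    # menu (themselves included); the result may be empty, which is
    # how an indecisive choice function represents a hard choice.
    return {x for x in menu if all((x, y) in weak_pref for y in menu)}
```

On a menu where no alternative weakly dominates all the others, the rule returns the empty set rather than an arbitrary pick.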
We investigate epistemic independence for choice functions in a multivariate setting. This work is a continuation of earlier work of one of the authors [23], and our results build on the characterization of choice functions in terms of sets of binary preferences recently established by De Bock and De Cooman [7]. We obtain the independent natural extension in this framework. Given the generality of choice functions, our expression for the independent natural extension is the most general one we are aware of, and we show how it implies the independent natural extension for sets of desirable gambles, and therefore also for less informative imprecise-probabilistic models. Once this is in place, we compare this concept of epistemic independence to another independence concept for choice functions proposed by Seidenfeld [22], which De Bock and De Cooman [1] have called S-independence. We show that neither is more general than the other.
Normative thinking about addiction has traditionally been divided between, on the one hand, a medical model which sees addiction as a disease characterized by compulsive and relapsing drug use over which the addict has little or no control and, on the other, a moral model which sees addiction as a choice characterized by voluntary behaviour under the control of the addict. Proponents of the former appeal to evidence showing that regular consumption of drugs causes persistent changes in the brain structures and functions known to be involved in the motivation of behaviour. On this evidence, it is often concluded that becoming addicted involves a transition from voluntary, chosen drug use to non-voluntary compulsive drug use. Against this view, proponents of the moral model provide ample evidence that addictive drug use involves voluntarily chosen behaviour. In this paper we argue that although each is right about something, both views are mistaken. We present a third model that neither rules out the view of addictive drug use as compulsive, nor the view that it involves voluntarily chosen behaviour.
According to an often repeated definition, economics is the science of individual choices and their consequences. The emphasis on choice is often used – implicitly or explicitly – to mark a contrast between markets and the state: while the price mechanism in well-functioning markets preserves freedom of choice and still efficiently coordinates individual actions, the state has to rely to some degree on coercion to coordinate individual actions. Since coercion should not be used arbitrarily, coordination by the state needs to be legitimized by the consent of its citizens. The emphasis in economic theory on freedom of choice in the market sphere suggests that legitimization in the market sphere is “automatic” and that markets can thus avoid the typical legitimization problem of the state. In this paper, I shall question the alleged dichotomy between legitimization in the market and in the state. I shall argue that it is the result of a conflation of choice and consent in economics and show how an independent concept of consent makes the need for legitimization of market transactions visible. Footnote 1: For helpful comments and suggestions I am most grateful to Marc Fleurbaey, Alain Marciano, Herlinde Pauer-Studer, Thomas Pogge, Hans Bernhard Schmid, to seminar or conference participants in Aix-Marseille, Tutzing, Paris, and Amsterdam, and to two anonymous referees.
We reexamine some of the classic problems connected with the use of cardinal utility functions in decision theory, and discuss Patrick Suppes's contributions to this field in light of a reinterpretation we propose for these problems. We analytically decompose the doctrine of ordinalism, which only accepts ordinal utility functions, and distinguish between several doctrines of cardinalism, depending on what components of ordinalism they specifically reject. We identify Suppes's doctrine with the major deviation from ordinalism that conceives of utility functions as representing preference differences, while being nonetheless empirically related to choices. We highlight the originality, promises and limits of this choice-based cardinalism.
The social welfare functional approach to social choice theory fails to distinguish a genuine change in individual well-beings from a merely representational change due to the use of different measurement scales. A generalization of the concept of a social welfare functional is introduced that explicitly takes account of the scales that are used to measure well-beings so as to distinguish between these two kinds of changes. This generalization of the standard theoretical framework results in a more satisfactory formulation of welfarism, the doctrine that social alternatives are evaluated and socially ranked solely in terms of the well-beings of the relevant individuals. This scale-dependent form of welfarism is axiomatized using this framework. The implications of this approach for characterizing classes of social welfare orderings are also considered.
Scientists often diverge widely when choosing between research programs. This can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable—i.e. most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn’s analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually-conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory ‘acceptance’, including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness—especially by exaggerating the impact of more present-oriented virtues, or failing to stress how ‘competing’ theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.
After severe brain injury, one of the key challenges for medical doctors is to determine the patient’s prognosis. Who will do well? Who will not do well? Physicians need to know this, and families need to know this too, to address choices regarding the continuation of life-supporting therapies. However, current prognostication methods are insufficient to provide a reliable prognosis. Functional magnetic resonance imaging (fMRI) holds considerable promise for improving the accuracy of prognosis in acute brain injury patients. Nonetheless, research on functional MRI in the intensive care unit context is ethically challenging. These studies raise several ethical issues that have not been addressed so far. In this article, Prof. Charles Weijer and his co-workers provide a framework for researchers and ethics committees to design and review these studies in an ethically sound way.
What is the quantum state of the universe? Although there have been several interesting suggestions, the question remains open. In this paper, I consider a natural choice for the universal quantum state arising from the Past Hypothesis, a boundary condition that accounts for the time-asymmetry of the universe. The natural choice is given not by a wave function but by a density matrix. I begin by classifying quantum theories into two types: theories with a fundamental wave function and theories with a fundamental density matrix. The Past Hypothesis is compatible with infinitely many initial wave functions, none of which seems to be particularly natural. However, once we turn to density matrices, the Past Hypothesis provides a natural choice---the normalized projection onto the Past Hypothesis subspace in the Hilbert space. Nevertheless, the two types of theories can be empirically equivalent. To provide a concrete understanding of the empirical equivalence, I provide a novel subsystem analysis in the context of Bohmian theories. Given the empirical equivalence, it seems empirically underdetermined whether the universe is in a pure state or a mixed state. Finally, I discuss some theoretical payoffs of the density-matrix theories and present some open problems for future research. (Bibliographic note: the thesis was submitted for the Master of Science in mathematics at Rutgers University.)
The practice of screening potential users of reproductive services is of profound social and political significance. Access screening is inconsistent with the principles of equality and self-determination, and violates individual and group human rights. Communities that strive to function in accord with those principles should not permit access screening, even screening that purports to be a benign exercise of professional discretion. Because reproductive choice is controversial, regulation by law may be required in most jurisdictions to provide effective protection for reproductive rights. In Canada, for example, equal access can, and should, be guaranteed by federal regulations imposing strict conditions on the licences of fertility clinics.
Some researchers and autistic activists have recently suggested that because some ‘autism-related’ behavioural atypicalities have a function or purpose they may be desirable rather than undesirable. Examples of such behavioural atypicalities include hand-flapping, repeatedly ordering objects (e.g., toys) in rows, and profoundly restricted routines. A common view, as represented in the Diagnostic and Statistical Manual of Mental Disorders (DSM) IV-TR (APA, 2000), is that many of these behaviours lack adaptive function or purpose, interfere with learning, and constitute the non-social behavioural dysfunctions of those disorders making up the Autism Spectrum. As the DSM IV-TR continues to be the reference source of choice for professionals working with individuals with psychiatric difficulties, its characterization of the Autism Spectrum holds significant sway. We will suggest Extended Mind and Enactive Cognition Theories, which theorize that mind (or cognition) is embodied and environmentally embedded, as coherent conceptual and theoretical spaces within which to investigate the possibility that certain repetitive behaviours exhibited by autistics possess functions or purposes that make them desirable. As lenses through which to re-examine ‘autism-related’ behavioural atypicalities, these theories not only open up explanatory possibilities underdeveloped in the research literature, but also cohere with how some autistics describe their own experience. Our position navigates a middle way between the view of autism as understood in terms of impairment, deficit and dysfunction and one that seeks to de-pathologize the Spectrum. In so doing we seek to contribute to a continuing dialogue between researchers, clinicians and self- or parent advocates.
Many fields (social choice, welfare economics, recommender systems) assume people express what benefits them via their 'revealed preferences'. Revealed preferences have well-documented problems when used this way, but are hard to displace in these fields because, as an information source, they are simple, universally applicable, robust, and high-resolution. In order to compete, other information sources (about participants' values, capabilities and functionings, etc.) would need to match this. I present a conception of values as *attention policies resulting from constitutive judgements*, and use it to build an alternative preference relation, Meaningful Choice, which retains many desirable features of revealed preference.
I provide a characterization of weakly pseudo-rationalizable choice functions---that is, choice functions rationalizable by a set of acyclic relations---in terms of hyper-relations satisfying certain properties. For those hyper-relations Nehring calls extended preference relations, the central characterizing condition is weaker than (hyper-relation) transitivity but stronger than (hyper-relation) acyclicity. Furthermore, the relevant type of hyper-relation can be represented as the intersection of a certain class of its extensions. These results generalize known, analogous results for path-independent choice functions.
This paper generalizes rationalizability of a choice function by a single acyclic binary relation to rationalizability by a set of such relations. Rather than selecting those options in a menu that are maximal with respect to a single binary relation, a weakly pseudo-rationalizable choice function selects those options that are maximal with respect to at least one binary relation in a given set. I characterize the class of weakly pseudo-rationalizable choice functions in terms of simple functional properties. This result also generalizes Aizerman and Malishevski's characterization of pseudo-rationalizable choice functions, that is, choice functions rationalizable by a set of total orders.
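The definition in this abstract is concrete enough to sketch directly. A minimal illustration, assuming finite menus and relations encoded as sets of strict-preference pairs (function names are mine, not the paper's):

```python
def maximal(menu, relation):
    # Options not strictly dominated by any other option in the menu;
    # relation holds pairs (x, y) meaning x is strictly better than y.
    return {x for x in menu
            if not any((y, x) in relation for y in menu if y != x)}

def weakly_pseudo_rationalized(menu, relations):
    # An option is chosen iff it is maximal with respect to at least
    # one acyclic relation in the set: the union of the maximal sets.
    return set().union(*(maximal(menu, rel) for rel in relations))
```

With a single relation this reduces to ordinary rationalizability; adding relations can only enlarge the chosen set.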
This paper presents a uniform semantic treatment of nonmonotonic inference operations that allow for inferences from infinite sets of premises. The semantics is formulated in terms of selection functions and is a generalization of the preferential semantics of Shoham (1987), (1988), Kraus, Lehmann, and Magidor (1990) and Makinson (1989), (1993). A selection function picks out from a given set of possible states (worlds, situations, models) a subset consisting of those states that are, in some sense, the most preferred ones. A proposition α is a nonmonotonic consequence of a set of propositions Γ iff α holds in all the most preferred Γ-states. In the literature on revealed preference theory, there are a number of well-known theorems concerning the representability of selection functions, satisfying certain properties, in terms of underlying preference relations. Such theorems are utilized here to give corresponding representation theorems for nonmonotonic inference operations. At the end of the paper, the connection between nonmonotonic inference and belief revision, in the sense of Alchourrón, Gärdenfors, and Makinson, is explored. In this connection, infinitary belief revision operations that allow for the revision of a theory with a possibly infinite set of propositions are introduced and characterized axiomatically.
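The preferential semantics described here is easy to prototype for finite state spaces. A sketch, assuming states as hashable objects and preference as a set of strict pairs (all names are mine, not the paper's):

```python
def select(states, pref):
    # The selection function: keep the states that no other state in
    # the given set is strictly preferred to.
    return {s for s in states
            if not any((t, s) in pref for t in states if t != s)}

def nm_consequence(gamma_states, pref, alpha):
    # alpha is a nonmonotonic consequence of the premise set iff it
    # holds in all the most preferred premise states.
    return all(alpha(s) for s in select(gamma_states, pref))
```

Nonmonotonicity shows up because shrinking or enlarging the premise states can change which states are most preferred, and hence which conclusions follow.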
Clinicians, researchers and the informed public have come to view addiction as a brain disease. However, in nature even extreme events often reflect normal processes; for instance, the principles of plate tectonics explain earthquakes as well as the gradual changes in the face of the earth. In the same way, excessive drug use is predicted by general principles of choice. One of the implications of this result is that drugs do not turn addicts into compulsive drug users; they retain the capacity to say 'no'. In support of the logical implications of the choice theory approach to addiction, research reveals that most addicts quit using drugs by about age 30, that most quit without professional help, that the correlates of quitting are the correlates of decision making, and, according to the most recent epidemiological evidence, the probability of quitting remains constant over time and independent of the onset of dependence. This last result implies that, after an initial period of heavy drug use, remission is independent of any further exposure to drugs. In short, there is much empirical support for the claim that addiction emerges as a function of the rules of everyday choice.
The word 'and' can be used both intersectively, as in 'John lies and cheats', and collectively, as in 'John and Mary met'. Research has tried to determine which one of these two meanings is basic. Focusing on coordination of nouns ('liar and cheat'), this article argues that the basic meaning of 'and' is intersective. This theory has been successfully applied to coordination of other kinds of constituents (Partee & Rooth 1983; Winter 2001). Certain cases of noun coordination ('men and women') challenge this view, and have therefore been argued to favor the collective theory (Heycock & Zamparelli 2005). The main result of this article is that the intersective theory actually predicts the collective behavior of 'and' in 'men and women'. 'And' leads to collectivity by interacting with silent operators involving set minimization and choice functions, which have been postulated to account for phenomena involving indefinites, collective predicates and coordinations of noun phrases (Winter 2001). This article also shows that the collective theory does not generalize to coordinations of noun phrases in the way it has been previously suggested.
Unlike other kinds of theories of justice, reparatory justice can only be negatively defined, in non-ideal contexts in which initial wrongs had already been committed. For one, what counts and what does not count as wrongdoing, or as an unjust state of affairs resulting from that wrongdoing, depends on the normative framework upon which a theorist relies. Furthermore, the measures undertaken for alleviating historical injustices can be assessed only from the vantage point of other, independent normative considerations. In the present paper I argue that this lack of substance is a feature that, far from being problematic, is what makes reparatory justice attractive. The specific example that I put forward is that of a reparatory justice account which seeks to instantiate the desiderata of a sufficientarian theory of justice. At first, distributive justice fills the content of reparatory justice, specifying up to what level reparations in-kind or compensatory measures should go. Afterwards, reparatory justice clarifies and provides epistemic inputs for distributive justice. Reparatory justice thus becomes an epistemic source for distributive justice, in that it provides the means for assessing whether someone’s level of well-being can be traced to her choice or to a wider, historically-sensitive operationalization of her “circumstances”.
A choice function C is rational iff: if it allows a path through a sequence of decisions with a particular outcome, then that outcome is amongst the ones that C would have chosen from amongst all the possible outcomes of the sequence. This implies, and it is the strongest definition that implies, that anyone who is irrational could be talked out of their own preferences. It also implies weak but non-vacuous constraints on choices over ends. These do not include alpha or beta.
The relations between rationality and optimization have been widely discussed in the wake of Herbert Simon's work, with the common conclusion that the rationality concept does not imply the optimization principle. The paper is partly concerned with adding evidence for this view, but its main, more challenging objective is to question the converse implication from optimization to rationality, which is accepted even by bounded rationality theorists. We discuss three topics in succession: (1) rationally defensible cyclical choices, (2) the revealed preference theory of optimization, and (3) the infinite regress of optimization. We conclude that (1) and (2) provide evidence only for the weak thesis that rationality does not imply optimization. But (3) is seen to deliver a significant argument for the strong thesis that optimization does not imply rationality.
This chapter of the Handbook of Utility Theory aims at covering the connections between utility theory and social ethics. The chapter first discusses the philosophical interpretations of utility functions, then explains how social choice theory uses them to represent interpersonal comparisons of welfare in either utilitarian or non-utilitarian representations of social preferences. The chapter also contains an extensive account of John Harsanyi's formal reconstruction of utilitarianism and its developments in the later literature, especially when society faces uncertainty rather than probabilistic risk.
It is well known that classical, aka ‘sharp’, Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called ‘imprecise’ models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate---the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a "typical" choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic/thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
Georg Cantor's absolute infinity, the paradoxical Burali-Forti class Ω of all ordinals, is a monstrous non-entity for which being called a "class" is an undeserved dignity. This must be the ultimate vexation for mathematical philosophers who hold on to some residual sense of realism in set theory. By careful use of Ω, we can rescue Georg Cantor's 1899 "proof" sketch of the Well-Ordering Theorem––being generous, considering his declining health. We take the contrapositive of Cantor's suggestion and add Zermelo's choice function. This results in a concise and uncomplicated proof of the Well-Ordering Theorem.
We argue that a semantics for counterfactual conditionals in terms of comparative overall similarity faces a formal limitation due to Arrow’s impossibility theorem from social choice theory. According to Lewis’s account, the truth-conditions for counterfactual conditionals are given in terms of the comparative overall similarity between possible worlds, which is in turn determined by various aspects of similarity between possible worlds. We argue that a function from aspects of similarity to overall similarity should satisfy certain plausible constraints while Arrow’s impossibility theorem rules out that such a function satisfies all the constraints simultaneously. We argue that a way out of this impasse is to represent aspectual similarity in terms of ranking functions instead of representing it in a purely ordinal fashion. Further, we argue against the claim that the determination of overall similarity by aspects of similarity faces a difficulty in addition to the Arrovian limitation, namely the incommensurability of different aspects of similarity. The phenomena that have been cited as evidence for such incommensurability are best explained by ordinary vagueness.
Discusses the question of the objectivity or subjectivity of moral judgments, hoping to illuminate it by contrasting moral and aesthetic judgments. In her critical assessment of the nature of moral judgments, Foot concludes that some such judgments (as e.g. that Nazism was evil) are definitely objective. The concept of morality here supplies criteria independent of local standards, which function as fixed starting points in arguments across local boundaries, whereas, by contrast, aesthetic truths can ultimately depend on locally determined criteria. More problematic is the apparently different relation of moral and of aesthetic judgments to rational choice. Individuals may have no reason to choose what is beautiful, but we think that they must always have reason to choose what is morally right, which raises one of the most difficult problems in moral philosophy.
Choice often proceeds in two stages: We construct a shortlist on the basis of limited and uncertain information about the options and then reduce this uncertainty by examining the shortlist in greater detail. The goal is to do well when making a final choice from the option set. I argue that we cannot realise this goal by constructing a ranking over the options at shortlisting stage which determines of each option whether it is more or less worthy of being included in a shortlist. This is relevant to the 2010 UK Equality Act. The Act requires that shortlists be constructed on grounds of candidate rankings and affirmative action is only permissible for equally qualified candidates. This is misguided: Shortlisting candidates with lower expected qualifications but higher variance may raise the chance of finding an exceptionally strong candidate. If it does, then shortlisting such candidates would make eminent business sense and there is nothing unfair about it. This observation opens up room for including more underrepresented candidates with protected characteristics, as they are more likely to display greater variance in the selector’s credence functions at shortlisting stage.
Moss (2018) argues that rational agents are best thought of not as having degrees of belief in various propositions but as having beliefs in probabilistic contents, or probabilistic beliefs. Probabilistic contents are sets of probability functions. Probabilistic belief states, in turn, are modeled by sets of probabilistic contents, or sets of sets of probability functions. We argue that this Mossean framework is of considerable interest quite independently of its role in Moss’ account of probabilistic knowledge or her semantics for epistemic modals and probability operators. It is an extremely general model of uncertainty. Indeed, it is at least as general and expressively powerful as every other current imprecise probability framework, including lower probabilities, lower previsions, sets of probabilities, sets of desirable gambles, and choice functions. In addition, we partially answer an important question that Moss leaves open, viz., why should rational agents have consistent probabilistic beliefs? We show that an important subclass of Mossean believers avoid Dutch bookability iff they have consistent probabilistic beliefs.
The paper discusses the sense in which the changes undergone by normative economics in the twentieth century can be said to be progressive. A simple criterion is proposed to decide whether a sequence of normative theories is progressive. This criterion is put to use on the historical transition from the new welfare economics to social choice theory. The paper reconstructs this classic case, and eventually concludes that the latter theory was progressive compared with the former. It also briefly comments on the recent developments in normative economics and their connection with the previous two stages. (Published Online April 18 2006) Footnote 1: This paper supersedes an earlier one entitled “Is There Progress in Normative Economics?” (Mongin 2002). I thank the organizers of the Fourth ESHET Conference (Graz 2000) for the opportunity they gave me to lecture on this topic. Thanks are also due to J. Alexander, K. Arrow, A. Bird, R. Bradley, M. Dascal, W. Gaertner, N. Gravel, D. Hausman, B. Hill, C. Howson, N. McClennen, A. Trannoy, J. Weymark, J. Worrall, two anonymous referees of this journal, and especially the editor M. Fleurbaey, for helpful comments. The editor's suggestions contributed to determine the final orientation of the paper. The author is grateful to the LSE and the Lachmann Foundation for their support at the time when he was writing the initial version.
Suppose that it is rational to choose or intend a course of action if and only if the course of action maximizes some sort of expectation of some sort of value. What sort of value should this definition appeal to? According to an influential neo-Humean view, the answer is “Utility”, where utility is defined as a measure of subjective preference. According to a rival neo-Aristotelian view, the answer is “Choiceworthiness”, where choiceworthiness is an irreducibly normative notion of a course of action that is good in a certain way. The neo-Humean view requires preferences to be measurable by means of a utility function. Various interpretations of what exactly a “preference” is are explored, to see if there is any interpretation that supports the claim that a rational agent’s “preferences” must satisfy the “axioms” that are necessary for them to be measurable in this way. It is argued that the only interpretation that supports the idea that the rational agent’s preferences must meet these axioms interprets “preferences” as a kind of value-judgment. But this turns out to be a version of the neo-Aristotelian view, rather than the neo-Humean view. Rational intentions maximize expected choiceworthiness, not expected utility.
While a large social-choice-theoretic literature discusses the aggregation of individual judgments into collective ones, there is much less formal work on the transformation of judgments in group communication. I develop a model of judgment transformation and prove a baseline impossibility theorem: Any judgment transformation function satisfying some initially plausible conditions is the identity function, under which no opinion change occurs. I identify escape routes from this impossibility and argue that the kind of group communication envisaged by deliberative democrats must be "holistic": It must focus on webs of connected propositions, not on one proposition at a time, which echoes the Duhem-Quine "holism thesis" on scientific theory testing. My approach provides a map of the logical space in which different possible group communication processes are located.
Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general.
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue that the best solution to this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
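The three pooling rules named in this abstract can be sketched concretely. The following is a minimal illustration, not the authors' formalism: equal default weights and the renormalization over an event and its complement are my assumptions, made so that each rule returns a probability.

```python
import math

def linear_pool(probs, weights=None):
    # Linear pooling: weighted arithmetic average of the experts' probabilities.
    w = weights or [1 / len(probs)] * len(probs)
    return sum(wi * pi for wi, pi in zip(w, probs))

def geometric_pool(probs, probs_complement, weights=None):
    # Geometric pooling: weighted geometric average, renormalized over the
    # event and its complement so the result is again a probability.
    w = weights or [1 / len(probs)] * len(probs)
    num = math.prod(pi ** wi for wi, pi in zip(w, probs))
    den = math.prod(pi ** wi for wi, pi in zip(w, probs_complement))
    return num / (num + den)

def multiplicative_pool(probs, probs_complement):
    # Multiplicative pooling: probabilities are multiplied rather than
    # averaged, then renormalized.
    num = math.prod(probs)
    den = math.prod(probs_complement)
    return num / (num + den)
```

For two experts assigning 0.6 and 0.8 to an event, linear pooling gives 0.7, while the other two rules pull the collective assignment toward agreement or disagreement depending on the spread of opinions, which is one way to see why their epistemic rationales differ.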
The problem of indeterminism in quantum mechanics, usually considered as a generalization of the determinism of classical mechanics and physics to the case of discrete (quantum) changes, is interpreted as a purely mathematical problem concerning the relation of a set of independent choices to a well-ordered series, and therefore governed by the equivalence of the axiom of choice and the well-ordering “theorem”. The former corresponds to quantum indeterminism, and the latter to classical determinism. No premises besides this purely mathematical equivalence are necessary to explain how the probabilistic causation of quantum mechanics relates to the unambiguous determinism of classical physics. The same equivalence underlies the mathematical formalism of quantum mechanics: it merged the well-ordered components of the vectors of Heisenberg’s matrix mechanics with the non-ordered members of the wave functions of Schrödinger’s undulatory mechanics. The mathematical condition of that merging is precisely the equivalence of the axiom of choice and the well-ordering theorem, implying in turn Max Born’s probabilistic interpretation of quantum mechanics. In particular, energy conservation is justified differently than in classical physics: it is due to the equivalence at issue rather than to the principle of least action. One may distinguish two forms of energy conservation, corresponding to the smooth changes of classical physics and the discrete changes of quantum mechanics. Both kinds of changes can then be equated under a unified energy conservation, and the conditions for its violation investigated, leading to a certain generalization of energy conservation.
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy.
According to the reading offered here, Descartes' use of the meditative mode of writing was not a mere rhetorical device to win an audience accustomed to the spiritual retreat. His choice of the literary form of the spiritual exercise was consonant with, if not determined by, his theory of the mind and of the basis of human knowledge. Since Descartes' conception of knowledge implied the priority of the intellect over the senses, and indeed the priority of an intellect operating independently of the senses, and since, in Descartes' view, the untutored individual was likely to be nearly wholly immersed in the senses, a procedure was needed for freeing the intellect from sensory domination so that the truth might be seen. Hence, the cognitive exercises of the Meditations, modeled not on the sense- and imagination-based exercises of Ignatius of Loyola, but on the Augustinian procedure of turning away from the senses and imagination to perceive the unpicturable with the fleshless eye of the mind. In accordance with this reading, the function of Descartes' skeptical arguments is not to introduce skepticism so that it can be defeated but to aid the meditator in withdrawing the mind from the senses in order to attend to truths of the pure intellect. These truths then offer the basis for a new natural philosophy, including a new theory of the senses.
Whereas many others have scrutinized the Allais paradox from a theoretical angle, we study the paradox from an historical perspective and link our findings to a suggestion as to how decision theory could make use of it today. We emphasize that Allais proposed the paradox as a normative argument, concerned with ‘the rational man’ and not the ‘real man’, to use his words. Moreover, and more subtly, we argue that Allais had an unusual sense of the normative, being concerned not so much with the rationality of choices as with the rationality of the agent as a person. These two claims are buttressed by a detailed investigation – the first of its kind – of the 1952 Paris conference on risk, which set the context for the invention of the paradox, and a detailed reconstruction – also the first of its kind – of Allais’s specific normative argument from his numerous but allusive writings. The paper contrasts these interpretations of what the paradox historically represented, with how it generally came to function within decision theory from the late 1970s onwards: that is, as an empirical refutation of the expected utility hypothesis, and more specifically of the condition of von Neumann–Morgenstern independence that underlies that hypothesis. While not denying that this use of the paradox was fruitful in many ways, we propose another use that turns out also to be compatible with an experimental perspective. Following Allais’s hints on ‘the experimental definition of rationality’, this new use consists in letting the experiment itself speak of the rationality or otherwise of the subjects. In the 1970s, a short sequence of papers inspired by Allais implemented original ways of eliciting the reasons guiding the subjects’ choices, and claimed to be able to draw relevant normative consequences from this information. We end by reviewing this forgotten experimental avenue not simply historically, but with a view to recommending it for possible use by decision theorists today.
Deontological theories face difficulties in accounting for situations involving risk; the most natural ways of extending deontological principles to such situations have unpalatable consequences. In extending ethical principles to decision under risk, theorists often assume the risk must be incorporated into the theory by means of a function from the product of probability assignments to certain values. Deontologists should reject this assumption; essentially different actions are available to the agent when she cannot know that a certain act is in her power, so we cannot simply understand her choice situation as a “risk-weighted” version of choice under certainty.
It would be good to have a Bayesian decision theory that assesses our decisions and thinking according to everyday standards of rationality: standards that do not require logical omniscience (Garber 1983, Hacking 1967). To that end we develop a “fragmented” decision theory in which a single state of mind is represented by a family of credence functions, each associated with a distinct choice condition (Lewis 1982, Stalnaker 1984). The theory imposes a local coherence assumption guaranteeing that as an agent's attention shifts, successive batches of "obvious" logical information become available to her. A rule of expected utility maximization can then be applied to the decision of what to attend to next during a train of thought. On the resulting theory, rationality requires ordinary agents to be logically competent and to often engage in trains of thought that increase the unification of their states of mind. But rationality does not require ordinary agents to be logically omniscient.
This paper contends that Stoic logic (i.e. Stoic analysis) deserves more attention from contemporary logicians. It sets out how, compared with contemporary propositional calculi, Stoic analysis is closest to methods of backward proof search for Gentzen-inspired substructural sequent logics, as they have been developed in logic programming and structural proof theory, and produces its proof search calculus in tree form. It shows how multiple similarities to Gentzen sequent systems combine with intriguing dissimilarities that may enrich contemporary discussion. Much of Stoic logic appears surprisingly modern: a recursively formulated syntax with some truth-functional propositional operators; analogues to cut rules, axiom schemata and Gentzen’s negation-introduction rules; an implicit variable-sharing principle and deliberate rejection of Thinning and avoidance of paradoxes of implication. These latter features mark the system out as a relevance logic, where the absence of duals for its left and right introduction rules puts it in the vicinity of McCall’s connexive logic. Methodologically, the choice of meticulously formulated meta-logical rules in lieu of axiom and inference schemata absorbs some structural rules and results in an economical, precise and elegant system that values decidability over completeness.
In judgment aggregation, unlike preference aggregation, not much is known about domain restrictions that guarantee consistent majority outcomes. We introduce several conditions on individual judgments sufficient for consistent majority judgments. Some are based on global orders of propositions or individuals, others on local orders, still others not on orders at all. Some generalize classic social-choice-theoretic domain conditions, others have no counterpart. Our most general condition generalizes Sen’s triplewise value-restriction, itself the most general classic condition. We also prove a new characterization theorem: for a large class of domains, if there exists any aggregation function satisfying some democratic conditions, then majority voting is the unique such function. Taken together, our results provide new support for the robustness of majority rule.
Critics of luck egalitarianism have claimed that, far from providing a justification for the public insurance functions of a welfare state as its proponents claim, the view objectionably abandons those who are deemed responsible for their dire straits. This article considers seven arguments that can be made in response to this ‘abandonment objection’. Four of these arguments are found wanting, with a recurrent problem being their reliance on a dubious sufficientarian or quasi-sufficientarian commitment to provide a threshold of goods unconditionally. Three arguments succeed, showing that luck egalitarians have good reasons for assisting ‘negligent victims’ on account of changes that may occur in an individual between the time of their choice and their subsequent disadvantage, bad option luck, and doubts about free will and responsibility. Luck egalitarianism is therefore shown to offer strong support for public insurance.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
This paper examines some of the methods animals and humans have of adapting their environment. Because there are limits on how many different tasks a creature can be designed to do well in, creatures with the capacity to redesign their environments have an adaptive advantage over those who can only passively adapt to existing environmental structures. To clarify environmental redesign I rely on the formal notion of a task environment as a directed graph where the nodes are states and the links are actions. One natural form of redesign is to change the topology of this graph structure so as to increase the likelihood of task success or to reduce its expected cost, measured in physical terms. This may be done by eliminating initial states hence eliminating choice points; by changing the action repertoire; by changing the consequence function; and lastly, by adding choice points. Another major method for adapting the environment is to change its cognitive congeniality. Such changes leave the state space formally intact but reduce the number and cost of mental operations needed for task success; they reliably increase the speed, accuracy or robustness of performance. The last section of the paper describes several of these epistemic or complementary actions found in human performance.
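The task-environment formalism this abstract relies on can be made concrete. The sketch below is my own minimal rendering, not the paper's: states are nodes, actions are labelled edges with a physical cost (the cost measure and the specific redesign operation shown, removing a state to eliminate a choice point, are illustrative assumptions).

```python
import heapq
from collections import defaultdict

class TaskEnvironment:
    """A task environment as a directed graph: states are nodes,
    actions are edges. Redesign changes the topology."""

    def __init__(self):
        # adjacency: state -> list of (action, next_state, cost)
        self.edges = defaultdict(list)

    def add_action(self, state, action, next_state, cost=1):
        self.edges[state].append((action, next_state, cost))

    def remove_state(self, state):
        # One redesign operation: eliminating a state removes a choice
        # point, along with every action leading into it.
        self.edges.pop(state, None)
        for s in self.edges:
            self.edges[s] = [e for e in self.edges[s] if e[1] != state]

    def min_cost(self, start, goal):
        # Cheapest physical cost of task success (Dijkstra's algorithm).
        frontier, seen = [(0, start)], set()
        while frontier:
            cost, s = heapq.heappop(frontier)
            if s == goal:
                return cost
            if s in seen:
                continue
            seen.add(s)
            for _, nxt, c in self.edges[s]:
                heapq.heappush(frontier, (cost + c, nxt))
        return None  # goal unreachable
```

On this rendering, "changing the topology to reduce expected cost" is literally an edit to the edge set, after which the same cost query returns a different answer.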
Contemporary Humeans treat laws of nature as statements of exceptionless regularities that function as the axioms of the best deductive system. Such ‘Best System Accounts’ marry realism about laws with a denial of necessary connections among events. I argue that Hume’s predecessor, George Berkeley, offers a more sophisticated conception of laws, equally consistent with the absence of powers or necessary connections among events in the natural world. On this view, laws are not statements of regularities but the most general rules God follows in producing the world. Pace most commentators, I argue that Berkeley’s view is neither instrumentalist nor reductionist. More important, the Berkeleyan Best System can solve some of the problems afflicting its Humean rivals, including the problems of theory choice and Nancy Cartwright’s ‘facticity’ dilemma. Some of these solutions are available in the contemporary context, without any appeal to God. Berkeley’s account deserves to be taken seriously in its own right.
In economics, thought experiments are frequently justified by the difficulty of conducting controlled experiments. They serve several functions, such as establishing causal facts, isolating tendencies, and allowing inferences from models to reality. In this paper, I argue that thought experiments served a further function in economics: facilitating the quantitative definition and measurement of the theoretical concept of utility, thereby bridging the gap between theory and statistical data. I support my argument with a case study, the “hypothetical experiments” of the Norwegian economist Ragnar Frisch (1895-1973). Frisch aimed to eliminate introspection and a subjective concept of utility from economic reasoning. At the same time, he sought behavioral foundations for economic theory that enabled quantitative reasoning. By using thought experiments to justify his set of choice axioms and to facilitate the operationalization of utility, Frisch circumvented the problem of observing utility via actual experiments without eliminating the concept of utility from economic theory altogether. As such, these experiments helped Frisch to empirically support the theory’s most important results, such as the laws of demand and supply, without the input of new empirical findings. I suggest that Frisch’s experiments fulfill the main characteristics of thought experiments.
According to standard comparativist views, death is bad insofar as it deprives someone of goods she would otherwise have had. In The Ethics of Killing, Jeff McMahan argues against such views and in favor of a gradualist account according to which how bad it is to die is a function of both the future goods of which the decedent is deprived and her cognitive development when she dies. Comparativists and gradualists therefore disagree about how bad it is to die at different ages. In this paper I examine two prominent criticisms of gradualism and show that both misconstrue McMahan. I develop a related criticism that seems to show that a gradualist cannot coherently relate morbidity and mortality. This criticism also fails, but has an instructive implication for how policy-makers setting priorities for health care investments should regard choices between life-saving interventions and interventions against non-fatal diseases in the very young.
In recent years, academics and educators have begun to use software mapping tools for a number of education-related purposes. Typically, the tools are used to help impart critical and analytical skills to students, to enable students to see relationships between concepts, and also as a method of assessment. The common feature of all these tools is the use of diagrammatic relationships of various kinds in preference to written or verbal descriptions. Pictures and structured diagrams are thought to be more comprehensible than just words, and a clearer way to illustrate understanding of complex topics. Variants of these tools are available under different names: “concept mapping”, “mind mapping” and “argument mapping”. Sometimes these terms are used synonymously. However, as this paper will demonstrate, there are clear differences in each of these mapping tools. This paper offers an outline of the various types of tool available and their advantages and disadvantages. It argues that the choice of mapping tool largely depends on the purpose or aim for which the tool is used and that the tools may well be converging to offer educators as yet unrealised and potentially complementary functions.