The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
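As a rough illustration of the expected-utility maximisation the abstract describes (not taken from the paper itself; the states, credences, and utilities below are invented for the example), an agent ranks acts by their credence-weighted utilities:

```python
# Toy sketch of expected-utility maximisation. All numbers are invented
# for illustration; an act maps each state to an outcome.

def expected_utility(act, credence, utility):
    """EU(act) = sum over states s of credence(s) * utility(outcome of act in s)."""
    return sum(credence[s] * utility[act[s]] for s in act)

credence = {"rain": 0.3, "shine": 0.7}          # the agent's degrees of belief
utility = {"wet": -10, "dry": 5, "sunburn": 2}  # the agent's utilities

umbrella = {"rain": "dry", "shine": "dry"}
no_umbrella = {"rain": "wet", "shine": "sunburn"}

eu_umbrella = expected_utility(umbrella, credence, utility)        # 5.0
eu_no_umbrella = expected_utility(no_umbrella, credence, utility)  # -1.6
best = max([("umbrella", eu_umbrella), ("no umbrella", eu_no_umbrella)],
           key=lambda pair: pair[1])
```

A representation theorem runs this picture in reverse: from suitably well-behaved preferences over such acts, it recovers the credence and utility functions that rationalise them.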
In this work we consider the problem of the approximate hedging of a contingent claim under the minimum mean square deviation criterion. A theorem on martingale representation in the case of discrete time and an application of the result to a semi-continuous market model are also given.
Multialgebras (or hyperalgebras or non-deterministic algebras) have been much studied in mathematics and in computer science. In 2016 Carnielli and Coniglio introduced a class of multialgebras called swap structures, as a semantic framework for dealing with several Logics of Formal Inconsistency (or LFIs) that cannot be semantically characterized by a single finite matrix. In particular, these LFIs are not algebraizable by the standard tools of abstract algebraic logic. In this paper, the first steps towards a theory of non-deterministic algebraization of logics by swap structures are given. Specifically, a formal study of swap structures for LFIs is developed, by adapting concepts of universal algebra to multialgebras in a suitable way. A decomposition theorem similar to Birkhoff's representation theorem is obtained for each class of swap structures. Moreover, when applied to the 3-valued algebraizable logics J3 and Ciore, their classes of algebraic models are retrieved, and the swap structures semantics become twist structures semantics (as independently introduced by M. Fidel and D. Vakarelov). This fact, together with the existence of a functor from the category of Boolean algebras to the category of swap structures for each LFI (which is closely connected with Kalman's functor), suggests that swap structures can be seen as non-deterministic twist structures. This opens new avenues for dealing with non-algebraizable logics by the more general methodology of multialgebraic semantics.
Non-commuting quantities and hidden parameters – Wave-corpuscular dualism and hidden parameters – Local or nonlocal hidden parameters – Phase space in quantum mechanics – Weyl, Wigner, and Moyal – Von Neumann’s theorem about the absence of hidden parameters in quantum mechanics and Hermann – Bell’s objection – Quantum-mechanical and mathematical incommensurability – Kochen and Specker’s idea about their equivalence – The notion of partial algebra – Embeddability of a qubit into a bit – Quantum computer is not a Turing machine – Is continuality universal? – Diffeomorphism and velocity – Einstein’s general principle of relativity – “Mach’s principle” – The Skolemian relativity of the discrete and the continuous – The counterexample in § 6 of their paper – About the classical tautology which is untrue being replaced by the statements about commensurable quantum-mechanical quantities – Logical hidden parameters – The undecidability of the hypothesis about hidden parameters – Wigner’s work and Weyl’s previous one – Lie groups, representations, and psi-function – From a qualitative to a quantitative expression of relativity – psi-function, or the discrete by the random – Bartlett’s approach – psi-function as the characteristic function of random quantity – Discrete and/or continual description – Quantity and its “digitalized projection” – The idea of “velocity-probability” – The notion of probability and the light speed postulate – Generalized probability and its physical interpretation – A quantum description of macro-world – The period of the associated de Broglie wave and the length of now – Causality equivalently replaced by chance – The philosophy of quantum information and religion – Einstein’s thesis about “the consubstantiality of inertia and weight” – Again about the interpretation of complex velocity – The speed of time – Newton’s law of inertia and Lagrange’s formulation of mechanics – Force and effect – The theory of tachyons and general relativity – 
Riesz’s representation theorem – The notion of covariant world line – Encoding a world line by psi-function – Spacetime and qubit – psi-function by qubits – About the physical interpretation of both the complex axes of a qubit – The interpretation of the components of self-adjoint operators – The world line of an arbitrary quantity – The invariance of the physical laws towards quantum object and apparatus – Hilbert space and that of Minkowski – The relationship between the coefficients of the psi-function and the qubits – World line = psi-function + self-adjoint operator – Reality and description – Does a “curved” Hilbert space exist? – The axiom of choice, or when is a flattening of Hilbert space possible? – But why not flatten pseudo-Riemannian space as well? – The commutator of conjugate quantities – Relative mass – The strokes of self-movement and its philosophical interpretation – The self-perfection of the universe – The generalization of quantity in quantum physics – An analogy of the Feynman formalism – Feynman and many-world interpretation – The psi-function of various objects – Countable and uncountable basis – Generalized continuum and arithmetization – Field and entanglement – Function as coding – The idea of “curved” Descartes product – The environment of a function – Another view to the notion of velocity-probability – Reality and description – Hilbert space as a model both of object and description – The notion of holistic logic – Physical quantity as the information about it – Cross-temporal correlations – The forecasting of future – Description in separable and inseparable Hilbert space – “Forces” or “miracles” – Velocity or time – The notion of non-finite set – Dasein or Dazeit – The trajectory of the whole – Ontological and onto-theological difference – An analogy of the Feynman and many-world interpretation – psi-function as physical quantity – Things in the world and instances in time – The generation of the physical by the mathematical – The generalized 
notion of observer – Subjective or objective probability – Energy as the change of probability per unit of time – The generalized principle of least action from a new viewpoint – The exception of two dimensions and Fermat’s last theorem.
The primary aim of this paper is the presentation of a foundation for causal decision theory. This is worth doing because causal decision theory (CDT) is philosophically the most adequate rational decision theory now available. I will not defend that claim here by elaborate comparison of the theory with all its competitors, but by providing the foundation. This puts the theory on an equal footing with competitors for which foundations have already been given. It turns out that it will also produce a reply to the most serious objections made so far against CDT and against the particular version of CDT I will defend.
The philosophy of science of Patrick Suppes is centered on two important notions that are part of the title of his recent book (Suppes 2002): Representation and Invariance. Representation is important because when we embrace a theory we implicitly choose a way to represent the phenomenon we are studying. Invariance is important because, since invariants are the only things that are constant in a theory, in a way they give the “objective” meaning of that theory. Every scientific theory gives a representation of a class of structures and studies the invariant properties holding in that class of structures. In Suppes’ view, the best way to define this class of structures is via axiomatization. This is because a class of structures is given by a definition, and this same definition establishes which are the properties that a single structure must possess in order to belong to the class. These properties correspond to the axioms of a logical theory. In Suppes’ view, the best way to characterize a scientific structure is by giving a representation theorem for its models and singling out the invariants in the structure. Thus, we can say that the philosophy of science of Patrick Suppes consists in the application of the axiomatic method to scientific disciplines. What I want to argue in this paper is that this application of the axiomatic method is also at the basis of a new approach that is being increasingly applied to the study of computer science and information systems, namely the approach of formal ontologies. The main task of an ontology is that of making explicit the conceptual structure underlying a certain domain. By “making explicit the conceptual structure” we mean singling out the most basic entities populating the domain and writing axioms expressing the main properties of these primitives and the relations holding among them. 
So, in both cases, the axiomatization is the main tool used to characterize the object of inquiry, whether this object is scientific theories (in Suppes’ approach) or information systems (for formal ontologies). In the following section I will present the view of Patrick Suppes on the philosophy of science and the axiomatic method, in section 3 I will survey the theoretical issues underlying the work that is being done in formal ontologies, and in section 4 I will draw a comparison of these two approaches and explore similarities and differences between them.
The text is a continuation of the article of the same name published in the previous issue of Philosophical Alternatives. The philosophical interpretations of the Kochen–Specker theorem (1967) are considered. Einstein's principle regarding the “consubstantiality of inertia and gravity” (1918) allows of a parallel between descriptions of a physical micro-entity in relation to the macro-apparatus on the one hand, and of physical macro-entities in relation to the astronomical mega-entities on the other. The Bohmian interpretation (1952) of quantum mechanics proposes that all quantum systems be interpreted as dissipative ones and that the theorem be thus understood. The conclusion is that the continual representation of a system, by force or (gravitational) field between parts interacting by means of it, is equivalent to the mutual entanglement of those parts if the representation is discrete. Gravity (force field) and entanglement are two different, correspondingly continual and discrete, images of a single common essence. General relativity can be interpreted as a superluminal generalization of special relativity. The postulate exists of an alleged obligatory difference between a model and reality in science and philosophy. It can also be deduced by interpreting a corollary of the theorem. On the other hand, quantum mechanics, on the basis of this theorem and of Von Neumann's (1932), introduces the option that a model be entirely identified with the modeled reality and, therefore, that absolute reality be recognized: this is a non-standard hypothesis in the epistemology of science. Thus, the true reality begins to be understood mathematically, i.e. in a Pythagorean manner, through its identification with its mathematical model. A few linked problems are highlighted: the role of the axiom of choice for correctly interpreting the theorem; whether the theorem can be considered an axiom; whether the theorem can be considered equivalent to the negation of the axiom.
We generalize and extend the class of Sahlqvist formulae in arbitrary polyadic modal languages, to the class of so-called inductive formulae. To introduce them we use a representation of modal polyadic languages in a combinatorial style and thus, in particular, develop what we believe to be a better syntactic approach to elementary canonical formulae altogether. By generalizing the method of minimal valuations à la Sahlqvist–van Benthem and the topological approach of Sambin and Vaccaro we prove that all inductive formulae are elementary canonical and thus extend Sahlqvist’s theorem over them. In particular, we give a simple example of an inductive formula which is not frame-equivalent to any Sahlqvist formula. Then, after a deeper analysis of the inductive formulae as set-theoretic operators in descriptive and Kripke frames, we establish a somewhat stronger model-theoretic characterization of these formulae in terms of a suitable equivalence to syntactically simpler formulae in the extension of the language with reversive modalities. Lastly, we study and characterize the elementary canonical formulae in reversive languages with nominals, where the relevant notion of persistence is with respect to discrete frames.
In this article, it is argued that the Gibbs-Liouville theorem is a mathematical representation of the statement that closed classical systems evolve deterministically. From the perspective of an observer of the system, whose knowledge about the degrees of freedom of the system is complete, the statement of deterministic evolution is equivalent to the notion that the physical distinctions between the possible states of the system—or, in other words, the information possessed by the observer about the system—are never lost. Thus, it is proposed that the Gibbs-Liouville theorem is a statement about the dynamical evolution of a closed classical system valid in such situations where information about the system is conserved in time. Furthermore, in this article it is shown that the Hamilton equations and the Hamilton principle on phase space follow directly from the differential representation of the Gibbs-Liouville theorem, i.e. from the fact that the divergence of the Hamiltonian phase flow velocity vanishes. Thus, considering that the Lagrangian and Hamiltonian formulations of classical mechanics are related via the Legendre transformation, it follows that these two standard formulations are both logical consequences of the statement of deterministic evolution or, equivalently, of information conservation.
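The divergence-free property of the Hamiltonian phase flow mentioned in the abstract can be checked symbolically. This is a minimal sketch (using SymPy, for an illustrative one-dimensional Hamiltonian of my own choosing, not one from the paper):

```python
# Symbolic check that the Hamiltonian phase-flow velocity is divergence-free,
# for an illustrative Hamiltonian H = p**2/(2m) + V(q) with arbitrary V.
import sympy as sp

q, p, m = sp.symbols("q p m", positive=True)
V = sp.Function("V")(q)          # arbitrary potential energy
H = p**2 / (2 * m) + V

# Hamilton's equations give the phase-flow velocity field (dq/dt, dp/dt):
qdot = sp.diff(H, p)             #  dq/dt =  dH/dp
pdot = -sp.diff(H, q)            #  dp/dt = -dH/dq

# Divergence of the flow on phase space vanishes (mixed partials cancel):
divergence = sp.diff(qdot, q) + sp.diff(pdot, p)
print(sp.simplify(divergence))   # 0
```

The cancellation holds for any smooth Hamiltonian, since the divergence is the difference of the two mixed second partial derivatives of H.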
In the context of EPR-Bohm type experiments and spin detections confined to spacelike hypersurfaces, a local, deterministic and realistic model within a Friedmann-Robertson-Walker spacetime with constant spatial curvature (S^3) is presented that describes simultaneous measurements of the spins of two fermions emerging in a singlet state from the decay of a spinless boson. Exact agreement with the probabilistic predictions of quantum theory is achieved in the model without data rejection, remote contextuality, superdeterminism or backward causation. A singularity-free Clifford-algebraic representation of S^3 with vanishing spatial curvature and non-vanishing torsion is then employed to transform the model into a more elegant form. Several event-by-event numerical simulations of the model are presented, which confirm our analytical results to an accuracy of 4 parts in 10^4. Possible implications of our results for practical applications such as quantum security protocols and quantum computing are briefly discussed.
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers provides a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. To that end, we analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to: a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the subjective probability of that outcome; rather, the weight each outcome gets depends on both its subjective probability and its position in the gamble. Furthermore, the individual's utility function, her subjective probability function, and a function that measures her attitude towards risk can be separately derived from her preferences via a representation theorem. This theorem illuminates the role that each of these entities plays in preferences, and shows how REU theory explicates the components of instrumental rationality.
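A common way to spell out the weighting idea in the abstract is the rank-dependent form: order outcomes from worst to best and weight each utility *increment* by a risk function applied to the probability of doing at least that well. The sketch below is an illustration under that assumption (the gamble and risk functions are invented for the example; r(p) = p recovers plain expected utility):

```python
# Toy sketch of risk-weighted expected utility in rank-dependent form.
# REU = u(x1) + sum_j r(P(doing at least as well as x_j)) * (u(x_j) - u(x_{j-1})),
# with outcomes sorted from worst to best. All numbers are illustrative.

def reu(outcomes, r):
    """outcomes: list of (probability, utility) pairs; r: risk function on [0, 1]."""
    xs = sorted(outcomes, key=lambda pair: pair[1])       # worst to best
    total = xs[0][1]                                      # guaranteed floor utility
    for j in range(1, len(xs)):
        tail_prob = sum(p for p, _ in xs[j:])             # chance of at least x_j
        total += r(tail_prob) * (xs[j][1] - xs[j - 1][1]) # weighted improvement
    return total

coin_flip = [(0.5, 0.0), (0.5, 10.0)]     # utility 0 or 10, even odds
neutral = reu(coin_flip, lambda p: p)     # 5.0 — identical to EU
averse = reu(coin_flip, lambda p: p**2)   # 2.5 — convex r downweights the good tail
```

The same gamble thus gets different values for agents who differ only in their risk function, which is the extra degree of freedom the abstract says EU theory leaves out.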
In this paper the class of Fidel-structures for the paraconsistent logic mbC is studied from the point of view of Model Theory and Category Theory. The basic point is that Fidel-structures for mbC (or mbC-structures) can be seen as first-order structures over the signature of Boolean algebras expanded by two binary predicate symbols N (for negation) and O (for the consistency connective) satisfying certain Horn sentences. This perspective allows us to consider notions and results from Model Theory in order to analyze the class of mbC-structures. Thus, substructures, unions of chains, direct products, direct limits, congruences and quotient structures can be analyzed under this perspective. In particular, a Birkhoff-like representation theorem for mbC-structures as subdirect products in terms of subdirectly irreducible mbC-structures is obtained by adapting a general result for first-order structures due to Caicedo. Moreover, a characterization of all the subdirectly irreducible mbC-structures is also given. An alternative decomposition theorem is obtained by using the notions of weak substructure and weak isomorphism considered by Fidel for Cn-structures.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
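For readers unfamiliar with the paradox the abstract invokes: in the standard St. Petersburg gamble a fair coin is flipped until it lands heads, paying 2^n if the first heads comes on flip n, so each term of the expected-value sum contributes exactly 1 and the partial sums grow without bound. A minimal arithmetic check:

```python
# The St. Petersburg gamble pays 2**n with probability 2**-n (n = 1, 2, ...).
# Each term of the expectation contributes (2**-n) * (2**n) = 1, so the
# partial sums diverge: no real-valued outcome can match the gamble's value.

def partial_expected_value(n_terms):
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_terms + 1))

print(partial_expected_value(10))   # 10.0
print(partial_expected_value(100))  # 100.0
```

This divergence is exactly why such a gamble cannot be assigned "any value that any kind of outcome could possibly have."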
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
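To make the representing space concrete, here is a minimal sketch of lexicographic comparison of utility sequences (finite tuples stand in for the transfinite sequences of the theorem; the example values are invented):

```python
# Lexicographic comparison of utility sequences: earlier coordinates have
# strict priority, and later coordinates only break ties.

def lex_less(u, v):
    """True iff sequence u precedes sequence v lexicographically."""
    for a, b in zip(u, v):
        if a != b:
            return a < b
    return len(u) < len(v)   # a proper prefix precedes its extensions

# A better first coordinate dominates arbitrarily large later differences:
assert lex_less((0, 999), (1, -999))
# Ties on the first coordinate are broken by the second:
assert lex_less((1, 2), (1, 3))
```

The point of such an ordering is that no amount of value at a lower-priority coordinate can compensate for a deficit at a higher-priority one, which is what lets the representation dispense with Continuity.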
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that these two amendments cannot both be satisfied simultaneously. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption—called strict immodesty—should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
According to the priority view, or prioritarianism, it matters more to benefit people the worse off they are. But how exactly should the priority view be defined? This article argues for a highly general characterization which essentially involves risk, but makes no use of evaluative measurements or the expected utility axioms. A representation theorem is provided, and when further assumptions are added, common accounts of the priority view are recovered. A defense of the key idea behind the priority view, the priority principle, is provided. But it is argued that the priority view fails on both ethical and conceptual grounds.
We present an elementary system of axioms for the geometry of Minkowski spacetime. It strikes a balance between a simple and streamlined set of axioms and the attempt to give a direct formalization in first-order logic of the standard account of Minkowski spacetime in [Maudlin 2012] and [Malament, unpublished]. It is intended for future use in the formalization of physical theories in Minkowski spacetime. The choice of primitives is in the spirit of [Tarski 1959]: a predicate of betweenness and a four-place predicate to compare the squares of the relativistic intervals. Minkowski spacetime is described as a four-dimensional ‘vector space’ that can be decomposed everywhere into a spacelike hyperplane - which obeys the Euclidean axioms in [Tarski and Givant, 1999] - and an orthogonal timelike line. The lengths of other ‘vectors’ are calculated according to Pythagoras’ theorem. We conclude with a representation theorem relating models of our system that satisfy second order continuity to the mathematical structure called ‘Minkowski spacetime’ in physics textbooks.
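The four-place interval-comparison primitive can be illustrated concretely in coordinates (a sketch of my own, not the paper's formalization; the (+,-,-,-) signature convention and the sample points are assumptions of the example):

```python
# Illustration in R^4 with signature (+,-,-,-): the squared Minkowski interval
# and a four-place comparison predicate over pairs of points.

def interval_sq(a, b):
    """Squared interval: (dt)^2 - (dx)^2 - (dy)^2 - (dz)^2."""
    dt, dx, dy, dz = (b[i] - a[i] for i in range(4))
    return dt**2 - dx**2 - dy**2 - dz**2

def shorter_interval(a, b, c, d):
    """Four-place predicate: the squared interval ab is less than cd."""
    return interval_sq(a, b) < interval_sq(c, d)

origin = (0, 0, 0, 0)
timelike = (2, 1, 0, 0)    # interval_sq = 4 - 1 = 3  (timelike separation)
lightlike = (1, 1, 0, 0)   # interval_sq = 1 - 1 = 0  (lightlike separation)
assert interval_sq(origin, lightlike) == 0
assert shorter_interval(origin, lightlike, origin, timelike)
```

In the axiomatic setting the predicate is primitive and coordinates are what the representation theorem recovers; the computation above only shows the intended model.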
In his classic book “the Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems—without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
In this paper, I introduce an intrinsic account of the quantum state. This account contains three desirable features that the standard platonistic account lacks: (1) it does not refer to any abstract mathematical objects such as complex numbers, (2) it is independent of the usual arbitrary conventions in the wave function representation, and (3) it explains why the quantum state has its amplitude and phase degrees of freedom. Consequently, this account extends Hartry Field’s program outlined in Science Without Numbers (1980), responds to David Malament’s long-standing impossibility conjecture (1982), and establishes an important first step towards a genuinely intrinsic and nominalistic account of quantum mechanics. I will also compare the present account to Mark Balaguer’s (1996) nominalization of quantum mechanics and discuss how it might bear on the debate about “wave function realism.” In closing, I will suggest some possible ways to extend this account to accommodate spinorial degrees of freedom and a variable number of particles (e.g. for particle creation and annihilation). Along the way, I axiomatize the quantum phase structure as what I shall call a “periodic difference structure” and prove a representation theorem as well as a uniqueness theorem. These formal results could prove fruitful for further investigation into the metaphysics of phase and theoretical structure.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known and highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous—including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
According to standard rational choice theory, as commonly used in political science and economics, an agent's fundamental preferences are exogenously fixed, and any preference change over decision options is due to Bayesian information learning. Although elegant and parsimonious, such a model fails to account for preference change driven by experiences or psychological changes distinct from information learning. We develop a model of non-informational preference change. Alternatives are modelled as points in some multidimensional space, only some of whose dimensions play a role in shaping the agent's preferences. Any change in these "motivationally salient" dimensions can change the agent's preferences. How it does so is described by a new representation theorem. Our model not only captures a wide range of frequently observed phenomena, but also generalizes some standard representations of preferences in political science and economics.
We extend the framework of Inductive Logic to Second Order languages and introduce Wilmers' Principle, a rational principle for probability functions on Second Order languages. We derive a representation theorem for functions satisfying this principle and investigate its relationship to the first-order principles of Regularity and Super Regularity.
The arguments for Bayesianism in the literature fall into three broad categories. There are Dutch Book arguments, both of the traditional pragmatic variety and the modern ‘depragmatised’ form. And there are arguments from the so-called ‘representation theorems’. The arguments have many similarities, for example, they have a common conclusion, and they all derive epistemic constraints from considerations about coherent preferences, but they have enough differences to produce hostilities between their proponents. In a recent paper, Maher (1997) has argued that the pragmatised Dutch Book arguments are unsound and the depragmatised Dutch Book arguments question-begging. He urges we instead use the representation theorem argument as in his (1993). In this paper I argue that Maher’s own argument is question-begging, though in a more subtle and interesting way than that of his Dutch Book wielding opponents.
The received view about emergence and reduction is that they are incompatible categories. I argue in this paper that, contrary to the received view, emergence and reduction can hold together. To support this thesis, I focus attention on dynamical systems and, on the basis of a general representation theorem, I argue that, as far as these systems are concerned, the emulation relationship is sufficient for reduction (intuitively, a dynamical system DS1 emulates a second dynamical system DS2 when DS1 exactly reproduces the whole dynamics of DS2). This representational view of reduction, contrary to the standard deductivist one, is compatible with the existence of structural properties of the reduced system that are not also properties of the reducing one. Therefore, under this view, by no means are reduction and emergence incompatible categories but, rather, complementary ones.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology.
The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker.
Voting Advice Applications (VAAs) are online tools designed to help citizens decide how to vote. They typically offer their users a representation of what is at stake in an election by matching user preferences on issues with those of parties or candidates. While the use of VAAs has boomed in recent years in both established and new democracies, this new phenomenon in the electoral landscape has received little attention from political theorists. The current academic debate is focused on epistemic aspects of the question of how a VAA can adequately represent electoral politics. We argue that conceptual and normative presuppositions at play in the background of the tool are at least as important. Even a well-developed VAA does not simply reflect what is at stake in the election by neutrally passing along information. Rather, it structures political information in a way that is informed by the developers’ presuppositions. Yet, these presuppositions remain hidden if we interpret the tool as a mirror that offers the user a reflection of him/herself situated within the political landscape. VAAs should therefore be understood as electoral dioramas, staged according to a contestable picture of politics.
Two kinds of people might find this useful: first, those interested in the modern debate over ideas and representation who don’t happen to read French, or who do, but would like to have in one place the relevant excerpts, to see whether looking at the originals is worth their time. Second are teachers of modern philosophy. The back-and-forth among these figures makes for a refreshing change from the massive, often self-contained works that characterize much of the rest of such a course. For example, one could easily work in chapter 3 (the high point of the debate, from my point of view) between Descartes and Berkeley.
It is a live possibility that certain of our experiences reliably misrepresent the world around us. I argue that tracking theories of mental representation have difficulty allowing for this possibility, and that this is a major consideration against them.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
In this short survey article, I discuss Bell’s theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
This paper examines visual representation from a distinctive, interdisciplinary perspective that draws on ethics, visual studies and critical race theory. It suggests ways to clarify complex issues of representational ethics in marketing communications and marketing representations, proposing an analysis that makes identity creation central to societal marketing concerns. It analyzes representations of the exotic Other in disparate marketing campaigns, drawing upon tourist promotions, advertisements, and mundane objects in material culture. Moreover, since music is an important force in marketing communication, visual representations in music promotions are also explored as data for inquiry. The paper offers an alternative to phenomenologically based approaches in marketing and consumer research scholarship that use consumer responses to generate data. It contributes additional insight into societal marketing and places global marketing processes within the intersection of ethics, aesthetics and representation.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrovian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
This dissertation argues that mental representation is identical to phenomenal consciousness, and everything else that appears to be both mental and a matter of representation is not genuine mental representation, but either in some way derived from mental representation, or a case of non-mental representation.
Neuropsychological findings used to motivate the "two visual systems" hypothesis have been taken to endanger a pair of widely accepted claims about spatial representation in conscious visual experience. The first is the claim that visual experience represents 3-D space around the perceiver using an egocentric frame of reference. The second is the claim that there is a constitutive link between the spatial contents of visual experience and the perceiver's bodily actions. In this paper, I review and assess three main sources of evidence for the two visual systems hypothesis. I argue that the best interpretation of the evidence is in fact consistent with both claims. I conclude with some brief remarks on the relation between visual consciousness and rational agency.
The computational paradigm, which has dominated psychology and artificial intelligence since the cognitive revolution, has been a source of intense debate. Recently, several cognitive scientists have argued against this paradigm, not by objecting to computation, but rather by objecting to the notion of representation. Our analysis of these objections reveals that it is not the notion of representation per se that is causing the problem, but rather specific properties of representations as they are used in various psychological theories. Our analysis suggests that all theorists accept the idea that cognitive processing involves internal information-carrying states that mediate cognitive processing. These mediating states are a superordinate category of representations. We discuss five properties that can be added to mediating states and examine their importance in various cognitive models. Finally, three methodological lessons are drawn from our analysis and discussion.
In this article we present an advanced version of Dual-PECCS, a cognitively-inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplars-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets coming from the dual process theory of reasoning. On the other hand, from a representational perspective, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a task of conceptual categorization where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial to improve the representational and reasoning conceptual capabilities of standard cognitive artificial systems, and, in addition, that it may be plausibly applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work, in that Dual-PECCS is now integrated and tested into two cognitive architectures, ACT-R and CLARION, implementing different assumptions on the underlying invariant structures governing human cognition. Such integration allowed us to extend our previous evaluation.
The main thesis of this paper is twofold. In the first half of the paper, (§§1-2), I argue that there are two notions of mental representation, which I call objective and subjective. In the second part (§§3-7), I argue that this casts familiar tracking theories of mental representation as incomplete: while it is clear how they might account for objective representation, they at least require supplementation to account for subjective representation.
In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore, and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion, and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling.
Condorcet's famous jury theorem reaches an optimistic conclusion on the correctness of majority decisions, based on two controversial premises about voters: they are competent and vote independently, in a technical sense. I carefully analyse these premises and show that: whether a premise is justified depends on the notion of probability considered; none of the notions renders both premises simultaneously justified. Under the perhaps most interesting notions, the independence assumption should be weakened.
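The quantitative core of the theorem scrutinised above is a binomial tail sum: if n voters (n odd) each vote correctly with probability p and do so independently, then for p > 1/2 the probability that the majority verdict is correct tends to one as n grows. A minimal sketch of that calculation (the function name is my own, chosen for illustration):

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct verdict (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With competence p = 0.6, a single voter is right 60% of the time,
# but the probability of a correct majority rises toward 1 as the jury grows.
for n in (1, 11, 101, 1001):
    print(n, majority_correct_prob(n, 0.6))
```

Note how both controversial premises enter the computation: competence as the fixed p > 1/2, and independence in the use of the binomial distribution at all; weakening either, as the paper argues one must under some notions of probability, breaks the optimistic limit.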
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatical representations and reasoning are also available [34]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [24] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [23] for defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one. In particular we focus on the advantages offered by Conceptual Spaces (w.r.t. symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatical representation and reasoning in CAs.
The neural vehicles of mental representation play an explanatory role in cognitive psychology that their realizers do not. In this paper, I argue that the individuation of realizers as vehicles of representation restricts the sorts of explanations in which they can participate. I illustrate this with reference to Rupert’s (2011) claim that representational vehicles can play an explanatory role in psychology in virtue of their quantity or proportion. I propose that such quantity-based explanatory claims can apply only to realizers and not to vehicles, in virtue of the particular causal role that vehicles play in psychological explanations.
This paper engages critically with anti-representationalist arguments pressed by prominent enactivists and their allies. The arguments in question are meant to show that the “as-such” and “job-description” problems constitute insurmountable challenges to causal-informational theories of mental content. In response to these challenges, a positive account of what makes a physical or computational structure a mental representation is proposed; the positive account is inspired partly by Dretske’s views about content and partly by the role of mental representations in contemporary cognitive scientific modeling.
The objective of this article is twofold. First, a methodological issue is addressed. It is pointed out that even if philosophers of mathematics have recently been more and more concerned with the practice of mathematics, there is still a need for a sharp definition of what the targets of a philosophy of mathematical practice should be. Three possible objects of inquiry are put forward: (1) the collective dimension of the practice of mathematics; (2) the cognitive capacities required of the practitioners; and (3) the specific forms of representation and notation shared and selected by the practitioners. Moreover, it is claimed that a broadening of the notion of ‘permissible action’, as introduced by Larvor (2012) with respect to mathematical arguments, allows for a consideration of all these three elements simultaneously. Second, a case from topology – the proof of Alexander’s theorem – is presented to illustrate a concrete analysis of a mathematical practice and to exemplify the proposed method. It is discussed how attention to the three elements of the practice identified above leads to the emergence of philosophically relevant features in the practice of topology: the need for a revision of the definition of criteria of validity, the interest in tracking the operations that are performed on the notation, and the constant and fruitful back-and-forth from one representation to another in dealing with mathematical content. Finally, some suggestions for further research are given in the conclusions.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is tight, it is obvious that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
The sensorimotor theory of perceptual experience claims that perception is constituted by bodily interaction with the environment, drawing on practical knowledge of the systematic ways that sensory inputs are disposed to change as a result of movement. Despite the theory’s associations with enactivism, it is sometimes claimed that the appeal to ‘knowledge’ means that the theory is committed to giving an essential theoretical role to internal representation, and therefore to a form of orthodox cognitive science. This paper defends the role ascribed to knowledge by the theory, but argues that this knowledge can and should be identified with bodily skill rather than representation. Making the further argument that the notion of ‘representation hunger’ can be replaced with ‘prima facie representation hunger’, it concludes that although the theory could optionally be developed scientifically in part by reference to internal representation, it makes a strong and natural fit with anti-representationalist embodied or enactive cognitive science.
Isabelle Peschard (Philosophy Department, San Francisco State University), "Making sense of modeling: beyond representation", European Journal for Philosophy of Science 1(3): 335-352. DOI: 10.1007/s13194-011-0032-8.