A wide variety of ontologies relevant to the biological and medical domains are available through the OBO Foundry portal, and their number is growing rapidly. Integration of these ontologies, while requiring considerable effort, is extremely desirable. However, heterogeneities in format and style pose serious obstacles to such integration. In particular, inconsistencies in naming conventions can impair the readability and navigability of ontology class hierarchies, and hinder their alignment and integration. While other sources of diversity are tremendously complex and challenging, agreeing on a set of common naming conventions is an achievable goal, particularly if those conventions are based on lessons drawn from pooled practical experience and surveys of community opinion. We summarize a review of existing naming conventions and highlight certain disadvantages with respect to general applicability in the biological domain. We also present the results of a survey carried out to establish which naming conventions are currently employed by OBO Foundry ontologies and to determine what their special requirements regarding the naming of entities might be. Lastly, we propose an initial set of typographic, syntactic and semantic conventions for labelling classes in OBO Foundry ontologies. Adherence to common naming conventions is more than just a matter of aesthetics. Such conventions provide guidance to ontology creators, help developers avoid flaws and inaccuracies when editing, and especially when interlinking, ontologies. Common naming conventions will also assist consumers of ontologies to more readily understand what meanings were intended by the authors of ontologies used in annotating bodies of data.
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource providing details on the people, policies, and issues being addressed in association with OBI.
Throughout the biological and biomedical sciences there is a growing need for prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results; such checklists are beginning to find favor with experimentalists, analysts, publishers and funders alike. These checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such MI checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and establishing where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations); a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
The Pareto principle states that if the members of society express the same preference judgment between two options, this judgment is compelling for society. A building block of normative economics and social choice theory, and often borrowed by contemporary political philosophy, the principle has rarely been subjected to philosophical criticism. The paper objects to it on the ground that it indifferently applies to those cases in which the individuals agree on both their expressed preferences and their reasons for entertaining them, and those cases in which they agree on their expressed preferences, while differing on their reasons. The latter are cases of "spurious unanimity", and it is normatively inappropriate, or so the paper argues, to defend unanimity preservation at the social level for them, so the Pareto principle is formulated much too broadly. The objection seems especially powerful when the principle is applied in an ex ante context of uncertainty, in which individuals can disagree on both their probabilities and utilities, and nonetheless agree on their preferences over prospects.
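A minimal numerical illustration of spurious unanimity under uncertainty may help; the betting example below is the stock case in this literature, with figures chosen only for concreteness (it is not taken from the paper):

```latex
% Two states H, T; two agents with linear utility in money.
% Beliefs (illustrative): P_1(H) = 0.9 and P_2(H) = 0.1.
% The bet b pays agent 1 one unit if H and -1 if T; agent 2 receives the
% opposite payments. The status quo q pays both agents 0.
\begin{align*}
  EU_1(b) &= 0.9\,(+1) + 0.1\,(-1) = 0.8 > 0 = EU_1(q),\\
  EU_2(b) &= 0.1\,(-1) + 0.9\,(+1) = 0.8 > 0 = EU_2(q).
\end{align*}
% Both agents strictly prefer b to q, so the ex ante Pareto principle endorses
% the bet, although the unanimity rests on mutually inconsistent probability
% judgments.
```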
Whereas many others have scrutinized the Allais paradox from a theoretical angle, we study the paradox from an historical perspective and link our findings to a suggestion as to how decision theory could make use of it today. We emphasize that Allais proposed the paradox as a normative argument, concerned with ‘the rational man’ and not the ‘real man’, to use his words. Moreover, and more subtly, we argue that Allais had an unusual sense of the normative, being concerned not so much with the rationality of choices as with the rationality of the agent as a person. These two claims are buttressed by a detailed investigation – the first of its kind – of the 1952 Paris conference on risk, which set the context for the invention of the paradox, and a detailed reconstruction – also the first of its kind – of Allais’s specific normative argument from his numerous but allusive writings. The paper contrasts these interpretations of what the paradox historically represented, with how it generally came to function within decision theory from the late 1970s onwards: that is, as an empirical refutation of the expected utility hypothesis, and more specifically of the condition of von Neumann–Morgenstern independence that underlies that hypothesis. While not denying that this use of the paradox was fruitful in many ways, we propose another use that turns out also to be compatible with an experimental perspective. Following Allais’s hints on ‘the experimental definition of rationality’, this new use consists in letting the experiment itself speak of the rationality or otherwise of the subjects. In the 1970s, a short sequence of papers inspired by Allais implemented original ways of eliciting the reasons guiding the subjects’ choices, and claimed to be able to draw relevant normative consequences from this information. We end by reviewing this forgotten experimental avenue not simply historically, but with a view to recommending it for possible use by decision theorists today.
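For reference, the paradox in its commonly cited form (prizes in millions of currency units; the exact figures vary across presentations), stated as a worked inequality:

```latex
% Pair 1:  A: 1 for sure              B: 1 w.p. 0.89, 5 w.p. 0.10, 0 w.p. 0.01
% Pair 2:  C: 1 w.p. 0.11, 0 w.p. 0.89      D: 5 w.p. 0.10, 0 w.p. 0.90
% The frequently observed pattern A preferred to B together with D preferred
% to C contradicts expected utility, since for any utility function u,
\begin{align*}
  A \succ B &\iff 0.11\,u(1) > 0.10\,u(5) + 0.01\,u(0),\\
  D \succ C &\iff 0.10\,u(5) + 0.01\,u(0) > 0.11\,u(1),
\end{align*}
% and the two inequalities cannot hold together.
```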
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
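For orientation, the property at stake is the textbook product condition (this states only the target property, not the paper's new preference axiom):

```latex
% Stochastic independence of events A and B under a probability measure P:
\[
P(A \cap B) = P(A)\,P(B).
\]
% Savage's postulates yield a (finitely additive) subjective probability that
% represents belief, but nothing in them singles out pairs of events that
% should satisfy this product relation; hence the search for a preference
% condition whose representation delivers it.
```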
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
Judgment aggregation theory, or rather, as we conceive of it here, logical aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely preference judgments. It derives from Kornhauser and Sager’s doctrinal paradox and List and Pettit’s discursive dilemma, two problems that we distinguish emphatically here. The current theory has developed from the discursive dilemma, rather than the doctrinal paradox, and the final objective of the paper is to give the latter its own theoretical development along the line of recent work by Dietrich and Mongin. However, the paper also aims at reviewing logical aggregation theory as such, and it covers impossibility theorems by Dietrich, Dietrich and List, Dokow and Holzman, List and Pettit, Mongin, Nehring and Puppe, Pauly and van Hees, providing a uniform logical framework in which they can be compared with each other. The review goes through three historical stages: the initial paradox and dilemma, the scattered early results on the independence axiom, and the so-called canonical theorem, a collective achievement that provided the theory with its specific method of analysis. The paper goes some way towards philosophical logic, first by briefly connecting the aggregative framework of judgment with the modern philosophy of judgment, and second by thoroughly discussing and axiomatizing the ‘general logic’ built in this framework.
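The doctrinal paradox in its stock three-judge form, recalled here only for orientation: legal doctrine makes the verdict r equivalent to the conjunction of two premisses, p (there was a valid contract) and q (there was a breach).

```latex
\[
\begin{array}{lccc}
                & p          & q          & r \equiv p \wedge q \\ \hline
\text{Judge 1}  & \text{yes} & \text{yes} & \text{yes} \\
\text{Judge 2}  & \text{yes} & \text{no}  & \text{no}  \\
\text{Judge 3}  & \text{no}  & \text{yes} & \text{no}  \\ \hline
\text{Majority} & \text{yes} & \text{yes} & \text{no}
\end{array}
\]
% Premiss-based adjudication (take majorities on p and q, then apply the
% doctrine) yields liability; conclusion-based adjudication (majority on r)
% yields the opposite verdict.
```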
Nudge is a concept of policy intervention that originates in Thaler and Sunstein's (2008) popular eponymous book. Following their own hints, we distinguish three properties of nudge interventions: they redirect individual choices by only slightly altering choice conditions (here nudge 1), they use rationality failures instrumentally (here nudge 2), and they alleviate the unfavourable effects of these failures (here nudge 3). We explore each property in semantic detail and show that no entailment relation holds between them. This calls into question the theoretical unity of nudge, as intended by Thaler and Sunstein and most followers. We eventually recommend pursuing each property separately, both in policy research and at the foundational level. We particularly emphasize the need to reconsider the respective roles of decision theory and behavioural economics in order to delineate nudge 2 correctly. The paper differs from most of the literature in focusing on the definitional rather than the normative problems of nudge.
The paper has a twofold aim. On the one hand, it provides what appears to be the first game-theoretic modeling of Napoleon’s last campaign, which ended dramatically on 18 June 1815 at Waterloo. It is specifically concerned with the decision Napoleon made on 17 June 1815 to detach part of his army against the Prussians he had defeated, though not destroyed, on 16 June at Ligny. Military historians agree that this decision was crucial but disagree about whether it was rational. Hypothesizing a zero-sum game between Napoleon and Blücher, and computing its solution, we show that it could have been a cautious strategy on the former's part to divide his army, a conclusion which runs counter to the charges of misjudgement commonly heard since Clausewitz. On the other hand, the paper addresses methodological issues. We defend its case study against the objections of irrelevance that have been raised elsewhere against “analytic narratives”, and conclude that military campaigns provide an opportunity for successful application of the formal theories of rational choice. Generalizing the argument, we finally investigate the conflict between narrative accounts – the historians' standard mode of expression – and mathematical modeling.
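The formal step relied on here, computing a cautious (maximin) mixed strategy for a zero-sum game, can be sketched generically; the payoff matrix below is hypothetical and purely illustrative, not the one constructed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(payoffs):
    """Row player's optimal mixed strategy and value for a zero-sum game.

    payoffs[i][j] is the row player's payoff when row i meets column j.
    Solves: maximise v subject to sum_i x_i * payoffs[i][j] >= v for all j,
    sum_i x_i = 1, x_i >= 0 (a standard linear-programming formulation).
    """
    A = np.asarray(payoffs, dtype=float)
    m, n = A.shape
    # Decision variables: x_1..x_m and v. Objective: minimise -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # Inequalities: v - sum_i A[i, j] x_i <= 0 for each column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Hypothetical 2x2 matrix (e.g. rows: detach a corps vs. keep the army
# together) -- the numbers are illustrative only.
strategy, value = maximin_strategy([[2, -1], [0, 1]])
print(strategy, value)
```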
Popper's well-known demarcation criterion has often been understood to distinguish statements of empirical science according to their logical form. Implicit in this interpretation of Popper's philosophy is the belief that when the universe of discourse of the empirical scientist is infinite, empirical universal sentences are falsifiable but not verifiable, whereas the converse holds for existential sentences. A remarkable elaboration of this belief is to be found in Watkins's early work on the statements he calls “all-and-some” (AS), such as: “For every metal there is a melting point.” All-and-some statements are both universally and existentially quantified, in that order. Watkins argued that AS statements should be regarded as both nonfalsifiable and nonverifiable, for they partake in the logical fate of both universal and existential statements. This claim is subject to the proviso that the bound variables are “uncircumscribed”, i.e., that the universe of discourse is infinite.
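Watkins's melting-point example can be displayed in its logical form (a standard rendering, assuming an infinite universe of discourse):

```latex
% All-and-some (AS) form: a universal quantifier followed by an existential one.
\[
\forall x\,\bigl(\mathrm{Metal}(x) \rightarrow \exists y\, \mathrm{MeltingPointOf}(y,x)\bigr)
\]
% Over an infinite universe of discourse, no finite set of observations
% verifies the statement (because of the universal quantifier), and none
% falsifies it either, since failing to find a melting point for a given
% metal never establishes that it has none.
```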
In abstract argumentation, each argument is regarded as atomic. There is no internal structure to an argument. Also, there is no specification of what is an argument or an attack. They are assumed to be given. This abstract perspective provides many advantages for studying the nature of argumentation, but it does not cover all our needs for understanding argumentation or for building tools for supporting or undertaking argumentation. If we want a more detailed formalization of arguments than is available with abstract argumentation, we can turn to structured argumentation, which is the topic of this special issue of Argument and Computation. In structured argumentation, we assume a formal language for representing knowledge and specifying how arguments and counterarguments can be constructed from that knowledge. An argument is then said to be structured in the sense that normally, the premises and claim of the argument are made explicit, and the relationship between the premises and claim is formally defined (for instance, using logical entailment). In this introduction, we provide a brief overview of the approaches covered in this special issue on structured argumentation.
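A deliberately schematic sketch of what 'structured' means here, with strings standing in for formulas and simple rebuttal/undermining as the attack relation; it follows none of the particular formalisms covered in the issue and is offered only as a contrast with the abstract setting.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Argument:
    """A structured argument: explicit premises and an explicit claim."""
    premises: frozenset   # formulas, represented here simply as strings
    claim: str

def negation(formula: str) -> str:
    """Toy negation on string formulas: '~p' negates 'p' and vice versa."""
    return formula[1:] if formula.startswith("~") else "~" + formula

def attacks(a: Argument, b: Argument) -> bool:
    """a attacks b if a's claim contradicts b's claim (rebuttal) or one of
    b's premises (undermining)."""
    return negation(a.claim) in ({b.claim} | set(b.premises))

# Tiny example with hypothetical formulas.
a1 = Argument(frozenset({"bird(tweety)", "bird(tweety) -> flies(tweety)"}),
              "flies(tweety)")
a2 = Argument(frozenset({"penguin(tweety)"}), "~flies(tweety)")

for x, y in combinations([a1, a2], 2):
    print(f"{x.claim!r} attacks {y.claim!r}: {attacks(x, y)}")
    print(f"{y.claim!r} attacks {x.claim!r}: {attacks(y, x)}")
```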
The paper analyses economic evaluations by distinguishing evaluative statements from actual value judgments. From this basis, it compares four solutions to the value neutrality problem in economics. After rebutting the strong theses about neutrality (normative economics is illegitimate) and non-neutrality (the social sciences are value-impregnated), the paper settles the case between the weak neutrality thesis (common in welfare economics) and a novel, weak non-neutrality thesis that extends the realm of normative economics more widely than the other weak thesis does.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
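As a rough indication of the kind of numerical representation at stake, stated generically under row-column separability, continuity and monotonicity (the paper's own theorems are sharper and domain-specific):

```latex
% Alternatives are matrices x = (x_{ij}) with rows i and columns j.
% In the fully additive case, separability in rows and columns gives
\[ V(x) = \sum_{i}\sum_{j} u_{ij}(x_{ij}), \]
% and in the uncertainty reading (rows = individuals, columns = states of
% the world) further conditions deliver weighted forms such as
\[ V(x) = \sum_{i}\sum_{j} a_i\, p_j\, u_i(x_{ij}), \]
% with the p_j acting as common probability weights -- the shape that
% underlies the ex ante / ex post comparison and Harsanyi-type aggregation.
```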
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, in the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
Economists are accustomed to distinguishing between a positive and a normative component of their work, a distinction that is peculiar to their field, having no exact counterpart in the other social sciences. The distinction has substantially changed over time, and the different ways of understanding it today are reflective of its history. Our objective is to trace the origins and initial forms of the distinction, from the English classical political economy of the first half of the 19th century to the emergence of welfare economics in the first half of the 20th century. This sequential account will also serve to identify the main representative positions along with the arguments used to support them, and it thus prepares the ground for a discussion that will be less historical and more strictly conceptual.
The relations between rationality and optimization have been widely discussed in the wake of Herbert Simon's work, with the common conclusion that the rationality concept does not imply the optimization principle. The paper is partly concerned with adding evidence for this view, but its main, more challenging objective is to question the converse implication from optimization to rationality, which is accepted even by bounded rationality theorists. We discuss three topics in succession: (1) rationally defensible cyclical choices, (2) the revealed preference theory of optimization, and (3) the infinite regress of optimization. We conclude that (1) and (2) provide evidence only for the weak thesis that rationality does not imply optimization. But (3) is seen to deliver a significant argument for the strong thesis that optimization does not imply rationality.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
English Title: Time and Scansion: The Rhythmical Meaning of Duration between Husserl and Bachelard. Within phenomenological research, the present and the instant stand in a troubled dialectic: for Husserl the present flows, widening out into past and future at once, like the Heraclitean bowstring stretched between two dimensions. Gaston Bachelard, on the contrary, is the thinker of discreteness, for whom the temporal continuum arises from the reciprocal differentiation of instants within duration. The two philosophers' conceptions of time thus seem opposed to one another, yet between these two modalities of scansion runs a steady thread, which underlies both interpretations and makes each precipitate into the other in the name of perceptual content. The two positions take shape in La dialectique de la durée (1936) and in Husserl's writings on attention, concerned with the way one aspect of a thing rises to prominence above the others (1904-1905), collected in Husserliana XXXVIII (Wahrnehmung und Aufmerksamkeit. Texte aus dem Nachlass (1893-1912)).
This chapter briefly reviews the present state of judgment aggregation theory and tentatively suggests a future direction for that theory. In the review, we start by emphasizing the difference between the doctrinal paradox and the discursive dilemma, two idealized examples which classically serve to motivate the theory, and then proceed to reconstruct it as a brand of logical theory, unlike in some other interpretations, using a single impossibility theorem as a key to its technical development. In the prospective part, having mentioned existing applications to social choice theory and computer science, which we do not discuss here, we consider a potential application to law and economics. This would be based on a deeper exploration of the doctrinal paradox and its relevance to the functioning of collegiate courts. On this topic, legal theorists have provided empirical observations and theoretical hints that judgment aggregation theorists would be in a position to clarify and further elaborate. As a general message, the chapter means to suggest that the future of judgment aggregation theory lies with its applications rather than its internal theoretical development.
This article reviews Herbert Simon's theory of bounded rationality, with a view to deemphasizing his "satisficing" model and, by contrast, emphasizing his distinction between "procedural" and "substantive" rationality. The article also discusses a possible move by neo-classical economists to respond to Simon's criticisms, i.e., a reduction of bounded rationality to a special case of second-optimization, using Stigler's search theory. This move is eventually dismissed.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme: normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied with a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation. From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EU theory was resolved in a Duhemian way, but this was not without major inefficiencies. To embody explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
The article discusses Friedman's classic claim that economics can be based on unrealistic assumptions. It exploits Samuelson's distinction between two "F-twists" (that is, "it is an advantage for an economic theory to use unrealistic assumptions" vs "the more unrealistic the assumptions, the better the economic theory"), as well as Nagel's distinction between three philosophy-of-science construals of the basic claim. On examination, only one of Nagel's construals seems promising enough. It involves the neo-positivistic distinction between theoretical and non-theoretical ("observable") terms; so Friedman would in some sense argue for the major role of theoretical terms in economics. The paper uses a model-theoretic apparatus to refine the selected construal and check whether it can be made to support the claim. This inquiry leads to essentially negative results for both F-twists, and the final conclusion is that they are left unsupported.
The paper revisits the rationality principle from the particular perspective of the unity of social sciences. It has been argued that the principle was the unique law of the social sciences and that accordingly there are no deep differences between them (Popper). It has also been argued that the rationality principle was specific to economics as opposed to the other social sciences, especially sociology (Pareto). The paper rejects these opposite views on the grounds that the rationality principle is strictly metaphysical and does not have the logical force required to deliver interesting deductions. Explanation in the social sciences takes place at a level of specialization that is always higher than that of the principle itself. However, what is peculiar about economics is that it specializes the explanatory rational schemes to a degree unparalleled in history and sociology. As a consequence, there is a backward-and-forward move between specific and general formulations of rationality that takes place in economics and has no analogue in the other social sciences.
This article critically discusses the concept of economic rationality, arguing that it is too narrow and specific to encompass the full concept of practical rationality. Economic rationality is identified here with the use of the optimizing model of decision, as well as of the expected utility apparatus to deal with uncertainty. To argue that practical rationality is broader than economic rationality, the article claims that practical rationality includes bounded rationality as a particular case, and that bounded rationality cannot be reduced to economic rationality as defined here.
Two decades ago, Rolf Landauer (1991) argued that “information is physical” and ought to have a role in the scientific analysis of reality comparable to that of matter, energy, space and time. This would also help to bridge the gap between biology and mathematics and physics. Although it can be argued that we are living in the ‘golden age’ of biology, both because of the great challenges posed by medicine and the environment and the significant advances that have been made, especially in genetics and molecular and cell biology, we feel that information as an essential aspect of life has been neglected, or at least misunderstood. We therefore summon Maxwell’s Demon and its distant relative the ratchet, and apply these to biology.
This chapter of the Handbook of Utility Theory aims at covering the connections between utility theory and social ethics. The chapter first discusses the philosophical interpretations of utility functions, then explains how social choice theory uses them to represent interpersonal comparisons of welfare in either utilitarian or non-utilitarian representations of social preferences. The chapter also contains an extensive account of John Harsanyi's formal reconstruction of utilitarianism and its developments in the later literature, especially when society faces uncertainty rather than probabilistic risk.
Taking the philosophical standpoint, this article compares the mathematical theory of individual decision-making with the folk psychology conception of action, desire and belief. It narrows down its topic by carrying out the comparison with Savage's system and its technical concept of subjective probability, which is referred back to the basic model of betting, as in Ramsey. The argument is organized around three philosophical theses: (i) decision theory is nothing but folk psychology stated in formal language (Lewis), (ii) the former substantially improves on the latter, but is unable to overcome its typical limitations, especially its failure to separate desire and belief empirically (Davidson), (iii) the former substantially improves on the latter, and through these innovations, overcomes some of the limitations. The aim of the article is to establish (iii) not only against the all too simple thesis (i), but also against the subtle thesis (ii).
This paper discusses the nature of normative economics by distinguishing the alternative conceptions that economists have entertained in this respect. It attempts to connect these conceptions with their philosophical sources, which essentially consist in variants of positivism and Weber's philosophy of values. The paper defends the claim that positive and normative economics differ from each other to the extent that the former does not, while the latter does, involve value judgments made by the theorist himself. This claim runs counter to the Weberian thesis of value-freedom that is implicitly endorsed by a majority of today's normative economists.
We introduce and study a variety of modal logics of parallelism, orthogonality, and affine geometries, for which we establish several completeness, decidability and complexity results and state a number of related open and apparently difficult problems. We also demonstrate that the lack of the finite model property in modal logics for sufficiently rich affine or projective geometries (including the real affine and projective planes) is a rather common phenomenon.
At this stage of the COVID-19 pandemic, two policy aims are imperative: avoiding the need for a general lockdown of the population, with all its economic, social and health costs, and preventing the healthcare system from being overwhelmed by the unchecked spread of infection. Achieving these two aims requires the consideration of unpalatable measures. Julian Savulescu and James Cameron argue that mandatory isolation of the elderly is justified under these circumstances, as they are at increased risk of becoming severely ill from COVID-19, and are thus likely to put disproportionate strain on limited healthcare resources. However, their arguments for this strategy are contingent on the lack of viable alternatives. We suggest that there is a possible alternative: a mandatory, centralised contact-tracing app. We argue that this strategy is ethically preferable to the selective isolation of the elderly, because it does not target members of a certain group, relying instead on the movements of each individual, and because it avoids the extended isolation of certain members of the society. Although this type of contact-tracing app has its drawbacks, we contend that this measure warrants serious consideration.
This paper is concerned with representations of belief by means of nonadditive probabilities of the Dempster-Shafer (DS) type. After surveying some foundational issues and results in the DS theory, including Suppes's related contributions, the paper proceeds to analyze the connection of the DS theory with some of the work currently pursued in epistemic logic. A preliminary investigation of the modal logic of belief functions à la Shafer is made. There it is shown that the Alchourrón-Gärdenfors-Makinson (AGM) logic of belief change is closely related to the DS theory. The final section compares the critique of Bayesianism which underlies the present paper with some important objections raised by Suppes against this doctrine.
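A small computational sketch of the DS representation itself, using only the textbook definitions of belief and plausibility derived from a mass assignment (nothing here is specific to the paper's modal-logic analysis):

```python
def belief(mass, event):
    """Bel(A): total mass of the non-empty focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal and focal <= event)

def plausibility(mass, event):
    """Pl(A): total mass of the focal sets that intersect A."""
    return sum(m for focal, m in mass.items() if focal & event)

# Frame of discernment and an illustrative mass assignment (masses sum to 1).
frame = frozenset({"a", "b", "c"})
mass = {
    frozenset({"a"}): 0.3,
    frozenset({"a", "b"}): 0.4,   # mass on a set, not split among its elements
    frame: 0.3,                   # residual ignorance
}

A = frozenset({"a", "b"})
print(belief(mass, A), plausibility(mass, A))     # Bel(A) <= Pl(A)
# Unlike an additive probability, Bel(A) + Bel(not-A) can fall short of 1:
print(belief(mass, A) + belief(mass, frame - A))
```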
This is a critical notice/review essay on *L'embryogenèse du monde et le Dieu silencieux*, a manuscript completed by Raymond Ruyer in the early 1980s. It came out as a monograph in November 2013, with the Éditions Klincksieck in Paris. It offers a presentation in an organized fashion of many aspects of his thought. Ruyer considered that a book about God could only be churned into a series of chapters on the unachievable character of our knowledge in different domains of human inquiry. The nature of this final solution on God's relationship to the world and to natural forms is here assessed critically.
The paper discusses the sense in which the changes undergone by normative economics in the twentieth century can be said to be progressive. A simple criterion is proposed to decide whether a sequence of normative theories is progressive. This criterion is put to use on the historical transition from the new welfare economics to social choice theory. The paper reconstructs this classic case, and eventually concludes that the latter theory was progressive compared with the former. It also briefly comments on the recent developments in normative economics and their connection with the previous two stages.
Transhumanism is a means of advocating a re-engineering of conditions that surround human existence at both ends. The problem set before us in this chapter is to inquire into what determined its appearance, in particular in the humanism it seeks to overcome. We look at the spirit of overcoming itself, and the impatience with the Self, in order to try to understand why it seeks a saving power in technology. We then consider how the evolutionary account of the production of organisms does not set them against a perfect standard, but rather injects in them a contingency that seems to be near to the heart of the problem. We then try to assess the objective basis for improvements and manipulation of nature, and although we do not find it forbidden on all occasions, it seems that the criteria for such alterations are impossible to detach from a form of eugenics. We finally open a window toward a theological account of the problem, and find that the desire of autonomy and independence is inevitably going to be challenged by the Christian dogma of creation.
This text reconsiders the philosophizing about the future of mankind and the futurology done by molecular biologist Gunther Stent in *The Coming of the Golden Age*, in the light of Raymond Ruyer's critical notice published in the aftermath of the publication of Stent's book in French translation. For Ruyer, it is an occasion to revisit his own take on what he called in his last work a "theology of the opposition between the organic and the rational," and to restate in a new light his conclusions concerning Cournot's suggestion as to the becoming of social relationships in a context of management of the complexity of association. It is argued here that both Stent and Ruyer share a common thermodynamic, informational, and also surprisingly Nietzschean ascendancy in judging the possible outcomes for the human race.
The term “Complex Systems Biology” was introduced a few years ago [Kaneko, 2006] and, although not yet in widespread use, it seems particularly well suited to indicate an approach to biology which is well rooted in complex systems science. Although broad generalizations are always dangerous, it is safe to state that mainstream biology has been largely dominated by a gene-centric view in the last decades, due to the success of molecular biology. So the one gene - one trait approach, which has often proved to be effective, has been extended to cover even complex traits. This simplifying view has been appropriately criticized, and the movement called systems biology has taken off. Systems biology [Noble, 2006] emphasizes the presence of several feedback loops in biological systems, which severely limit the range of validity of explanations based upon linear causal chains (e.g. gene → behaviour). Mathematical modelling is one of the favourite tools of systems biologists to analyze the possible effects of competing negative and positive feedback loops which can be observed at several levels (from molecules to organelles, cells, tissues, organs, organisms, ecosystems). Systems biology is by now a well-established field, as can be inferred from the rapid growth in the number of conferences and journals devoted to it, as well as from the existence of several grants and funded projects. Systems biology is mainly focused upon the description of specific biological items, like for example specific organisms, or specific organs in a class of animals, or specific genetic-metabolic circuits. It therefore leaves open the issue of the search for general principles of biological organization, which apply to all living beings or at least to broad classes. We know indeed that there are some principles of this kind, biological evolution being the most famous one. The theory of cellular organization also qualifies as a general principle. But the main focus of biological research has been that of studying specific cases, with some reluctance to accept (and perhaps a limited interest in) broad generalizations. This may however change, and this is indeed the challenge of complex systems biology: looking for general principles in biological systems, in the spirit of complex systems science, which searches for similar features and behaviours in various kinds of systems. The hope of finding such general principles appears well founded, and I will show in Section 2 that there are indeed data which provide support to this claim. Besides data, there are also general ideas and models concerning the way in which biological systems work. The strategy, in this case, is that of introducing simplified models of biological organisms or processes, and looking for their generic properties: this term, borrowed from statistical physics, is used for those properties which are shared by a wide class of systems. In order to model these properties, the most effective approach has so far been that of using ensembles of systems, where each member can be different from another one, and looking for those properties which are widespread. This approach was introduced many years ago [Kauffman, 1971] in modelling gene regulatory networks. At that time one had very little information about the way in which the expression of a given gene affects that of other genes, apart from the fact that this influence is real and can be studied in a few selected cases (like e.g. the lactose metabolism in E. coli).
Today, after many years of triumphs of molecular biology, much more has been discovered; however, the possibility of describing a complete map of gene-gene interactions in a moderately complex organism is still out of reach. Therefore the goal of fully describing a network of interacting genes in a real organism could not be (and still cannot be) achieved. But a different approach has proven very fruitful, that of asking what the typical properties of such a set of interacting genes are. Making some plausible hypotheses and introducing some simplifying assumptions, Kauffman was able to address some important problems. In particular, he drew attention to the fact that a dynamical system of interacting genes displays self-organizing properties which explain some key aspects of life, most notably the existence of a limited number of cellular types in every multicellular organism (these numbers are of the order of a few hundreds, while the number of theoretically possible types, absent interactions, would be much, much larger than the number of protons in the universe). In Section 3 I will describe the ensemble-based approach in the context of gene regulatory networks, and I will show that it can describe some important experimental data. Finally, in Section 4 I will discuss some methodological aspects.
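A toy version of the ensemble approach described above, in the spirit of Kauffman's random Boolean networks; the parameters and sample sizes are arbitrary and chosen only to keep the run fast:

```python
import random

def random_boolean_network(n_genes, k_inputs, rng):
    """Each gene gets k random input genes and a random Boolean function,
    encoded as a lookup table over the 2**k input configurations."""
    inputs = [rng.sample(range(n_genes), k_inputs) for _ in range(n_genes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k_inputs)]
              for _ in range(n_genes)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every gene reads its inputs and applies its table."""
    new_state = []
    for ins, table in zip(inputs, tables):
        index = 0
        for i in ins:
            index = (index << 1) | state[i]
        new_state.append(table[index])
    return tuple(new_state)

def attractor_from(state, inputs, tables):
    """Iterate until a state repeats; return the cycle as a frozenset of states."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, inputs, tables)
    return frozenset(trajectory[seen[state]:])

rng = random.Random(0)
N, K, n_starts = 12, 2, 50          # illustrative parameters only
inputs, tables = random_boolean_network(N, K, rng)
attractors = set()
for _ in range(n_starts):
    start = tuple(rng.randint(0, 1) for _ in range(N))
    attractors.add(attractor_from(start, inputs, tables))
print(len(attractors), "distinct attractors found from", n_starts, "random starts")
# Repeating this over an ensemble of randomly generated networks gives the
# distribution of such 'generic' properties (number and length of attractors).
```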
This article discusses the rationality principle, especially in Popper's version, on the occasion of a commentary of Maurice Lagueux's book, Rationality and Explanation in Economics (2010).
In the framework of judgment aggregation, we assume that some formulas of the agenda are singled out as premisses, and that both Independence (formula-wise aggregation) and Unanimity Preservation hold for them. Whether premiss-based aggregation thus defined is compatible with conclusion-based aggregation, as defined by Unanimity Preservation on the non-premisses, depends on how the premisses are logically connected, both among themselves and with other formulas. We state necessary and sufficient conditions under which the combination of both approaches leads to dictatorship (resp. oligarchy), either just on the premisses or on the whole agenda. This framework is inspired by the doctrinal paradox of legal theory and arguably relevant to this field as well as political science and political economy. When the set of premisses coincides with the whole agenda, a limiting case of our assumptions, we obtain several existing results in judgment aggregation theory.
This essay presents and discusses the currently most famous among the deductive conceptions of explanation, i.e., the deductive-nomological one, and proceeds to apply it to microeconomic theory. After restating the basic ideas, the essay investigates some of the important objections raised against it, with a view to deciding whether or not they invalidate the proposed application to economics.
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
Suppose you are presented with three red objects. You are then asked to take a careful look at each possible pair of objects, and to decide whether or not their members look chromatically the same. You carry out the instructions thoroughly, and the following propositions sum up the results of your empirical investigation: i. red object #1 looks the same in colour as red object #2; ii. red object #2 looks the same in colour as red object #3.
The paper surveys the currently available axiomatizations of common belief (CB) and common knowledge (CK) by means of modal propositional logics. (Throughout, knowledge, whether individual or common, is defined as true belief.) Section 1 introduces the formal method of axiomatization followed by epistemic logicians, especially the syntax-semantics distinction, and the notion of a soundness and completeness theorem. Section 2 explains the syntactical concepts, while briefly discussing their motivations. Two standard semantic constructions, Kripke structures and neighbourhood structures, are introduced in Sections 3 and 4, respectively. It is recalled that Aumann's partitional model of CK is a particular case of a definition in terms of Kripke structures. The paper also restates the well-known fact that Kripke structures can be regarded as particular cases of neighbourhood structures. Section 3 reviews the soundness and completeness theorems proved w.r.t. the former structures by Fagin, Halpern, Moses and Vardi, as well as related results by Lismont. Section 4 reviews the corresponding theorems derived w.r.t. the latter structures by Lismont and Mongin. A general conclusion of the paper is that the axiomatization of CB does not require as strong systems of individual belief as was originally thought: only monotonicity has thus far proved indispensable. Section 5 explains another consequence of general relevance: despite the "infinitary" nature of CB, the axiom systems of this paper admit of effective decision procedures, i.e., they are decidable in the logician's sense.
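For the partitional (Aumann) special case mentioned above, a small sketch of how common knowledge can be checked on a finite state space: an event is common knowledge at a state iff it includes the cell of the meet (finest common coarsening) of the agents' partitions containing that state. The states and partitions below are illustrative.

```python
def meet(partitions):
    """Finest common coarsening of the agents' information partitions:
    repeatedly merge cells that overlap, until nothing more merges."""
    cells = [frozenset(c) for p in partitions for c in p]
    changed = True
    while changed:
        changed = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                if cells[i] & cells[j] and cells[i] != cells[j]:
                    merged = cells[i] | cells[j]
                    cells = [c for k, c in enumerate(cells) if k not in (i, j)]
                    cells.append(merged)
                    changed = True
                    break
            if changed:
                break
    return set(cells)   # duplicate cells collapse here

def common_knowledge(event, partitions, at_state):
    """The event (a set of states) is common knowledge at at_state iff the
    meet cell containing at_state is included in the event."""
    cell = next(c for c in meet(partitions) if at_state in c)
    return cell <= frozenset(event)

# Illustrative four-state example with two agents.
p1 = [{1, 2}, {3}, {4}]      # agent 1's information partition
p2 = [{1}, {2, 3}, {4}]      # agent 2's information partition

print(common_knowledge({1, 2, 3}, [p1, p2], at_state=1))  # True: meet cell is {1, 2, 3}
print(common_knowledge({1, 2},    [p1, p2], at_state=1))  # False: state 3 cannot be excluded iteratively
print(common_knowledge({4},       [p1, p2], at_state=4))  # True: {4} is a meet cell
```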
From the comparison of the Grundrisse (1857-58) manuscripts with Marx's subsequent writings, it is clear that the so-called « deduction » of fundamental economic categories follows two distinct patterns, one of which is close to ordinary logical analysis, the other being inspired by Hegel's dialectics of essence. This duality is reflected in the double meaning of the concept of « presupposition » (Voraussetzung) and, finally, in the simultaneous endorsement by the Grundrisse of two labour-value theories, one of which is Smithian-like, the other Ricardian. Marx's reinterpretation of economic value as an « immanent measure », i.e., his claim that commodities are measured by each other when exchange takes place, should help to bridge the gap between the two theories. However, this reinterpretation is shown to be inadequate; as a result, Marx's account of value should be seen as internally inconsistent.
This French article aims at analyzing the Ricardian problem of an "invariable standard of value" in Ricardo's own terms. It is argued that Ricardo's commentators and modern followers have changed these terms significantly. The problem actually branches into two subproblems, i.e., that of "invariability" strictly, and that of "neutrality with respect to distribution". These subproblems do not matter to Ricardo to the same extent. He regards the latter (in various formulations recapitulated here) as a complication of the former, which is the crucial one in his search for a "good" standard. This exemplifies precisely how Ricardo could theoretically focus on the production side of the economy at the expense of the distribution side. With these conclusions at hand, the paper can be critical of Marx's and Sraffa's interpretations of the Ricardian problem of the standard: respectively, because Marx's is simply incorrect, and because Sraffa's solved a problem that was unrelated to the original one in Ricardo.