One might think that if the majority of virtue signallers judge that a proposition is true, then there is significant evidence for the truth of that proposition. Given the Condorcet Jury Theorem, individual virtue signallers need not be very reliable for the majority judgment to be very likely to be correct. Thus, even people who are skeptical of the judgments of individual virtue signallers should think that if a majority of them judge that a proposition is true, then that provides significant evidence that the proposition is true. We argue that this is mistaken. Various empirical studies converge on the following point: humans are very conformist in the contexts in which virtue signalling occurs. And stereotypical virtue signallers are even more conformist in such contexts. So we should be skeptical of the claim that virtue signallers are sufficiently independent for the Condorcet Jury Theorem to apply. We do not seek to decisively rule out the relevant application of the Condorcet Jury Theorem. But we do show that careful consideration of the available evidence should make us very skeptical of that application. Consequently, a defense of virtue signalling would need to engage with these findings and show that despite our strong tendencies for conformism, our judgments are sufficiently independent for the Condorcet Jury Theorem to apply. This suggests new directions for the debate about the epistemology of virtue signalling.
The curriculum design, faculty characteristics, and experience of implementing master's-level international research ethics training programs supported by the Fogarty International Center were investigated. Multiple pedagogical approaches were employed to adapt to the learning needs of the trainees. While no generally agreed set of core competencies exists for advanced research ethics training, more than 75% of the curricula examined included international issues in research ethics, responsible conduct of research, human rights, philosophical foundations of research ethics, and research regulation and the ethical review process. Common skills taught included critical thinking, research methodology and statistics, writing, and presentation proficiency. Curricula also addressed the cultural, social, and religious context of the trainees related to research ethics. Programs surveyed noted trainee interest in Western concepts of research ethics and the value of the transnational exchange of ideas. Similar faculty expertise profiles existed in all programs. Approximately 40% of faculty were female. Collaboration between faculty from low- and middle-income countries (LMICs) and high-income countries (HICs) occurred in most programs, and at least 50% of HIC faculty had previous LMIC experience. This paper is part of a collection of papers analyzing the Fogarty International Research Ethics Education and Curriculum Development program.
In this article, I seek to make sense of the oft-invoked idea of 'public emergency' and of some of its (supposedly) radical moral implications. I challenge controversial claims by Tom Sorell, Michael Walzer, and Giorgio Agamben, and argue for a more discriminating understanding of the category and its moral force.
The aim of this paper is to discuss the “Austro-American” logical empiricism proposed by physicist and philosopher Philipp Frank, particularly his interpretation of Carnap’s Aufbau, which he considered the charter of logical empiricism as a scientific world conception. According to Frank, the Aufbau was to be read as an integration of the ideas of Mach and Poincaré, leading eventually to a pragmatism quite similar to that of the American pragmatist William James. Relying on this peculiar interpretation, Frank intended to bring about a rapprochement between the logical empiricism of the Vienna Circle in exile and American pragmatism. In the course of this project, in the last years of his career, Frank outlined a comprehensive, socially engaged philosophy of science that could serve as a “link between science and philosophy”.
The Pareto principle states that if the members of society express the same preference judgment between two options, this judgment is compelling for society. A building block of normative economics and social choice theory, and often borrowed by contemporary political philosophy, the principle has rarely been subjected to philosophical criticism. The paper objects to it on the ground that it indifferently applies to those cases in which the individuals agree on both their expressed preferences and their reasons for entertaining them, and those cases in which they agree on their expressed preferences, while differing on their reasons. The latter are cases of "spurious unanimity", and it is normatively inappropriate, or so the paper argues, to defend unanimity preservation at the social level for them, so the Pareto principle is formulated much too broadly. The objection seems especially powerful when the principle is applied in an ex ante context of uncertainty, in which individuals can disagree on both their probabilities and utilities, and nonetheless agree on their preferences over prospects.
In this paper, I argue that looking at the concept of neural function through the lens of cognition alone risks cognitive myopia: it leads neuroscientists to focus only on mechanisms with cognitive functions that process behaviorally relevant information when conceptualizing “neural function”. Cognitive myopia tempts researchers to neglect neural mechanisms with noncognitive functions which do not process behaviorally relevant information but maintain and repair neural and other systems of the body. Cognitive myopia similarly affects philosophy of neuroscience because scholars overlook noncognitive functions when analyzing issues surrounding e.g., functional decomposition or the multifunctionality of neural structures. I argue that we can overcome cognitive myopia by adopting a patchwork approach that articulates cognitive and noncognitive “patches” of the concept of neural function. Cognitive patches describe mechanisms with causally specific effects on cognition and behavior which are likely operative in transforming sensory or other inputs into motor outputs. Noncognitive patches describe mechanisms that lack such specific effects; these mechanisms are enabling conditions for cognitive functions to occur. I use these distinctions to characterize two noncognitive functions at the mesoscale of neural circuits: subsistence functions like breathing are implemented by central pattern generators and are necessary to maintain the life of the organism. Infrastructural functions like gain control are implemented by canonical microcircuits and prevent neural system damage while cognitive processing occurs. By adding conceptual patches that describe these functions, a patchwork approach can overcome cognitive myopia and help us explain how the brain’s capacities as an information processing device are constrained by its ability to maintain and repair itself as a physiological apparatus.
This paper sets out to evaluate the claim that Aristotle’s Assertoric Syllogistic is a relevance logic or shows significant similarities with it. I prepare the grounds for a meaningful comparison by extracting the notion of relevance employed in the most influential work on modern relevance logic, Anderson and Belnap’s Entailment. This notion is characterized by two conditions imposed on the concept of validity: first, that some meaning content is shared between the premises and the conclusion, and second, that the premises of a proof are actually used to derive the conclusion. Turning to Aristotle’s Prior Analytics, I argue that there is evidence that Aristotle’s Assertoric Syllogistic satisfies both conditions. Moreover, Aristotle at one point explicitly addresses the potential harmfulness of syllogisms with unused premises. Here, I argue that Aristotle’s analysis allows for a rejection of such syllogisms on formal grounds established in the foregoing parts of the Prior Analytics. In a final section I consider the view that Aristotle distinguished between validity on the one hand and syllogistic validity on the other. Following this line of reasoning, Aristotle’s logic might not be a relevance logic, since relevance is part of syllogistic validity and not, as modern relevance logic demands, of general validity. I argue that the reasons to reject this view are more compelling than the reasons to accept it and that we can, cautiously, uphold the result that Aristotle’s logic is a relevance logic.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem brings it closer to Arrow’s and arguably enhances its paradoxical value.
The present article was published in Proche-Orient Chrétien 66, no. 3-4 (January 2017), USJ: Beirut, pp. 425-430. It is a philosophical review of Philippe Capelle-Dumont and Yannick Courtel's book “Religion et Liberté”, which collects the proceedings of the First International Symposium of the Francophone Society of Philosophy of Religion on the two concepts of religion and freedom. On the one hand, religion has always been considered a pole of practices and references contrary to freedom, implying dependence on a "binding doctrine"; on the other hand, religion has undergone several political representations in the many spaces of cultural, social, and international life, which it is urgent to re-examine. The article proposes a synthesis of the conferences of Philippe Capelle-Dumont, Jean-Luc Marion, Jean Greisch, Joseph O’Leary, François Chenet, Souleymane Bachir Diagne, Francis Jacques, Pierluigi Valenza, Danielle Cohen-Levinas, Yannick Courtel and Jean Grondin, who concludes with the “freedom to philosophize about religion”.
Whereas many others have scrutinized the Allais paradox from a theoretical angle, we study the paradox from an historical perspective and link our findings to a suggestion as to how decision theory could make use of it today. We emphasize that Allais proposed the paradox as a normative argument, concerned with ‘the rational man’ and not the ‘real man’, to use his words. Moreover, and more subtly, we argue that Allais had an unusual sense of the normative, being concerned not so much with the rationality of choices as with the rationality of the agent as a person. These two claims are buttressed by a detailed investigation – the first of its kind – of the 1952 Paris conference on risk, which set the context for the invention of the paradox, and a detailed reconstruction – also the first of its kind – of Allais’s specific normative argument from his numerous but allusive writings. The paper contrasts these interpretations of what the paradox historically represented, with how it generally came to function within decision theory from the late 1970s onwards: that is, as an empirical refutation of the expected utility hypothesis, and more specifically of the condition of von Neumann–Morgenstern independence that underlies that hypothesis. While not denying that this use of the paradox was fruitful in many ways, we propose another use that turns out also to be compatible with an experimental perspective. Following Allais’s hints on ‘the experimental definition of rationality’, this new use consists in letting the experiment itself speak of the rationality or otherwise of the subjects. In the 1970s, a short sequence of papers inspired by Allais implemented original ways of eliciting the reasons guiding the subjects’ choices, and claimed to be able to draw relevant normative consequences from this information. We end by reviewing this forgotten experimental avenue not simply historically, but with a view to recommending it for possible use by decision theorists today.
The concept of the cortical column refers to vertical cell bands with similar response properties, which were initially observed by Vernon Mountcastle’s mapping of single cell recordings in the cat somatic cortex. It has subsequently guided over 50 years of neuroscientific research, in which fundamental questions about the modularity of the cortex and basic principles of sensory information processing were empirically investigated. Nevertheless, the status of the column remains controversial today, as skeptical commentators proclaim that the vertical cell bands are a functionally insignificant by-product of ontogenetic development. This paper inquires how the column came to be viewed as an elementary unit of the cortex from Mountcastle’s discovery in 1955 until David Hubel and Torsten Wiesel’s reception of the Nobel Prize in 1981. I first argue that Mountcastle’s vertical electrode recordings served as criteria for applying the column concept to electrophysiological data. In contrast to previous authors, I claim that this move from electrophysiological data to the phenomenon of columnar responses was concept-laden, but not theory-laden. In the second part of the paper, I argue that Mountcastle’s criteria provided Hubel and Wiesel with a conceptual outlook, i.e. it allowed them to anticipate columnar patterns in the cat and macaque visual cortex. I argue that in the late 1970s, this outlook only briefly took a form that one could call a ‘theory’ of the cerebral cortex, before new experimental techniques started to diversify column research. I end by showing how this account of early column research fits into a larger project that follows the conceptual development of the column into the present.
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
The objection of horrible commands claims that divine command metaethics is doomed to failure because it is committed to the extremely counterintuitive assumption that torture of innocents, rape, and murder would be morally obligatory if God commanded these acts. Morriston, Wielenberg, and Sinnott-Armstrong have argued that formulating this objection in terms of counterpossibles is particularly forceful because it cannot be simply evaded by insisting on God’s necessary perfect moral goodness. I show that divine command metaethics can be defended even against this counterpossible version of the objection of horrible commands because we can explain the truth-value intuitions about the disputed counterpossibles as the result of conversational implicatures. Furthermore, I show that this pragmatics-based defence of divine command metaethics has several advantages over Pruss’s reductio counterargument against the counterpossible version of the objection of horrible commands.
Judgment aggregation theory, or rather, as we conceive of it here, logical aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely preference judgments. It derives from Kornhauser and Sager’s doctrinal paradox and List and Pettit’s discursive dilemma, two problems that we distinguish emphatically here. The current theory has developed from the discursive dilemma, rather than the doctrinal paradox, and the final objective of the paper is to give the latter its own theoretical development along the line of recent work by Dietrich and Mongin. However, the paper also aims at reviewing logical aggregation theory as such, and it covers impossibility theorems by Dietrich, Dietrich and List, Dokow and Holzman, List and Pettit, Mongin, Nehring and Puppe, Pauly and van Hees, providing a uniform logical framework in which they can be compared with each other. The review goes through three historical stages: the initial paradox and dilemma, the scattered early results on the independence axiom, and the so-called canonical theorem, a collective achievement that provided the theory with its specific method of analysis. The paper goes some way towards philosophical logic, first by briefly connecting the aggregative framework of judgment with the modern philosophy of judgment, and second by thoroughly discussing and axiomatizing the ‘general logic’ built in this framework.
In contemporary human brain mapping, it is commonly assumed that the “mind is what the brain does”. Based on that assumption, task-based imaging studies of the last three decades measured differences in brain activity that are thought to reflect the exercise of human mental capacities (e.g., perception, attention, memory). With the advancement of resting state studies, tractography and graph theory in the last decade, however, it became possible to study human brain connectivity without relying on cognitive tasks or constructs. It is therefore currently an open question whether the assumption that “the mind is what the brain does” is an indispensable working hypothesis in human brain mapping. This paper argues that the hypothesis is, in fact, dispensable. If it is dropped, researchers can “meet the brain on its own terms” by searching for new, more adequate concepts to describe human brain organization. Neuroscientists can establish such concepts by conducting exploratory experiments that do not test particular cognitive hypotheses. The paper provides a systematic account of exploratory neuroscientific research that would allow researchers to form new concepts and formulate general principles of brain connectivity, and to combine connectivity studies with manipulation methods to identify neural entities in the brain. These research strategies would be most fruitful if applied to the mesoscopic scale of neuronal assemblies, since the organizational principles at this scale are currently largely unknown. This could help researchers to link microscopic and macroscopic evidence to provide a more comprehensive understanding of the human brain. The paper concludes by comparing this account of exploratory neuroscientific experiments to recent proposals for large-scale, discovery-based studies of human brain connectivity.
In this public debate with Philippe Deterre (research director in immunology at the CNRS) – held at l'Enclos Rey in the 15th district of Paris during the biennial Conference of the Réseau Blaise Pascal in March 2017 – I defended the usefulness of natural theology. I first clarify theology's nature and understanding, then I speak about a tradition that upheld the public and exterior knowledge of God, and make an effort to show the presence of a theme reminiscent of natural theology behind attempts at the good life. I then ask whether natural theology would only exist for the Christian. In the reply to my opponent's own reaction, I insist on the incongruity of separating our knowledge of God from our knowledge of science's wonderful discoveries, I ask whether nature could be said to be crafty and "ingenious," and I conclude by building a case for the return of God in public conversation, as part of an effort our world needs in order to find its compass again and restore an ideal of living rationally.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
Nudge is a concept of policy intervention that originates in Thaler and Sunstein's (2008) popular eponymous book. Following their own hints, we distinguish three properties of nudge interventions: they redirect individual choices by only slightly altering choice conditions (here nudge 1), they use rationality failures instrumentally (here nudge 2), and they alleviate the unfavourable effects of these failures (here nudge 3). We explore each property in semantic detail and show that no entailment relation holds between them. This calls into question the theoretical unity of nudge, as intended by Thaler and Sunstein and most followers. We eventually recommend pursuing each property separately, both in policy research and at the foundational level. We particularly emphasize the need to reconsider the respective roles of decision theory and behavioural economics in order to delineate nudge 2 correctly. The paper differs from most of the literature in focusing on the definitional rather than the normative problems of nudge.
The paper analyses economic evaluations by distinguishing evaluative statements from actual value judgments. From this basis, it compares four solutions to the value neutrality problem in economics. After rebutting the strong theses about neutrality (normative economics is illegitimate) and non-neutrality (the social sciences are value-impregnated), the paper settles the case between the weak neutrality thesis (common in welfare economics) and a novel, weak non-neutrality thesis that extends the realm of normative economics more widely than the other weak thesis does.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey, and above all, Harsanyi, under the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
The notion of “hierarchy” is one of the most commonly posited organizational principles in systems neuroscience. To this date, however, it has received little philosophical analysis. This is unfortunate, because the general concept of hierarchy ranges over two approaches with distinct empirical commitments, and whose conceptual relations remain unclear. We call the first approach the “representational hierarchy” view, which posits that an anatomical hierarchy of feed-forward, feed-back, and lateral connections underlies a signal processing hierarchy of input-output relations. Because the representational hierarchy view holds that unimodal sensory representations are subsequently elaborated into more categorical and rule-based ones, it is committed to an increasing degree of abstraction along the hierarchy. The second view, which we call “topological hierarchy”, is not committed to different representational functions or degrees of abstraction at different levels. Topological approaches instead posit that the hierarchical level of a part of the brain depends on how central it is to the pattern of connections in the system. Based on the current evidence, we argue that three conceptual relations between the two approaches are possible: topological hierarchies could substantiate the traditional representational hierarchy, conflict with it, or contribute to a plurality of approaches needed to understand the organization of the brain. By articulating each of these possibilities, our analysis attempts to open a conceptual space in which further neuroscientific and philosophical reasoning about neural hierarchy can proceed.
I will present and criticise the two theories of truthmaking David Armstrong offers us in Truth and Truthmakers (Armstrong 2004), show to what extent they are incompatible, and identify troublemakers for both of them: a notorious one – Factualism, the view that the world is a world of states of affairs – and a more recent one – the view that every predication is necessary. Factualism, combined with truthmaker necessitarianism – ‘truthmaking is necessitation’ – leads Armstrong to an all-embracing totality state of affairs that necessitates not only everything that is the case but also everything else – that which is not the case, that which is merely possible or even impossible. All the things so dear to realists – rocks, natural properties, real persons – become mere abstractions from this ontological monster. The view that every predication is necessary does in some sense the opposite: it does away with totality states of affairs and, arguably, also with states of affairs. We have particulars and universals, partially identical and necessarily connected to everything else. Just by the existence of anything, everything is necessitated – the whole world mirrored in every monad. Faced with the choice between these two equally unappealing alternatives, I suggest returning to Armstrong’s more empiricist past: the world is not an all-inclusive One, nor necessitated by every single particular and every single universal, but a plurality of particulars and universals, interconnected by a contingent and internal relation of exemplification. While a close variant, truthmaker essentialism, can perhaps be saved, this means giving up on truthmaker necessitarianism. This, I think, is what it takes to steer a clear empiricist course between the Scylla of Spinozist general factness and the Charybdis of a Leibnizian overdose of brute necessities.
The paper has a twofold aim. On the one hand, it provides what appears to be the first game-theoretic modeling of Napoleon’s last campaign, which ended dramatically on 18 June 1815 at Waterloo. It is specifically concerned with the decision Napoleon made on 17 June 1815 to detach part of his army against the Prussians he had defeated, though not destroyed, on 16 June at Ligny. Military historians agree that this decision was crucial but disagree about whether it was rational. Hypothesizing a zero-sum game between Napoleon and Blücher, and computing its solution, we show that it could have been a cautious strategy on the former's part to divide his army, a conclusion which runs counter to the charges of misjudgement commonly heard since Clausewitz. On the other hand, the paper addresses methodological issues. We defend its case study against the objections of irrelevance that have been raised elsewhere against “analytic narratives”, and conclude that military campaigns provide an opportunity for successful application of the formal theories of rational choice. Generalizing the argument, we finally investigate the conflict between narrative accounts – the historians' standard mode of expression – and mathematical modeling.
A connectome is commonly understood as a complete description of the neural connections in a nervous system of a given species. This chapter provides a critical perspective on the role of connectomes in neuroscientific practice and asks how the connectomic approach fits into a larger context in which network thinking permeates technology, infrastructure, social life, and the economy. In the first part of this chapter, we argue that, seen from the perspective of ongoing research, the notion of connectomes as “complete descriptions” is misguided. Our argument combines Rachel Ankeny’s analysis of neuroanatomical wiring diagrams as “descriptive models” with Hans-Joerg Rheinberger’s notion of “epistemic objects,” i.e., targets of research that are still partially unknown. Combining these aspects, we conclude that connectomes are constitutively epistemic objects: there just is no way to turn them into permanent and complete technical standards, because the possibilities to map connection properties under different modeling assumptions are potentially inexhaustible. In the second part of the chapter, we use this understanding of connectomes as constitutively epistemic objects in order to critically assess the historical and political dimensions of current neuroscientific research. We argue that connectomics shows how the notion of the “brain as a network” has become the dominant metaphor of contemporary brain research. We further point out that this metaphor shares (potentially problematic) affinities with the form of contemporary “network societies.” We close by pointing out how the relation between connectomes and networks in society could be used in a more fruitful manner.
We argue that moral decision making is reasons-based, focusing on the idea that people encounter decisions as questions to be answered and that they process reasons to the extent that they can see them as putative answers to those questions. After introducing our topic, we sketch the erotetic reasons-based framework for decision making. We then describe three experiments that extend this framework to moral decision making in different question frames, cast doubt on theories of moral decision making that discount reasons and appeal, and replicate our initial findings in moral contexts that do not involve direct physical harm. We conclude by reinterpreting Stanley Milgram’s studies of destructive obedience within our new framework.
The relations between rationality and optimization have been widely discussed in the wake of Herbert Simon's work, with the common conclusion that the rationality concept does not imply the optimization principle. The paper is partly concerned with adding evidence for this view, but its main, more challenging objective is to question the converse implication from optimization to rationality, which is accepted even by bounded rationality theorists. We discuss three topics in succession: (1) rationally defensible cyclical choices, (2) the revealed preference theory of optimization, and (3) the infinite regress of optimization. We conclude that (1) and (2) provide evidence only for the weak thesis that rationality does not imply optimization. But (3) is seen to deliver a significant argument for the strong thesis that optimization does not imply rationality.
In abstract argumentation, each argument is regarded as atomic. There is no internal structure to an argument. Also, there is no specification of what is an argument or an attack. They are assumed to be given. This abstract perspective provides many advantages for studying the nature of argumentation, but it does not cover all our needs for understanding argumentation or for building tools for supporting or undertaking argumentation. If we want a more detailed formalization of arguments than is available with abstract argumentation, we can turn to structured argumentation, which is the topic of this special issue of Argument and Computation. In structured argumentation, we assume a formal language for representing knowledge and specifying how arguments and counterarguments can be constructed from that knowledge. An argument is then said to be structured in the sense that normally, the premises and claim of the argument are made explicit, and the relationship between the premises and claim is formally defined (for instance, using logical entailment). In this introduction, we provide a brief overview of the approaches covered in this special issue on structured argumentation.
(Back Cover:) "Metaphysical thought will be reborn tomorrow. It will be made jointly by scientists who have the taste and the sense for thought pursued to the very end of its internal demands, and by philosophers initiated into the experimental sciences." The work of Claude Tresmontant (1925-1997) perfectly illustrates this search for a metaphysics of a world in becoming, one that knows how to listen to and model itself on the transformation – the metamorphosis – promised to a finalized Creation. The common thread of the papers presented here in definitive form has been this inquiry into a thought that renews metaphysics from within, fulfilling Bergson's wish that it become "auscultatory": that the enigma man poses when facing his origin and his destiny be neither covered over by a thought that loses itself in the description of things or in the spirit of system, nor curled up on itself, narrating its own subjective experience in the mode of dereliction. Claude Tresmontant knew how to think being in genesis, and he sought to renew the question of the existence of God by transforming its problematic in dialogue with the contemporary sciences. Into this taste for being, whose approach the sciences have renewed, he also wanted to infuse a "supplement of soul," rethinking the reality of creation and the horizon of the final cause, always starting from the ultimately theological nature of the answer to the question "what is man?" Contributors: Yves Tourenne, Philippe Gagnon, Fabien Revol, Brunor, Frédéric Crouslé, Bertrand Souchard, Emmanuel Gabellieri. Table of Contents: Note Liminaire (Ph. Gagnon) - 7; En quoi la pensée de Claude Tresmontant nous stimule-t-elle encore ? Hommage et critique (Y. Tourenne) - 13; L’imbrication de la preuve de Dieu et de la cosmologie chez Tresmontant représente-t-elle une preuve ? (Ph. Gagnon) - 27; L’usage apologétique de la philosophie de Tresmontant dans les Indices pensables de Brunor (F. Revol) - 49; Réponse à Fabien Revol (Brunor) - 77; Qu’est-ce qui cloche dans la théologie de Claude Tresmontant ? (Fr. Crouslé) - 85; Les métaphysiques principales de Claude Tresmontant : la foi biblique est-elle à part de la raison philosophique grecque ? (B. Souchard) - 109; Maurice Blondel et Claude Tresmontant (E. Gabellieri) - 123; La vision informationnelle de Tresmontant, surtout en référence au problème de l’âme (Ph. Gagnon) - 133.
The paper revisits the rationality principle from the particular perspective of the unity of social sciences. It has been argued that the principle was the unique law of the social sciences and that accordingly there are no deep differences between them (Popper). It has also been argued that the rationality principle was specific to economics as opposed to the other social sciences, especially sociology (Pareto). The paper rejects these opposite views on the grounds that the rationality principle is strictly metaphysical and does not have the logical force required to deliver interesting deductions. Explanation in the social sciences takes place at a level of specialization that is always higher than that of the principle itself. However, what is peculiar about economics is that it specializes the explanatory rational schemes to a degree unparalleled in history and sociology. As a consequence, there is a backward-and-forward move between specific and general formulations of rationality that takes place in economics and has no analogue in the other social sciences.
This article offers a summary of Whitehead’s life, along with bibliographical indications, and it additionally gives reference markers to help understand how Whitehead renewed cosmology by unearthing a new understanding of a subject that would not be detached from its corporeal rootedness. Then, a more particular understanding of Whitehead’s criticism of the technoscientific project is sought, as to its absence of self-scrutiny. Additional considerations of ecology, and then of religion, are offered.
In Richard Bradley’s book, Decision Theory with a Human Face, we have selected two themes for discussion. The first is the Bolker-Jeffrey theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
The article discusses Friedman's classic claim that economics can be based on irrealistic assumptions. It exploits Samuelson's distinction between two "F-twists" (that is, "it is an advantage for an economic theory to use irrealistic assumptions" vs "the more irrealistic the assumptions, the better the economic theory"), as well as Nagel's distinction between three philosophy-of-science construals of the basic claim. On examination, only one of Nagel's construals seems promising enough. It involves the neo-positivistic distinction between theoretical and non-theoretical ("observable") terms; so Friedman would in some sense argue for the major role of theoretical terms in economics. The paper uses a model-theoretic apparatus to refine the selected construal and check whether it can be made to support the claim. This inquiry leads to essentially negative results for both F-twists, and the final conclusion is that they are left unsupported.
Much has been discussed about angels in terms of their nature and their actions in the Bible. Philosophically, however, little has been discussed about the existence of angels, specifically about whether angels can be shown to exist by reason. This paper argues that reason can lead us to conclude that angels do exist.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation. From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EU theory was resolved in a Duhemian way, but this was not without major inefficiencies. To embody explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
This article critically discusses the concept of economic rationality, arguing that it is too narrow and specific to encompass the full concept of practical rationality. Economic rationality is identified here with the use of the optimizing model of decision, as well as of expected utility apparatus to deal with uncertainty. To argue that practical rationality is broader than economic rationality, the article claims that practical rationality includes bounded rationality as a particular case, and that bounded rationality cannot be reduced to economic rationality as defined here.
Abstract: Economists are accustomed to distinguishing between a positive and a normative component of their work, a distinction that is peculiar to their field, having no exact counterpart in the other social sciences. The distinction has substantially changed over time, and the different ways of understanding it today are reflective of its history. Our objective is to trace the origins and initial forms of the distinction, from the English classical political economy of the first half of the 19th century to the emergence of welfare economics in the first half of the 20th century. This sequential account will also serve to identify the main representative positions along with the arguments used to support them, and it thus prepares the ground for a discussion that will be less historical and more strictly conceptual.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
It is a central tenet of ethical intuitionism as defended by W. D. Ross and others that moral theory should reflect the convictions of mature moral agents. Hence, intuitionism is plausible to the extent that it corresponds to our well-considered moral judgments. After arguing for this claim, I discuss whether intuitionists offer an empirically adequate account of our moral obligations. I do this by applying recent empirical research by John Mikhail that is based on the idea of a universal moral grammar to a number of claims implicit in W. D. Ross’s normative theory. I argue that the results at least partly vindicate intuitionism.
While the past century of neuroscientific research has brought considerable progress in defining the boundaries of the human cerebral cortex, there are cases in which the demarcation of one area from another remains fuzzy. Despite the existence of clearly demarcated areas, examples of gradual transitions between areas have been known since early cytoarchitectonic studies. Since multi-modal anatomical approaches and functional connectivity studies have brought renewed attention to the topic, a better understanding of the theoretical and methodological implications of fuzzy boundaries in brain science can be conceptually useful. This article provides a preliminary conceptual framework for understanding this problem by applying philosophical theories of vagueness to three levels of neuroanatomical research. For the first two levels (cytoarchitectonics and fMRI studies), vagueness will be distinguished from other forms of uncertainty, such as imprecise measurement or ambiguous causal sources of activation. The article proceeds to discuss the implications of these levels for the anatomical study of connectivity between cortical areas. There, vagueness gets imported into connectivity studies, since the network structure depends on the parcellation scheme and thresholds have to be used to delineate functional boundaries. Functional connectivity may introduce an additional form of vagueness, as it is an organizational principle of the brain. The article concludes by discussing what steps are appropriate to define areal boundaries more precisely.
This paper is concerned with representations of belief by means of nonadditive probabilities of the Dempster-Shafer (DS) type. After surveying some foundational issues and results in the DS theory, including Suppes's related contributions, the paper proceeds to analyze the connection of the DS theory with some of the work currently pursued in epistemic logic. A preliminary investigation of the modal logic of belief functions à la Shafer is made. There it is shown that the Alchourrón-Gärdenfors-Makinson (AGM) logic of belief change is closely related to the DS theory. The final section compares the critique of Bayesianism which underlies the present paper with some important objections raised by Suppes against this doctrine.
This chapter of the Handbook of Utility Theory aims at covering the connections between utility theory and social ethics. The chapter first discusses the philosophical interpretations of utility functions, then explains how social choice theory uses them to represent interpersonal comparisons of welfare in either utilitarian or non-utilitarian representations of social preferences. The chapter also contains an extensive account of John Harsanyi's formal reconstruction of utilitarianism and its developments in the later literature, especially when society faces uncertainty rather than probabilistic risk.
What we read in the major synthetic writings of Teilhard shows a thought aware of the incessant interaction between natural entities and the organizing power that exerts an attraction on them, an attraction which becomes practically infallible beyond a tipping point. This testifies to a prescient view that has many connections to the cybernetic mode of thinking.
Short Descriptor (Electre 2019): A study of one of the main axes of reflection of the French philosopher of science and of nature Raymond Ruyer (1902-1987). Relying on the discoveries about embryogenesis, and also with the use of information theory, Ruyer proposed an interpretation of the main unifying concepts of mechanistic cybernetics. This book offers an in-depth study of one of the main axes in the reflection of French philosopher of science and nature Raymond Ruyer: cybernetics. In a text summarising his own development, Ruyer stated about the philosophical critique of information theory that it "is what can give the most long-lasting hope of getting to something like a new theology." After propounding a structuralist philosophy that distinguished between form and structure, and then modifying it in the light of discoveries in embryogenesis, Ruyer offered a re-evaluation of the unifying concepts of mechanistic cybernetics. Thinking about it and about information theory, in particular about the origin of information, he defended the idea that this cybernetics was in reality an inverted reading of the real one, which would allow us to read in experience itself traces of the morphogenetic power, apprehended as the axiological field. On some transversal points - the development of forms in biology and genetics, the stochastic genesis of order, the identification of information with either psychological and mental or physical reality, behaviour, and the access to meaning - this work exposes the main ideas of Ruyer while situating them in the context of the breadth of others' contributions. It ends by determining what is theological and axiological in this project for a metaphysics which, although unfinished, is nevertheless the most impressive such effort conceived in France in the last century.
– Available on i6doc.com (ISBN 978-2-930517-56-8; PDF 978-2-930517-57-5).
This chapter briefly reviews the present state of judgment aggregation theory and tentatively suggests a future direction for that theory. In the review, we start by emphasizing the difference between the doctrinal paradox and the discursive dilemma, two idealized examples which classically serve to motivate the theory, and then proceed to reconstruct it as a brand of logical theory, unlike in some other interpretations, using a single impossibility theorem as a key to its technical development. In the prospective part, having mentioned existing applications to social choice theory and computer science, which we do not discuss here, we consider a potential application to law and economics. This would be based on a deeper exploration of the doctrinal paradox and its relevance to the functioning of collegiate courts. On this topic, legal theorists have provided empirical observations and theoretical hints that judgment aggregation theorists would be in a position to clarify and further elaborate. As a general message, the chapter means to suggest that the future of judgment aggregation theory lies with its applications rather than its internal theoretical development.
The paper discusses the sense in which the changes undergone by normative economics in the twentieth century can be said to be progressive. A simple criterion is proposed to decide whether a sequence of normative theories is progressive. This criterion is put to use on the historical transition from the new welfare economics to social choice theory. The paper reconstructs this classic case, and eventually concludes that the latter theory was progressive compared with the former. It also briefly comments on the recent developments in normative economics and their connection with the previous two stages.