
Citations of:

Abraham Wald, Statistical Decision Functions. Wiley: New York (1950)

  • Expected Comparative Utility Theory: A New Theory of Rational Choice.David Robert - 2018 - Philosophical Forum 49 (1):19-37.
    In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this (...)
  • Individual behavior under risk and under uncertainty: An experimental study. [REVIEW]M. Cohen, J. Y. Jaffray & T. Said - 1985 - Theory and Decision 18 (2):203-228.
  • Philosophical foundations for worst-case arguments.Lara Buchak - 2023 - Politics, Philosophy and Economics 22 (3):215-242.
    Certain ethical views hold that we should pay more attention, even exclusive attention, to the worst-case scenario. Prominent examples include Rawls's Difference Principle and the Precautionary Principle. These views can be anchored in formal principles of decision theory, in two different ways. On the one hand, they can rely on ambiguity-aversion: the idea that we cannot assign sharp probabilities to various scenarios, and that if we cannot assign sharp probabilities, we should decide pessimistically, as if the probabilities are unfavorable. On (...)
  • Qualitative Heuristics For Balancing the Pros and Cons.Jean-François Bonnefon, Didier Dubois, Hélène Fargier & Sylvie Leblois - 2008 - Theory and Decision 65 (1):71-95.
    Balancing the pros and cons of two options is undoubtedly a very appealing decision procedure, but one that has received scarce scientific attention so far, either formally or empirically. We describe a formal framework for pros and cons decisions, where the arguments under consideration can be of varying importance, but whose importance cannot be precisely quantified. We then define eight heuristics for balancing these pros and cons, and compare the predictions of these to the choices made by 62 human participants (...)
  • On reviewing machine dreams: Zoomed-in versus zoomed-out.Lawrence A. Boland - 2006 - Philosophy of the Social Sciences 36 (4):480-495.
    Mirowski’s Machine Dreams continues to receive many reviews. Judging by recent reviews, this is a very controversial book. The question considered here is, how can one fairly review a controversial book—particularly when the book is widely popular and, for a history of economic thought book, a best seller? This essay uses Mirowski’s book as a case study to propose one answer for this question. In the process, it will examine how others seem to have answered this question. Key Words: methodology • reviews • (...)
  • The Neyman-Pearson theory as decision theory, and as inference theory; with a criticism of the Lindley-Savage argument for bayesian theory.Allan Birnbaum - 1977 - Synthese 36 (1):19 - 49.
  • On the Rationality of Decisions with Unreliable Probabilities.Fernando Birman - 2009 - Disputatio 3 (26):97-116.
    The standard Bayesian recipe for selecting the rational choice is presented. A familiar example in which the recipe fails to produce any definite result is introduced. It is argued that a generalization of Gärdenfors’ and Sahlin’s theory of unreliable probabilities — which itself does not guarantee a solution to the problem — offers the best available approach. But a number of challenges to this approach are also presented and discussed.
  • Signal detection with criterion noise: Applications to recognition memory.Aaron S. Benjamin, Michael Diaz & Serena Wee - 2009 - Psychological Review 116 (1):84-115.
  • Trying to Resolve the Two-Envelope Problem.Casper J. Albers, Barteld P. Kooi & Willem Schaafsma - 2005 - Synthese 145 (1):89-109.
    After explaining the well-known two-envelope paradox by indicating the fallacy involved, we consider the two-envelope problem of evaluating the factual information provided to us in the form of the value contained by the envelope chosen first. We try to provide a synthesis of contributions from economy, psychology, logic, probability theory (in the form of Bayesian statistics), mathematical statistics (in the form of a decision-theoretic approach) and game theory. We conclude that the two-envelope problem does not allow a satisfactory solution. An (...)
  • The Likelihood Method for Decision under Uncertainty.Mohammed Abdellaoui & Peter P. Wakker - 2005 - Theory and Decision 58 (1):3-76.
    This paper introduces the likelihood method for decision under uncertainty. The method allows the quantitative determination of subjective beliefs or decision weights without invoking additional separability conditions, and generalizes the Savage–de Finetti betting method. It is applied to a number of popular models for decision under uncertainty. In each case, preference foundations result from the requirement that no inconsistencies are to be revealed by the version of the likelihood method appropriate for the model considered. A unified treatment of subjective decision (...)
  • The Dynamics of Thought.Peter Gärdenfors - 2005 - Dordrecht, Netherlands: Springer.
    This volume is a collection of some of the most important philosophical papers by Peter Gärdenfors. Spanning a period of more than 20 years of his research, they cover a wide ground of topics, from early works on decision theory, belief revision and nonmonotonic logic to more recent work on conceptual spaces, inductive reasoning, semantics and the evolutions of thinking. Many of the papers have only been published in places that are difficult to access. The common theme of all the (...)
  • Statistics and Probability Have Always Been Value-Laden: An Historical Ontology of Quantitative Research Methods.Michael J. Zyphur & Dean C. Pierides - 2020 - Journal of Business Ethics 167 (1):1-18.
    Quantitative researchers often discuss research ethics as if specific ethical problems can be reduced to abstract normative logics (e.g., virtue ethics, utilitarianism, deontology). Such approaches overlook how values are embedded in every aspect of quantitative methods, including ‘observations,’ ‘facts,’ and notions of ‘objectivity.’ We describe how quantitative research practices, concepts, discourses, and their objects/subjects of study have always been value-laden, from the invention of statistics and probability in the 1600s to their subsequent adoption as a logic made to appear as (...)
  • Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age.Aiping Xiong & Robert W. Proctor - 2018 - Frontiers in Psychology 9:362645.
    The information age can be dated to the work of Norbert Wiener and Claude Shannon in the 1940s. Their work on cybernetics and information theory, and many subsequent developments, had a profound influence on reshaping the field of psychology from what it was prior to the 1950s. Contemporaneously, advances also occurred in experimental design and inferential statistical testing stemming from the work of Ronald Fisher, Jerzy Neyman, and Egon Pearson. These interdisciplinary advances from outside of psychology provided the conceptual and (...)
  • Relatively robust decisions.Thomas A. Weber - 2022 - Theory and Decision 94 (1):35-62.
    It is natural for humans to judge the outcome of a decision under uncertainty as a percentage of an ex-post optimal performance. We propose a robust decision-making framework based on a relative performance index. It is shown that if the decision maker’s preferences satisfy quasisupermodularity, single-crossing, and a nondecreasing log-differences property, the worst-case relative performance index can be represented as the lower envelope of two extremal performance ratios. The latter is used to characterize the agent’s optimal robust decision, which has (...)
  • Quality and quantity of information exchange.Robert van Rooy - 2003 - Journal of Logic, Language and Information 12 (4):423-451.
    The paper deals with credible and relevant information flow in dialogs: How useful is it for a receiver to get some information, how useful is it for a sender to give this information, and how much credible information can we expect to flow between sender and receiver? What is the relation between semantics and pragmatics? These Gricean questions will be addressed from a decision- and game-theoretical point of view.
  • Application of Bayes' Theorem in Valuating Depression Tests Performance.Marco Tommasi, Grazia Ferrara & Aristide Saggino - 2018 - Frontiers in Psychology 9.
  • Statistical decisions under ambiguity.Jörg Stoye - 2011 - Theory and Decision 70 (2):129-148.
    This article provides unified axiomatic foundations for the most common optimality criteria in statistical decision theory. It considers a decision maker who faces a number of possible models of the world (possibly corresponding to true parameter values). Every model generates objective probabilities, and von Neumann–Morgenstern expected utility applies where these obtain, but no probabilities of models are given. This is the classic problem captured by Wald’s (Statistical decision functions, 1950) device of risk functions. In an Anscombe–Aumann environment, I characterize Bayesianism (...)
  • Causation: An alternative.Wolfgang Spohn - 2006 - British Journal for the Philosophy of Science 57 (1):93-119.
    The paper builds on the basically Humean idea that A is a cause of B iff A and B both occur, A precedes B, and A raises the metaphysical or epistemic status of B given the obtaining circumstances. It argues that in pursuit of a theory of deterministic causation this ‘status raising’ is best explicated not in regularity or counterfactual terms, but in terms of ranking functions. On this basis, it constructs a rigorous theory of deterministic causation that successfully deals (...)
  • Coherent choice functions under uncertainty.Teddy Seidenfeld, Mark J. Schervish & Joseph B. Kadane - 2010 - Synthese 172 (1):157-176.
    We discuss several features of coherent choice functions—where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal form decision problems under uncertainty—where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets of probabilities. (...)
  • The evolutionary stability of optimism, pessimism, and complete ignorance.Burkhard C. Schipper - 2021 - Theory and Decision 90 (3-4):417-454.
    We seek an evolutionary explanation for why in some situations humans maintain either optimistic or pessimistic attitudes toward uncertainty and are ignorant to relevant aspects of their environment. Players in strategic games face Knightian uncertainty about opponents’ actions and maximize individually their Choquet expected utility with respect to neo-additive capacities allowing for both an optimistic or pessimistic attitude toward uncertainty as well as ignorance to strategic dependencies. An optimist overweighs good outcomes. A complete ignorant never reacts to opponents’ changes of (...)
  • Knowledge and decision: Introduction to the Synthese topical collection.Moritz Schulz, Patricia Rich, Jakob Koscholke & Roman Heil - 2022 - Synthese 200 (2):1-13.
  • Minimax and the value of information.Evan Sadler - 2015 - Theory and Decision 78 (4):575-586.
    In his discussion of minimax decision rules, Savage presents an example purporting to show that minimax applied to negative expected utility is an inadequate decision criterion for statistics; he suggests the application of a minimax regret rule instead. The crux of Savage’s objection is the possibility that a decision maker would choose to ignore even “extensive” information. More recently, Parmigiani has suggested that minimax regret suffers from the same flaw. He demonstrates the existence of “relevant” experiments that a minimax regret (...)
  • The key to the knowledge norm of action is ambiguity.Patricia Rich - 2021 - Synthese 199 (3-4):9669-9698.
    Knowledge-first epistemology includes a knowledge norm of action: roughly, act only on what you know. This norm has been criticized, especially from the perspective of so-called standard decision theory. Mueller and Ross provide example decision problems which seem to show that acting properly cannot require knowledge. I argue that this conclusion depends on applying a particular decision theory which is ill-motivated in this context. Agents’ knowledge is often most plausibly formalized as an ambiguous epistemic state, and the theory of decision (...)
  • Choice under complete uncertainty when outcome spaces are state dependent.Clemens Puppe & Karl H. Schlag - 2009 - Theory and Decision 66 (1):1-16.
    One central objection to the maximin payoff criterion is that it focuses on the state that yields the lowest payoffs regardless of how low these are. We allow different states to have different sets of possible outcomes and show that the original axioms of Milnor (1954) continue to characterize the maximin payoff criterion, provided that the sets of payoffs achievable across states overlap. If instead payoffs in some states are always lower than in all others then ignoring the “bad” states (...)
  • Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing.Jose D. Perezgonzalez - 2015 - Frontiers in Psychology 6.
  • Minimax, information and ultrapessimism.Giovanni Parmigiani - 1992 - Theory and Decision 33 (3):241-252.
  • A Rule For Updating Ambiguous Beliefs.Cesaltina Pacheco Pires - 2002 - Theory and Decision 53 (2):137-152.
    When preferences are such that there is no unique additive prior, the issue of which updating rule to use is of extreme importance. This paper presents an axiomatization of the rule which requires updating of all the priors by Bayes rule. The decision maker has conditional preferences over acts. It is assumed that preferences over acts conditional on event E happening, do not depend on lotteries received on Ec, obey axioms which lead to maxmin expected utility representation with multiple priors, (...)
  • Experimental evidence on case-based decision theory.Wolfgang Ossadnik, Dirk Wilmsmann & Benedikt Niemann - 2013 - Theory and Decision 75 (2):211-232.
    This paper starts out from the proposition that case-based decision theory (CBDT) is an appropriate tool to explain human decision behavior in situations of structural ignorance. Although the developers of CBDT suggest its reality adequacy, CBDT has not yet been tested empirically very often, especially not in repetitive decision situations. Therefore, our main objective is to analyse the decision behavior of subjects in a repeated-choice experiment by comparing the explanation power of CBDT to reinforcement learning and to classical decision criteria under (...)
  • Equilibria with vector-valued utilities and preference information. The analysis of a mixed duopoly.Amparo M. Mármol, Luisa Monroy, M. Ángeles Caraballo & Asunción Zapata - 2017 - Theory and Decision 83 (3):365-383.
    This paper deals with the equilibria of games when the agents have multiple objectives and, therefore, their utilities cannot be represented by a single value, but by a vector containing the various dimensions of the utility. Our approach allows the incorporation of partial information about the preferences of the agents into the model, and permits the identification of the set of equilibria in accordance with this information. We also propose an additional conservative criterion which can be applied in this framework (...)
  • Ockham Efficiency Theorem for Stochastic Empirical Methods.Kevin T. Kelly & Conor Mayo-Wilson - 2010 - Journal of Philosophical Logic 39 (6):679-712.
    Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is (...)
  • Optimize, satisfice, or choose without deliberation? A simple minimax-regret assessment.Charles F. Manski - 2017 - Theory and Decision 83 (2):155-173.
    Simon introduced satisficing, but he did not provide a precise definition or analysis. Other researchers have subsequently interpreted satisficing in various ways, but a consensus perspective still has not emerged. This paper interprets satisficing as a class of decision strategies that a person might use when seeking to optimize in a setting where deliberation is costly. Costly deliberation lies at the heart of Simon’s motivation of satisficing, but he did not formalize the idea. I do so here, studying decision making (...)
  • Actualist rationality.Charles F. Manski - 2011 - Theory and Decision 71 (2):195-210.
    This article concerns the prescriptive function of decision analysis. Consider an agent who must choose an action yielding welfare that varies with an unknown state of nature. It is often asserted that such an agent should adhere to consistency axioms which imply that behavior can be represented as maximization of expected utility. However, our agent is not concerned with the consistency of his behavior across hypothetical choice sets. He only wants to make a reasonable choice from the choice set that he (...)
  • Ignorance, probability and rational choice.Isaac Levi - 1982 - Synthese 53 (3):387-417.
  • Models and statistical inference: The controversy between Fisher and neyman–pearson.Johannes Lenhard - 2006 - British Journal for the Philosophy of Science 57 (1):69-91.
    The main thesis of the paper is that in the case of modern statistics, the differences between the various concepts of models were the key to its formative controversies. The mathematical theory of statistical inference was mainly developed by Ronald A. Fisher, Jerzy Neyman, and Egon S. Pearson. Fisher on the one side and Neyman–Pearson on the other were involved often in a polemic controversy. The common view is that Neyman and Pearson made Fisher's account more stringent mathematically. It is (...)
  • The epistemic consequences of pragmatic value-laden scientific inference.Adam P. Kubiak & Paweł Kawalec - 2021 - European Journal for Philosophy of Science 11 (2):1-26.
    In this work, we explore the epistemic import of the value-ladenness of Neyman-Pearson’s Theory of Testing Hypotheses by reconstructing and extending Daniel Steel’s argument for the legitimate influence of pragmatic values on scientific inference. We focus on how to properly understand N-P’s pragmatic value-ladenness and the epistemic reliability of N-P. We develop an account of the twofold influence of pragmatic values on N-P’s epistemic reliability and replicability. We refer to these two distinguished aspects as “direct” and “indirect”. We discuss the (...)
  • Non-Measurability, Imprecise Credences, and Imprecise Chances.Yoaav Isaacs, Alan Hájek & John Hawthorne - 2021 - Mind 131 (523):892-916.
    We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural (...)
  • Carnap's inductive probabilities as a contribution to decision theory.Joachim Hornung - 1980 - Theoretical Medicine and Bioethics 1 (3):325-367.
    Common probability theories only allow the deduction of probabilities by using previously known or presupposed probabilities. They do not, however, allow the derivation of probabilities from observed data alone. The question thus arises as to how probabilities in the empirical sciences, especially in medicine, may be arrived at. Carnap hoped to be able to answer this question by his theory of inductive probabilities. In the first four sections of the present paper the above-mentioned problem is discussed in general. After a (...)
  • Inductive Incompleteness.Matthias Hild - 2006 - Philosophical Studies 128 (1):109-135.
    Nelson Goodman cast the ‘problem of induction’ as the task of articulating the principles and standards by which to distinguish valid from invalid inductive inferences. This paper explores some logical bounds on the ability of a rational reasoner to accomplish this task. By a simple argument, either an inductive inference method cannot admit its own fallibility, or there exists some non-inferable hypothesis whose non-inferability the method cannot infer (violating the principle of ‘negative introspection’). The paper discusses some implications of this (...)
  • Ambiguity in asset pricing and portfolio choice: a review of the literature. [REVIEW]Massimo Guidolin & Francesca Rinaldi - 2013 - Theory and Decision 74 (2):183-217.
    We survey the literature that has explored the implications of decision-making under ambiguity for financial market outcomes, such as portfolio choice and equilibrium asset prices. This ambiguity literature has led to a number of significant advances in our ability to rationalize empirical features of asset returns and portfolio decisions, such as the failure of the two-fund separation theorem in portfolio decisions, the modest exposure to risky securities observed for a majority of investors, the home equity preference in international portfolio diversification, (...)
  • Unreliable probabilities, risk taking, and decision making.Peter Gärdenfors & Nils-Eric Sahlin - 1982 - Synthese 53 (3):361-386.
  • Objective and subjective rationality and decisions with the best and worst case in mind.Simon Grant, Patricia Rich & Jack Stecher - 2020 - Theory and Decision 90 (3-4):309-320.
    We study decision under uncertainty in an Anscombe–Aumann framework. Two binary relations characterize a decision-maker: one incomplete relation, reflecting her objective rationality, and a second complete relation, reflecting her subjective rationality. We require the latter to be an extension of the former. Our key axiom is a dominance condition. Our main theorem provides a representation of the two relations. The objectively rational relation has a Bewley-style multiple prior representation. Using this set of priors, we fully characterize the subjectively rational relation (...)
  • A measure for the distance between an interval hypothesis and the truth.Roberto Festa - 1986 - Synthese 67 (2):273 - 320.
    The problem of distance from the truth, and more generally distance between hypotheses, is considered here with respect to the case of quantitative hypotheses concerning the value of a given scientific quantity. Our main goal consists in the explication of the concept of distance D(I, θ) between an interval hypothesis I and a point hypothesis θ. In particular, we attempt to give an axiomatic foundation of this notion on the basis of a small number of adequacy conditions.
  • Cognitive-decision-making issues for software agents.Behrouz Homayoun Far & Romi Satria Wahono - 2003 - Brain and Mind 4 (2):239-252.
    Rational decision making depends on what one believes, what one desires, and what one knows. In conventional decision models, beliefs are represented by probabilities and desires are represented by utilities. Software agents are knowledgeable entities capable of managing their own set of beliefs and desires, and they can decide upon the next operation to execute autonomously. They are also interactive entities capable of filtering communications and managing dialogues. Knowledgeability includes representing knowledge about the external world, reasoning with it, and sharing (...)
  • Cointegration: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2012 - Communications in Statistics 41 (19):3562-3574.
    To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or multivariate models: VAR (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC is quite advanced, Bauwens et al. (1999) highlighted that “the topic of selecting the cointegrating rank has not yet given very useful and convincing results”. The (...)
  • Unit Roots: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit root (...)
  • Philosophy as conceptual engineering: Inductive logic in Rudolf Carnap's scientific philosophy.Christopher F. French - 2015 - Dissertation, University of British Columbia
    My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I, first, specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design (...)
  • A note on Oscar Chisini mean value definition.Giuseppe Iurato - unknown
    Mainly on the basis of some notable physical examples reported in a 1929 paper by Oscar Chisini, this brief note presents further historic-critical remarks on the definition of the statistical mean, which lead us toward the realm of Integral Geometry via Felix Klein's Erlanger Programm.
  • Milton Friedman, the Statistical Methodologist.David Teira - 2007 - History of Political Economy 39 (3):511-28.
    In this paper I study Milton Friedman’s statistical education, paying special attention to the different methodological approaches (Fisher, Neyman and Savage) to which he was exposed. I contend that these statistical procedures involved different views as to the evaluation of statistical predictions. In this light, the thesis defended in Friedman’s 1953 methodological essay appears substantially ungrounded.