Results for 'Randomization'

27 results found
  1. Randomization and Fair Judgment in Law and Science. Julio Michael Stern - 2020 - In Jose Acacio de Barros & Decio Krause (eds.), A True Polymath: A Tribute to Francisco Antonio Doria. College Publications. pp. 399-418.
    Randomization procedures are used in legal and statistical applications, aiming to shield important decisions from spurious influences. This article gives an intuitive introduction to randomization and examines some intended consequences of its use related to truthful statistical inference and fair legal judgment. It also presents an open-code Java implementation for a cryptographically secure, statistically reliable, transparent, traceable, and fully auditable randomization tool.
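    As a rough illustration of the kind of procedure entry 1 describes (this is a hypothetical sketch, not the authors' open-code tool, whose design is not reproduced here), the Java fragment below seeds a generator from the SHA-256 digest of a publicly announced string and uses it to drive a deterministic shuffle, so that any third party can re-run the draw and check the result. All class and method names are invented for the example.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch, not the authors' tool: an auditable random draw.
// A publicly announced seed string is hashed with SHA-256 and used to seed
// the generator, so the shuffle can be replayed and verified by anyone.
public class AuditableDraw {
    public static List<String> draw(List<String> cases, String publicSeed) throws Exception {
        byte[] seed = MessageDigest.getInstance("SHA-256")
                .digest(publicSeed.getBytes(StandardCharsets.UTF_8));
        SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
        rng.setSeed(seed);                     // deterministic once explicitly seeded (SUN provider)
        List<String> order = new ArrayList<>(cases);
        Collections.shuffle(order, rng);       // Fisher-Yates shuffle driven by the seeded generator
        return order;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(draw(List.of("case-A", "case-B", "case-C", "case-D"),
                                "public-lottery-seed-2020-03-15"));
    }
}
```

    Note that the determinism of SHA1PRNG after an explicit setSeed is an implementation detail of the standard SUN provider; a production tool would need to pin the provider, or ship its own generator, to keep draws reproducible across platforms.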
  2. Auditable Blockchain Randomization Tool. Julio Michael Stern & Olivia Saa - 2019 - Proceedings 33 (17):1-6.
    Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. Implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion-resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) Statistical soundness and computational efficiency; (b) Procedural, cryptographical and computational security; (c) Complete auditability and (...)
    1 citation
  3. Decoupling, Sparsity, Randomization, and Objective Bayesian Inference. Julio Michael Stern - 2008 - Cybernetics and Human Knowing 15 (2):49-68.
    Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST - Full Bayesian Significance Test. Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments, DSEs, like randomization. This article discusses the concepts of decoupling, randomization and sparsely connected statistical models in (...)
    12 citations
  4. Public Policy Experiments without Equipoise: When is Randomization Fair? Douglas MacKay & Emma Cohn - 2023 - Ethics and Human Research 45 (1):15-28.
    Government agencies and nonprofit organizations have increasingly turned to randomized controlled trials (RCTs) to evaluate public policy interventions. Random assignment is widely understood to be fair when there is equipoise; however, some scholars and practitioners argue that random assignment is also permissible when an intervention is reasonably expected to be superior to other trial arms. For example, some argue that random assignment to such an intervention is fair when the intervention is scarce, for it is sometimes fair to use a (...)
  5. Government Policy Experiments and the Ethics of Randomization. Douglas MacKay - 2020 - Philosophy and Public Affairs 48 (4):319-352.
    Governments are increasingly using randomized controlled trials (RCTs) to evaluate policy interventions. RCTs are often understood to provide the highest quality evidence regarding the causal efficacy of an intervention. While randomization plays an essential epistemic role in the context of policy RCTs, however, it also plays an important distributive role. By randomly assigning participants to either the intervention or control arm of an RCT, people are subject to different policies and so, often, to different types and levels of benefits. (...)
    4 citations
  6. Combining Optimization and Randomization Approaches for the Design of Clinical Trials. Julio Michael Stern, Victor Fossaluza, Marcelo de Souza Lauretto & Carlos Alberto de Braganca Pereira - 2015 - Springer Proceedings in Mathematics and Statistics 118:173-184.
    Intentional sampling methods are non-randomized procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. In this paper we extend previous works related to intentional sampling, and address the problem of sequential allocation for clinical trials with few patients. Roughly speaking, patients are enrolled sequentially, according to the order in which they start the treatment at the clinic or hospital. The allocation problem consists in assigning each new patient to one, and (...)
    1 citation
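    The allocation method developed in the paper above is not reproduced here; as a generic, hypothetical illustration of how optimization and randomization can be combined in sequential allocation, the sketch below uses a simple randomized minimization rule: each incoming patient is sent to the currently smallest arm with probability p and to a uniformly random arm otherwise. All names and parameter values are invented for the example.

```java
import java.util.Random;

// Illustrative sketch only (not the paper's algorithm): a randomized
// "minimization" rule that mixes an optimizing step with a random element.
public class RandomizedMinimization {
    private final int[] armCounts;
    private final double p;     // probability of taking the balance-restoring arm
    private final Random rng;

    public RandomizedMinimization(int arms, double p, long seed) {
        this.armCounts = new int[arms];
        this.p = p;
        this.rng = new Random(seed);
    }

    /** Allocates the next patient and returns the chosen arm index. */
    public int allocateNext() {
        int best = 0;
        for (int a = 1; a < armCounts.length; a++) {
            if (armCounts[a] < armCounts[best]) best = a;   // arm that most reduces size imbalance
        }
        int chosen = (rng.nextDouble() < p) ? best : rng.nextInt(armCounts.length);
        armCounts[chosen]++;
        return chosen;
    }

    public static void main(String[] args) {
        RandomizedMinimization trial = new RandomizedMinimization(2, 0.8, 42L);
        for (int i = 0; i < 10; i++) System.out.print(trial.allocateNext() + " ");
    }
}
```

    Setting p = 1 gives a purely deterministic balancing rule and p = 0 gives complete randomization; intermediate values trade group balance against predictability of assignments.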
  7. Philosophical controversies in the evaluation of medical treatments: With a focus on the evidential roles of randomization and mechanisms in Evidence-Based Medicine. Alexander Mebius - 2015 - Dissertation, KTH Royal Institute of Technology
    This thesis examines philosophical controversies surrounding the evaluation of medical treatments, with a focus on the evidential roles of randomised trials and mechanisms in Evidence-Based Medicine. Current 'best practice' usually involves excluding non-randomised trial evidence from systematic reviews in cases where randomised trials are available for inclusion in the reviews. The first paper challenges this practice and evaluates whether adding evidence from non-randomised trials might improve the quality and precision of some systematic reviews. The second paper compares the alleged (...)
    1 citation
  8. Why Be Random? Thomas Icard - 2021 - Mind 130 (517):111-139.
    When does it make sense to act randomly? A persuasive argument from Bayesian decision theory legitimizes randomization essentially only in tie-breaking situations. Rational behaviour in humans, non-human animals, and artificial agents, however, often seems indeterminate, even random. Moreover, rationales for randomized acts have been offered in a number of disciplines, including game theory, experimental design, and machine learning. A common way of accommodating some of these observations is by appeal to a decision-maker’s bounded computational resources. Making this suggestion both (...)
    9 citations
  9. Assessing Randomness in Case Assignment: The Case Study of the Brazilian Supreme Court. Julio Michael Stern, Diego Marcondes & Claudia Peixoto - 2019 - Law, Probability and Risk 18 (2/3):97-114.
    Sortition, i.e., random appointment for public duty, has been employed by societies throughout the years as a firewall designed to prevent illegitimate interference between parties in a legal case and agents of the legal system. In judicial systems of modern western countries, random procedures are mainly employed to select the jury, the court and/or the judge in charge of judging a legal case. Therefore, these random procedures play an important role in the course of a case, and should comply with (...)
    2 citations
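    As one minimal, assumed example of the kind of compliance check such an audit might include (the statistical tests used in the paper are more involved and are not reproduced here), the sketch below applies a chi-square goodness-of-fit test to hypothetical counts of cases assigned to five panels, against the null hypothesis of uniform random assignment.

```java
// Minimal sketch (not the paper's methodology): a chi-square goodness-of-fit
// check of whether cases are spread uniformly across five court panels.
public class CaseAssignmentCheck {
    public static void main(String[] args) {
        int[] observed = {210, 198, 187, 205, 200};   // hypothetical case counts per panel
        double total = 0;
        for (int o : observed) total += o;
        double expected = total / observed.length;    // expected count under uniform assignment
        double chiSq = 0;
        for (int o : observed) chiSq += (o - expected) * (o - expected) / expected;
        // The 5% critical value for chi-square with 4 degrees of freedom is about 9.488.
        System.out.printf("chi-square = %.3f, reject uniformity: %b%n", chiSq, chiSq > 9.488);
    }
}
```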
  10. Bayesian versus frequentist clinical trials. David Teira - 2011 - In Fred Gifford (ed.), Philosophy of Medicine. Boston: Elsevier. pp. 255-297.
    I will open the first part of this paper by trying to elucidate the frequentist foundations of RCTs. I will then present a number of methodological objections against the viability of these inferential principles in the conduct of actual clinical trials. In the following section, I will explore the main ethical issues in frequentist trials, namely those related to randomisation and the use of stopping rules. In the final section of the first part, I will analyse why RCTs were accepted (...)
    4 citations
  11. Spencer-Brown vs. Probability and Statistics: Entropy’s Testimony on Subjective and Objective Randomness. Julio Michael Stern - 2011 - Information 2 (2):277-301.
    This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
    3 citations
  12. The Confounding Question of Confounding Causes in Randomized Trials. Jonathan Fuller - 2019 - British Journal for the Philosophy of Science 70 (3):901-926.
    It is sometimes thought that randomized study group allocation is uniquely proficient at producing comparison groups that are evenly balanced for all confounding causes. Philosophers have argued that in real randomized controlled trials this balance assumption typically fails. But is the balance assumption an important ideal? I run a thought experiment, the CONFOUND study, to answer this question. I then suggest a new account of causal inference in ideal and real comparative group studies that helps clarify the roles of confounding (...)
    13 citations
  13. Causal inference in biomedical research. Tudor M. Baetu - 2020 - Biology and Philosophy 35 (4):1-19.
    Current debates surrounding the virtues and shortcomings of randomization are symptomatic of a lack of appreciation of the fact that causation can be inferred by two distinct inference methods, each requiring its own, specific experimental design. There is a non-statistical type of inference associated with controlled experiments in basic biomedical research; and a statistical variety associated with randomized controlled trials in clinical research. I argue that the main difference between the two hinges on the satisfaction of the comparability requirement, (...)
    5 citations
  14. A Contractarian Solution to the Experimenter’s Regress. David Teira - 2013 - Philosophy of Science 80 (5):709-720.
    Debiasing procedures are experimental methods aimed at correcting errors arising from the cognitive biases of the experimenter. We discuss two of these methods, the predesignation rule and randomization, showing to what extent they are open to the experimenter’s regress: there is no metarule to prove that, after implementing the procedure, the experimental data are actually free from biases. We claim that, from a contractarian perspective, these procedures are nonetheless defensible since they provide a warrant of the impartiality of the (...)
    9 citations
  15. Experimental Design: Ethics, Integrity and the Scientific Method. Jonathan Lewis - 2020 - In Ron Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity. Springer. pp. 459-474.
    Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental design has become an important part of research in the social and behavioral sciences. Experimental methods are also endorsed as the most reliable guides to policy effectiveness. Through a discussion of some of the central concepts (...)
    3 citations
  16. On the impartiality of early British clinical trials. David Teira - 2013 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44 (3):412-418.
    Did the impartiality of clinical trials play any role in their acceptance as regulatory standards for the safety and efficacy of drugs? According to the standard account of early British trials in the 1930s and 1940s, their impartiality was just rhetorical: the public demanded fair tests and statistical devices such as randomization created an appearance of neutrality. In fact, the design of the experiment was difficult to understand and the British authorities took advantage of it to promote their own (...)
    4 citations
  17. Herding QATs: Quality Assessment Tools for Evidence in Medicine. Jacob Stegenga - 2015 - In Huneman, Silberstein & Lambert (eds.), Herding QATs: Quality Assessment Tools for Evidence in Medicine. pp. 193-211.
    Medical scientists employ ‘quality assessment tools’ (QATs) to measure the quality of evidence from clinical studies, especially randomized controlled trials (RCTs). These tools are designed to take into account various methodological details of clinical studies, including randomization, blinding, and other features of studies deemed relevant to minimizing bias and error. There are now dozens available. The various QATs on offer differ widely from each other, and second-order empirical studies show that QATs have low inter-rater reliability and low inter-tool reliability. (...)
    5 citations
  18. There is Cause to Randomize. Cristian Larroulet Philippi - 2022 - Philosophy of Science 89 (1):152-170.
    While practitioners think highly of randomized studies, some philosophers argue that there is no epistemic reason to randomize. Here I show that their arguments do not entail their conclusion. Moreover, I provide novel reasons for randomizing in the context of interventional studies. The overall discussion provides a unified framework for assessing baseline balance, one that holds for interventional and observational studies alike. The upshot: practitioners’ strong preference for randomized studies can be defended in some cases, while still offering a nuanced (...)
    3 citations
  19. Philosophy of Evidence Based Medicine (Oxford Bibliography: http://www.oxfordbibliographies.com/view/document/obo-9780195396577/obo-9780195396577-0253.xml). Jeremy Howick, Ashley Graham Kennedy & Alexander Mebius - 2015 - Oxford Bibliography.
    Since its introduction just over two decades ago, evidence-based medicine (EBM) has come to dominate medical practice, teaching, and policy. There are a growing number of textbooks, journals, and websites dedicated to EBM research, teaching, and evidence dissemination. EBM was most recently defined as a method that integrates best research evidence with clinical expertise and patient values and circumstances in the treatment of patients. There have been debates throughout the early 21st century about what counts as good research evidence between (...)
  20. The meaning of "cause" in genetics. Kate E. Lynch - 2021 - Combining Human Genetics and Causal Inference to Understand Human Disease and Development. Cold Spring Harbor Perspectives in Medicine.
    Causation has multiple distinct meanings in genetics. One reason for this is meaning slippage between two concepts of the gene: Mendelian and molecular. Another reason is that a variety of genetic methods address different kinds of causal relationships. Some genetic studies address causes of traits in individuals, which can only be assessed when single genes follow predictable inheritance patterns that reliably cause a trait. A second sense concerns the causes of trait differences within a population. Whereas some single genes can (...)
  21. The use and limitations of null-model-based hypothesis testing. Mingjun Zhang - 2020 - Biology and Philosophy 35 (2):1-22.
    In this article I give a critical evaluation of the use and limitations of null-model-based hypothesis testing as a research strategy in the biological sciences. According to this strategy, the null model based on a randomization procedure provides an appropriate null hypothesis stating that the existence of a pattern is the result of random processes or can be expected by chance alone, and proponents of other hypotheses should first try to reject this null hypothesis in order to demonstrate their (...)
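    A permutation test is a textbook instance of the null-model strategy discussed in this entry: observed labels are reshuffled many times to build the distribution of a statistic expected by chance alone. The sketch below (with made-up data and names, not drawn from the article) implements this for a difference in group means.

```java
import java.util.Random;

// Illustrative randomization-based null model: a permutation test for the
// difference in means between two groups. Group labels are reshuffled to
// estimate how often a difference at least as large arises by chance alone.
public class PermutationTest {
    public static void main(String[] args) {
        double[] groupA = {3.1, 2.9, 3.8, 3.5, 4.0};
        double[] groupB = {2.4, 2.8, 2.2, 3.0, 2.6};
        double observed = mean(groupA) - mean(groupB);

        double[] pooled = new double[groupA.length + groupB.length];
        System.arraycopy(groupA, 0, pooled, 0, groupA.length);
        System.arraycopy(groupB, 0, pooled, groupA.length, groupB.length);

        Random rng = new Random(7);
        int extreme = 0, reps = 10_000;
        for (int r = 0; r < reps; r++) {
            shuffle(pooled, rng);   // random relabeling under the null hypothesis
            double diff = mean(java.util.Arrays.copyOfRange(pooled, 0, groupA.length))
                        - mean(java.util.Arrays.copyOfRange(pooled, groupA.length, pooled.length));
            if (Math.abs(diff) >= Math.abs(observed)) extreme++;
        }
        System.out.printf("observed diff = %.3f, permutation p-value = %.4f%n",
                observed, (double) extreme / reps);
    }

    private static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    private static void shuffle(double[] xs, Random rng) {   // Fisher-Yates shuffle
        for (int i = xs.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            double t = xs[i]; xs[i] = xs[j]; xs[j] = t;
        }
    }
}
```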
  22. From Postal Scale to Psychological Apparatus: A History of Experimental Psychology Through the Reconstruction of Peirce and Jastrow’s “On Small Differences of Sensation” (1885). Claudia Cristalli & Rebecca L. Jackson - 2023 - Nuncius 38:553–582.
    This paper describes our reconstruction of the apparatus used in C.S. Peirce and Joseph Jastrow’s 1885 psychophysical experiment, “On Small Differences of Sensation” and how it relates to persistent questions in scientific theories of measurement. We situate Peirce and Jastrow’s work in the broader context of nineteenth-century discussions about the status of psychology as a science and emphasize the role of measurement and experiment in determining that status. Through our re-enactment of the experiment, we analyze the experiment’s methodology, which features (...)
  23. Studying Neutrosophic Variables. A. A. Salama & Rafif Alhabib (eds.) - 2020 - Los Angeles, CA, USA: Nova Science Publishers.
    We present in this paper the neutrosophic randomized variables, which are a generalization of the classical random variables obtained from the application of neutrosophic logic (a new nonclassical logic founded by the American philosopher and mathematician Florentin Smarandache, who introduced it as a generalization of fuzzy logic, especially intuitionistic fuzzy logic) to classical random variables. The neutrosophic random variable is changed because of the randomization, the indeterminacy and the values it takes, which represent the (...)
  24. Permutation of UTME multiple-choice test items on performance in use of English and mathematics among prospective higher education students. Bassey Asuquo Bassey, Isaac Ofem Ubi, German E. Anagbogu & Valentine Joseph Owan - 2020 - Journal of Social Sciences Research 6 (4):483-493.
    In an attempt to curtail examination malpractice, the Joint Admission and Matriculation Board (JAMB) has been generating different paper types with a different order of test items in the Unified Tertiary Matriculation Examination (UTME). However, the permutation of test items may compromise students’ performance unintentionally because constructive suggestions in theory and practice recommend that test items be sequenced in ascending order of difficulty. This study used data collected from a random sample of 1,226 SSIII students to ascertain whether the permutation (...)
  25. Intentional Sampling by Goal Optimization with Decoupling by Stochastic Perturbation. Julio Michael Stern, Marcelo de Souza Lauretto, Fabio Nakano & Carlos Alberto de Braganca Pereira - 2012 - AIP Conference Proceedings 1490:189-201.
    Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing arguments and debates. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic (...)
  26. (1 other version) Swimming upstream: navigating ethical practices in the creation of a participatory youth media workshop. Myra Margolin - 2009 - Les ateliers de l'éthique/The Ethics Forum 4 (1):105-116.
    Despite the growing popularity of participatory video as a tool for facilitating youth empowerment, the methodology and impacts of the practice are extremely understudied. This paper describes a study design created to examine youth media methodology and the ethical dilemmas that arose in its attempted implementation. Specifically, elements that added “rigor” to the study (i.e., randomization, pre- and post-measures, and an intensive interview) conflicted with the fundamental tenets of youth participation. The paper concludes with suggestions for (...)
  27. Review of For the Common Good: Philosophical Foundations of Research Ethics. [REVIEW] Douglas MacKay - 2022 - Kennedy Institute of Ethics Journal 32 (3):13-28.
    The principal goal of Alex John London's For the Common Good is to "articulate a new vision for the philosophical foundations of research ethics" which "moves issues of justice from the periphery of the field to the very center." At the core of this new vision is an understanding of research as a "collaborative social activity between free and equal persons," which aims to develop the knowledge public institutions require to establish and maintain a social order in which people may (...)