Results for 'AI risk'

683 found
  1. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
  2. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
  3. New Developments in the Philosophy of AI.Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    4 citations
  4. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. Time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
  5. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
  6. Global Solutions Vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  7. Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real (...)
  8. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both (...)
    2 citations
  9. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyse the convergent drives of any future AI, an approach started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    2 citations
  10. Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons.Alexey Turchin - manuscript
    Recently, criticisms of autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to (...)
    1 citation
  11. First Human Upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI which is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  12. Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, (...)
    2 citations
  13. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology.Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to a single pandemic is small, as it would not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic killing a given person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously would kill everyone. Such situations cannot happen naturally, (...)
  14. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
    2 citations
  15. Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  16. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (a SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  17. Saliva Ontology: An Ontology-Based Framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    1 citation
  18. Bioinformatics Advances in Saliva Diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
  19. Towards a Body Fluids Ontology: A Unified Application Ontology for Basic and Translational Science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  20. Transparent, Explainable, and Accountable AI for Robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    3 citations
  21. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2018 - AI and Society:1-17.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    2 citations
  22. Pharmaceutical Risk Communication: Sources of Uncertainty and Legal Tools of Uncertainty Management.Barbara Osimani - 2010 - Health Risk and Society 12 (5):453-69.
    Risk communication has been generally categorized as a warning act, which is performed in order to prevent or minimize risk. On the other hand, risk analysis has also underscored the role played by information in reducing uncertainty about risk. In both approaches the safety aspects related to the protection of the right to health are in focus. However, it seems that there are cases where a risk cannot possibly be avoided or uncertainty reduced, this is (...)
    2 citations
  23. Assessing Capability Instead of Achieved Functionings in Risk Analysis.Colleen Murphy & Paolo Gardoni - 2010 - Journal of Risk Research 13 (2):137-147.
    A capability approach has been proposed to risk analysis, where risk is conceptualized as the probability that capabilities are reduced. Capabilities refer to the genuine opportunities of individuals to achieve valuable doings and beings, such as being adequately nourished. Such doings and beings are called functionings. A current debate in risk analysis and other fields where a capability approach has been developed concerns whether capabilities or actual achieved functionings should be used. This paper argues that in risk analysis the consequences of hazardous scenarios should be conceptualized in terms of capabilities, not achieved functionings. Furthermore, the paper proposes a method for assessing capabilities, which considers the levels of achieved functionings of other individuals with similar boundary conditions. The capability of an individual can then be captured statistically based on the variability of the achieved functionings over the considered population.
  24. Report From a Socratic Dialogue on the Concept of Risk.Erik Persson - 2005 - In Kristina Blennow (ed.), Uncertainty and Active Risk Management in Agriculture and Forestry. Alnarp, Sweden: SLU. pp. 35-39.
    The term ’risk’ is used in a wide range of situations, but there is no real consensus on what it means. ‘Risk’ is often stipulatively defined as “a probability for the occurrence of a negative event” or something similar. This formulation is however not very informative, and it fails to capture many of our intuitions about the concept of risk. One way of trying to find a common definition of a term within a group is to use (...)
  25. Risk in a Simple Temporal Framework for Expected Utility Theory and for SKAT, the Stages of Knowledge Ahead Theory.Robin Pope & Reinhard Selten - 2010/2011 - Risk and Decision Analysis 2 (1):5-32.
    The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast the umbrella theory proffered by Pope that she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
  26. Greenwash and Green Trust: The Mediation Effects of Green Consumer Confusion and Green Perceived Risk[REVIEW]Yu-Shan Chen & Ching-Hsun Chang - 2013 - Journal of Business Ethics 114 (3):489-500.
    The paper explores the influence of greenwash on green trust and discusses the mediation roles of green consumer confusion and green perceived risk. The research object of this study focuses on Taiwanese consumers who have the purchase experience of information and electronics products in Taiwan. This research employs an empirical study by means of the structural equation modeling. The results show that greenwash is negatively related to green trust. Therefore, this study suggests that companies must reduce their greenwash behaviors (...)
    15 citations
  27. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2019 - Synthese:arXiv:1901.02918v1.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    1 citation
  28. Risk Aversion and the Long Run.Johanna Thoma - 2018 - Ethics 129 (2):230-253.
    This article argues that Lara Buchak’s risk-weighted expected utility theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the (...)
    3 citations
  29. Principled Utility Discounting Under Risk.Kian Mintz-Woo - 2019 - Moral Philosophy and Politics 6 (1):89-112.
    Utility discounting in intertemporal economic modelling has been viewed as problematic, both for descriptive and normative reasons. However, positive utility discount rates can be defended normatively; in particular, it is rational for future utility to be discounted to take into account model-independent outcomes when decision-making under risk. The resultant values will tend to be smaller than descriptive rates under most probability assignments. This also allows us to address some objections that intertemporal considerations will be overdemanding. A principle for utility (...)
  30. Measuring Belief and Risk Attitude.Sven Neth - 2019 - Electronic Proceedings in Theoretical Computer Science 297:252–272.
    Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, (...)
  31. Bioeconomics, Biopolitics and Bioethics: Evolutionary Semantics of Evolutionary Risk (Anthropological Essay).V. T. Cheshko - 2016 - Bioeconomics and Ecobiopolitic (1 (2)).
    An attempt at a trans-disciplinary analysis of the evolutionary value of bioethics is presented. Currently, there are High Tech schemes for the management and control of the genetic, socio-cultural and mental evolution of Homo sapiens (NBIC, High Hume, etc.). Biological, socio-cultural and technological factors are included in the fabric of modern theories and technologies of social and political control and manipulation. However, the basic philosophical and ideological systems of modern civilization formed mainly in the 17th–18th centuries and are experiencing ever-increasing and destabilizing risk-taking pressure from scientific theories and technological realities. The diagnostic signs of the new era once again split into the technological and natural sciences on the one hand, and the humanitarian and anthropological sciences on the other. The natural-science series corresponds to a system of technological risks to be solved using algorithms and established safety procedures. The socio-humanitarian series presents anthropological risk. The phenomenon of global bioethics is regarded as a systemic socio-cultural adaptation to technology-driven human evolution. A conceptual model for the meta-structure of the stable evolutionary strategy of Homo sapiens (SESH) is proposed. According to the model, SESH is composed of genetic, socio-cultural and techno-rationalist modules, with global bioethics as a tool to minimize existential evolutionary risk. The existence of objectively descriptive and value-teleological parameters of humanity's evolutionary trajectory in the modern technological and civilizational context (1), and the genesis of global bioethics as a systemic social adaptation to ensure self-identity (2), are postulated.
  32. Action, Deontology, and Risk: Against the Multiplicative Model.Sergio Tenenbaum - 2017 - Ethics 127 (3):674-707.
    Deontological theories face difficulties in accounting for situations involving risk; the most natural ways of extending deontological principles to such situations have unpalatable consequences. In extending ethical principles to decision under risk, theorists often assume the risk must be incorporated into the theory by means of a function from the product of probability assignments to certain values. Deontologists should reject this assumption; essentially different actions are available to the agent when she cannot know that a certain act (...)
    5 citations
  33. A Value-Sensitive Design Approach to Intelligent Agents.Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. Boca Raton, FL: CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides (...)
    3 citations
  34. Is Risk Aversion Irrational?H. Orri Stefánsson - forthcoming - Synthese:1-13.
    A moderately risk averse person may turn down a 50/50 gamble that either results in her winning $200 or losing $100. Such behaviour seems rational if, for instance, the pain of losing $100 is felt more strongly than the joy of winning $200. The aim of this paper is to examine an influential argument that some have interpreted as showing that such moderate risk aversion is irrational. After presenting an axiomatic argument that I take to be the strongest (...)
  35. Limited Aggregation and Risk.Seth Lazar - 2018 - Philosophy and Public Affairs 46 (2):117-159.
    Many of us believe (1) Saving a life is more important than averting any number of headaches. But what about risky cases? Surely: (2) In a single choice, if the risk of death is low enough, and the number of headaches at stake high enough, one should avert the headaches rather than avert the risk of death. And yet, if we will face enough iterations of cases like that in (2), in the long run some of those small (...)
  36. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2017 - In Vincent Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  37. Entitlement, Epistemic Risk and Scepticism.Luca Moretti - forthcoming - Episteme.
    Crispin Wright maintains that the architecture of perceptual justification is such that we can acquire justification for our perceptual beliefs only if we have antecedent justification for ruling out any sceptical alternative. Wright contends that this principle doesn’t elicit scepticism, for we are non-evidentially entitled to accept the negation of any sceptical alternative. Sebastiano Moruzzi has challenged Wright’s contention by arguing that since our non-evidential entitlements don’t remove the epistemic risk of our perceptual beliefs, they don’t actually enable us (...)
  38. Revisiting Risk and Rationality: A Reply to Pettigrew and Briggs.Lara Buchak - 2015 - Canadian Journal of Philosophy 45 (5):841-862.
    I have claimed that risk-weighted expected utility maximizers are rational, and that their preferences cannot be captured by expected utility theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my (...)
    2 citations
  39. Why High-Risk, Non-Expected-Utility-Maximising Gambles Can Be Rational and Beneficial: The Case of HIV Cure Studies.Lara Buchak - 2016 - Journal of Medical Ethics (2):1-6.
    Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol (...)
    3 citations
  40. Risk and Luck in Medical Ethics.Donna Dickenson - 2003 - Polity.
    This book examines the moral luck paradox, relating it to Kantian, consequentialist and virtue-based approaches to ethics. It also applies the paradox to areas in medical ethics, including allocation of scarce medical resources, informed consent to treatment, withholding life-sustaining treatment, psychiatry, reproductive ethics, genetic testing and medical research. If risk and luck are taken seriously, it might seem to follow that we cannot develop any definite moral standards, that we are doomed to moral relativism. However, Dickenson offers strong counter-arguments (...)
    16 citations
  41. Risk and Disease.Peter H. Schwartz - 2008 - Perspectives in Biology and Medicine 51 (3):320-334.
    The way that diseases such as high blood pressure (hypertension), high cholesterol, and diabetes are defined is closely tied to ideas about modifiable risk. In particular, the threshold for diagnosing each of these conditions is set at the level where future risk of disease can be reduced by lowering the relevant parameter (of blood pressure, low-density lipoprotein, or blood glucose, respectively). In this article, I make the case that these criteria, and those for diagnosing and treating other “risk-based diseases,” reflect an unfortunate trend towards reclassifying risk as disease. I closely examine stage 1 hypertension and high cholesterol and argue that many patients diagnosed with these “diseases” do not actually have a pathological condition. In addition, though, I argue that the fact that they are risk factors, rather than diseases, does not diminish the importance of treating them, since there is good evidence that such treatment can reduce morbidity and mortality. For both philosophical and ethical reasons, however, the conditions should not be labeled as pathological. The tendency to reclassify risk factors as diseases is an important trend to examine and critique.
    15 citations
  42. The Ethics of Information: Absolute Risk Reduction and Patient Understanding of Screening.Peter H. Schwartz & Eric M. Meslin - 2008 - Journal of General Internal Medicine 23 (6):867-870.
    Some experts have argued that patients should routinely be told the specific magnitude and absolute probability of potential risks and benefits of screening tests. This position is motivated by the idea that framing risk information in ways that are less precise violates the ethical principle of respect for autonomy and its application in informed consent or shared decision-making. In this Perspective, we consider a number of problems with this view that have not been adequately addressed. The most important challenges (...)
    8 citations
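    The framing issue at stake in this entry is, at bottom, simple arithmetic: the same trial result can be reported as a relative risk reduction, an absolute risk reduction, or a number needed to screen, with very different rhetorical force. A minimal sketch, using hypothetical screening-trial figures chosen purely for illustration:

```python
# Hypothetical 10-year disease-mortality figures, for illustration only.
control_risk = 0.03    # risk without screening
screened_risk = 0.024  # risk with screening

arr = control_risk - screened_risk  # absolute risk reduction
rrr = arr / control_risk            # relative risk reduction
nnt = 1 / arr                       # number needed to screen

# The same effect reads as a modest 0.6% absolute reduction,
# an impressive-sounding 20% relative reduction, or "screen 167
# people to prevent one death".
print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNS = {nnt:.0f}")
```

    The choice among these three numerically equivalent framings is exactly the kind of "less precise" presentation the entry's authors discuss.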
  43. A Merton Model of Credit Risk with Jumps.Hoang Thi Phuong Thao & Quan-Hoang Vuong - 2015 - Journal of Statistics Applications and Probability Letters 2 (2):97-103.
    In this note, we consider a Merton model for default risk, where the firm’s value is driven by a Brownian motion and a compound Poisson process.
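    The model described in this note lends itself to a short Monte Carlo sketch: the firm's log-value follows a Brownian motion with drift plus superimposed compound Poisson jumps, and (in the Merton setup) default occurs at maturity if the terminal firm value falls below the face value of debt. All parameter values below are hypothetical illustrations, not figures from the paper, and the jump-size law is an assumed normal distribution:

```python
import numpy as np

# Monte Carlo sketch of a Merton-type default model with jumps.
# Hypothetical parameters throughout; default iff V_T < D at maturity.
rng = np.random.default_rng(0)

V0, D, T = 100.0, 70.0, 1.0                  # firm value, debt face value, horizon
mu, sigma = 0.05, 0.20                       # drift and diffusion volatility
lam, jump_mu, jump_sigma = 0.5, -0.10, 0.15  # jump intensity, jump-size law
n_paths = 100_000

# Terminal log-value: GBM diffusion part plus the sum of N_T normal jumps,
# where N_T is Poisson(lam * T). Summed jumps are normal with mean
# n * jump_mu and standard deviation jump_sigma * sqrt(n), given n jumps.
n_jumps = rng.poisson(lam * T, n_paths)
jump_sum = rng.normal(jump_mu * n_jumps, jump_sigma * np.sqrt(n_jumps))
logV_T = (np.log(V0) + (mu - 0.5 * sigma**2) * T
          + sigma * np.sqrt(T) * rng.standard_normal(n_paths)
          + jump_sum)

default_prob = np.mean(np.exp(logV_T) < D)
print(f"Estimated default probability: {default_prob:.3f}")
```

    The negative mean jump size makes jumps a source of sudden downward moves, which is what distinguishes this model's default risk from the pure-diffusion Merton case.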
  44. The Social Nature of Engineering and its Implications for Risk Taking.Allison Ross & Nafsika Athanassoulis - 2010 - Science and Engineering Ethics 16 (1):147-168.
    Making decisions with an often significant element of risk seems to be an integral part of many of the projects of the diverse profession of engineering. Whether it be decisions about the design of products, manufacturing processes, public works, or developing technological solutions to environmental, social and global problems, risk taking seems inherent to the profession. Despite this, little attention has been paid to the topic and specifically to how our understanding of engineering as a distinctive profession might (...)
    4 citations
  45. STABLE ADAPTIVE STRATEGY of HOMO SAPIENS and EVOLUTIONARY RISK of HIGH TECH. Transdisciplinary Essay.Valentin Cheshko, Valery Glazko, Gleb Yu Kosovsky & Anna S. Peredyadenko (eds.) - 2015 - new publ.tech..
    The co-evolutionary concept of a three-modal stable evolutionary strategy of Homo sapiens is developed. The concept is based on the principle of evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the evolutionary path of human evolution are defined simultaneously by descriptive (evolutionary efficiency) and creative-teleological (evolutionary correctness) parameters, which cannot be instrumentally reduced to one another. The resulting values of both parameters define the trends of biological, social, cultural and techno-rationalistic human evolution through a two-gear mechanism: gene-cultural co-evolution and techno-humanitarian (...)
    3 citations
  46. Existential Risks: Exploring a Robust Risk Reduction Strategy.Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of (...)
    1 citation
  47. Risk and Tradeoffs.Lara Buchak - 2014 - Erkenntnis 79 (S6):1091-1117.
    The orthodox theory of instrumental rationality, expected utility (EU) theory, severely restricts the way in which risk-considerations can figure into a rational individual's preferences. It is argued here that this is because EU theory neglects an important component of instrumental rationality. This paper presents a more general theory of decision-making, risk-weighted expected utility (REU) theory, of which expected utility maximization is a special case. According to REU theory, the weight that each outcome gets in decision-making is not the (...)
    2 citations
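    Risk-weighted expected utility admits a compact computational statement: order a gamble's outcomes from worst to best and weight each successive utility increment by a risk function r applied to the probability of doing at least that well; r(p) = p recovers ordinary EU, while a convex r makes top outcomes count for less. A sketch under illustrative assumptions (a simple two-outcome gamble, identity utility, and r(p) = p² as an example of a risk-avoidant weighting):

```python
# Sketch of risk-weighted expected utility (REU).
# Outcomes are ordered worst-to-best; the utility increment at each
# step up is weighted by r(probability of getting that outcome or better).
# With r(p) = p this reduces to ordinary expected utility.

def reu(outcomes, probs, u, r):
    """REU of a finite gamble, given utility u and risk function r."""
    pairs = sorted(zip(outcomes, probs), key=lambda op: u(op[0]))
    utils = [u(x) for x, _ in pairs]
    ps = [p for _, p in pairs]
    total = utils[0]  # the worst outcome's utility is guaranteed
    for j in range(1, len(utils)):
        p_at_least = sum(ps[j:])  # chance of outcome j or better
        total += r(p_at_least) * (utils[j] - utils[j - 1])
    return total

gamble = ([0, 100], [0.5, 0.5])
eu = reu(*gamble, u=lambda x: x, r=lambda p: p)        # r(p) = p: plain EU
rwu = reu(*gamble, u=lambda x: x, r=lambda p: p ** 2)  # convex r: risk-avoidant
print(eu, rwu)  # 50.0 25.0
```

    The example makes the entry's point concrete: an agent with convex r values the 50-50 gamble at 25 rather than 50, a preference pattern EU theory can only mimic by distorting the utility function.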
  48. Risk Assessment Tools in Criminal Justice and Forensic Psychiatry: The Need for Better Data.Thomas Douglas, Jonathan Pugh, Illina Singh, Julian Savulescu & Seena Fazel - 2017 - European Psychiatry 42:134-137.
    Violence risk assessment tools are increasingly used within criminal justice and forensic psychiatry; however, there is little relevant, reliable and unbiased data regarding their predictive accuracy. We argue that such data are needed to (i) prevent excessive reliance on risk assessment scores, (ii) allow matching of different risk assessment tools to different contexts of application, (iii) protect against problematic forms of discrimination and stigmatisation, and (iv) ensure that contentious demographic variables are not prematurely removed from risk (...)
    1 citation
  49. On Risk and Rationality.Brad Armendt - 2014 - Erkenntnis 79 (S6):1-9.
    It is widely held that the influence of risk on rational decisions is not entirely explained by the shape of an agent’s utility curve. Buchak (Erkenntnis, 2013, Risk and rationality, Oxford University Press, Oxford, in press) presents an axiomatic decision theory, risk-weighted expected utility theory (REU), in which decision weights are the agent’s subjective probabilities modified by his risk-function r. REU is briefly described, and the global applicability of r is discussed. Rabin’s (Econometrica 68:1281–1292, 2000) calibration (...)
    3 citations
  50. Individual Differences in Moral Behaviour: A Role for Response to Risk and Uncertainty?Colin J. Palmer, Bryan Paton, Trung T. Ngo, Richard H. Thomson, Jakob Hohwy & Steven M. Miller - 2013 - Neuroethics 6 (1):97-103.
    Investigation of neural and cognitive processes underlying individual variation in moral preferences is underway, with notable similarities emerging between moral- and risk-based decision-making. Here we specifically assessed moral distributive justice preferences and non-moral financial gambling preferences in the same individuals, and report an association between these seemingly disparate forms of decision-making. Moreover, we find this association between distributive justice and risky decision-making exists primarily when the latter is assessed with the Iowa Gambling Task. These findings are consistent with neuroimaging (...)
    1 citation
1 — 50 / 683