  • Existential risk and the justice turn in bioethics. Paolo Corsico - forthcoming - Journal of Medical Ethics.
    ‘Who argues what’ bears a certain relevance in relation to what is being argued. We are entitled to know those personal circumstances which play a significant role in relation to the argument one supports, so that we can take those circumstances into consideration when evaluating their argument. This is why journals have conflict of interest declarations, and why we value reflexivity in the social sciences. We also often perform double-blind peer review. We recognise that the evaluation of certain statements of (...)
  • Artificial intelligence risks, attention allocation and priorities. Aorigele Bao & Yi Zeng - forthcoming - Journal of Medical Ethics.
    Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating for a balanced communication of AI’s risks and benefits and urging serious consideration of other urgent ethical issues alongside X-Risk.1 Building on this analysis, we argue for the necessity of acknowledging the unique attention-grabbing attributes of X-Risk and leveraging these traits to foster a comprehensive focus on AI ethics. First, we need to consider a discontinuous situation that is overlooked in the article (...)
  • Address health inequities among human beings is an ethical matter of urgency, whether or not to develop more powerful AI. Hongnan Ye - forthcoming - Journal of Medical Ethics.
    In their article,1 Jecker et al highlight a widespread and hotly debated issue in the current application of artificial intelligence (AI) in medicine: whether we should develop more powerful AI. There are many perspectives on this question. I would like to address it from the perspective of the fundamental purpose of medicine. Since its inception, medicine has been dedicated to alleviating human suffering and ensuring health equity. For thousands of years, we have made great efforts and conducted many investigations to (...)
  • AI diagnoses terminal illness care limits: just, or just stingy? Leonard Michael Fleck - forthcoming - Journal of Medical Ethics.
    I agree with Jecker et al that “the headline-grabbing nature of existential risk (X-risk) diverts attention away from immediate artificial intelligence (AI) threats…”1 Focusing on very long-term speculative risks associated with AI is both ethically distracting and ethically dangerous, especially in a healthcare context. More specifically, AI in healthcare is generating healthcare justice challenges that are real, imminent and pervasive. These are challenges generated by AI that deserve immediate ethical attention, more than any X-risk issues in the distant future. Almost (...)
  • A Sleight of Hand. Emma Tumilty - forthcoming - Journal of Medical Ethics.
    Jecker et al 1 offer a valuable analysis of risk discussion in relation to Artificial Intelligence (AI) and in the context of longtermism generally, a philosophy prevalent among technocrats and tech billionaires who significantly shape the direction of technological progress in our world. Longtermists accomplish a significant justificatory win when they use a utilitarian calculation that pits all future humanity against concerns about current humans and societies. By making this argument, they are able to have abstract (and uncertain) benefits for (...)
  • Stoking fears of AI X-Risk (while forgetting justice here and now). Nancy S. Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky & Anita Ho - forthcoming - Journal of Medical Ethics.
    We appreciate the helpful commentaries on our paper, ‘AI and the falling sky: interrogating X-Risk’.1 We agree with many points commentators raise, which opened our eyes to concerns we had not previously considered. This reply focuses on the tension many commentators noted between AI’s existential risks (X-Risks) and justice here and now. In ‘Existential risk and the justice turn in bioethics’, Corsico frames the tension between AI X-Risk and justice here and now as part of a larger shift within bioethics.2 (...)