
Citations of:

Superintelligence as superethical

In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. Oxford University Press. pp. 322-337 (2017)

  • Is it good for them too? Ethical concern for the sexbots.Steve Petersen - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press. pp. 155-171.
    In this chapter I'd like to focus on a small corner of sexbot ethics that is rarely considered elsewhere: the question of whether and when being a sexbot might be good---or bad---*for the sexbot*. You might think this means you are in for a dry sermon about the evils of robot slavery. If so, you'd be wrong; the ethics of robot servitude are far more complicated than that. In fact, if the arguments here are right, designing a robot to serve (...)
  • Fully Autonomous AI.Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Machines learning values.Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, (...)
  • Artificial wisdom: a philosophical framework.Cheng-Hung Tsai - 2020 - AI and Society:937-944.
    Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is (...)
  • Computational Goals, Values and Decision-Making.Louise A. Dennis - 2020 - Science and Engineering Ethics 26 (5):2487-2495.
    Considering the popular framing of an artificial intelligence as a rational agent that always seeks to maximise its expected utility, referred to as its goal, one of the features attributed to such rational agents is that they will never select an action which will change their goal. Therefore, if such an agent is to be friendly towards humanity, one argument goes, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig argues, in contrast, that (...)
  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen.André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence in order to make sense of our world of rules, regulations, and values; on the other hand, they need it in order to be accepted by those around them. But how can moral competence be implemented in robots? What philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanist-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)