Citations
  1. Human-aligned artificial intelligence is a multiobjective problem. Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin & Jane Mummery - 2018 - Ethics and Information Technology 20 (1): 27-40.
    As the capabilities of artificial intelligence systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility paradigm provides insufficient (...)
    10 citations
  2. Generating diverse plans to handle unknown and partially known user preferences. Tuan Anh Nguyen, Minh Do, Alfonso Emilio Gerevini, Ivan Serina, Biplav Srivastava & Subbarao Kambhampati - 2012 - Artificial Intelligence 190 (C): 1-31.
    1 citation
  3. An approach to efficient planning with numerical fluents and multi-criteria plan quality. Alfonso E. Gerevini, Alessandro Saetti & Ivan Serina - 2008 - Artificial Intelligence 172 (8-9): 899-944.
    4 citations
  4. Selecting goals in oversubscription planning using relaxed plans. Angel García-Olaya, Tomás de la Rosa & Daniel Borrajo - 2021 - Artificial Intelligence 291 (C): 103414.