References
  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
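    A toy illustration of the tractability point in the entry above (my own sketch, not code from Stenseke's paper; the action names and utility numbers are hypothetical): exact consequentialist optimization over a planning horizon is exponential in that horizon, which is one reason bounded agents, human or artificial, must fall back on heuristics.

      # Hypothetical example: brute-force "moral planning" blows up exponentially.
      from itertools import product

      # Toy per-action moral utilities (made-up numbers for illustration).
      UTILITY = {"help": 2, "wait": 0, "trade": 1, "harm": -5}

      def best_plan(actions, horizon):
          # Exhaustive consequentialist search: |actions| ** horizon sequences.
          return max(product(actions, repeat=horizon),
                     key=lambda plan: sum(UTILITY[a] for a in plan))

      print(best_plan(list(UTILITY), horizon=3))   # 4**3 = 64 plans: instant
      # At horizon 30 the same call would enumerate 4**30 ≈ 10**18 plans.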
  • Natural Curiosity. Jennifer Nagel - forthcoming - In Artūrs Logins & Jacques Henri Vollet (eds.), Putting Knowledge to Work: New Directions for Knowledge-First Epistemology. Oxford: Oxford University Press.
    Curiosity is evident in humans of all sorts from early infancy, and it has also been said to appear in a wide range of other animals, including monkeys, birds, rats, and octopuses. The classical definition of curiosity as an intrinsic desire for knowledge may seem inapplicable to animal curiosity: one might wonder how and indeed whether a rat could have such a fancy desire. Even if rats must learn many things to survive, one might expect their learning must be driven (...)
  • Can reinforcement learning learn itself? A reply to 'Reward is enough'. Samuel Allen Alexander - 2021 - CIFMA.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence that Silver et al. did not consider in their paper, namely, the aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  • The evaluative mind. Julia Haas - forthcoming - In Mind Design III.
    I propose that the successes and contributions of reinforcement learning urge us to see the mind in a new light, namely, to recognise that the mind is fundamentally evaluative in nature.
  • Artificial Intelligence Ethics and Safety: practical tools for creating "good" models. Nicholas Kluge Corrêa
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness of the importance of the ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  • Ética e Segurança da Inteligência Artificial: ferramentas práticas para se criar "bons" modelos. Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the goal of promoting awareness of the importance of the ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  • Extended subdomains: a solution to a problem of Hernández-Orallo and Dowe. Samuel Allen Alexander - 2021 - In Samuel Allen Alexander & Marcus Hutter (eds.), AGI.
    This paper concerns the general theory of measuring or estimating social intelligence via benchmarks. Hernández-Orallo and Dowe described a problem with certain proposed intelligence measures. The problem suggests that those intelligence measures might not accurately capture social intelligence. We argue that Hernández-Orallo and Dowe's problem is even more general than they stated, applying to many subdomains of AGI, not just the one subdomain in which they stated it. We then propose a solution. In our solution, instead (...)
  • The autonomous choice architect. Stuart Mills & Henrik Skaug Sætra - forthcoming - AI and Society:1-13.
    Choice architecture describes the environment in which choices are presented to decision-makers. In recent years, public and private actors have looked at choice architecture with great interest as they seek to influence human behaviour. These actors are typically called choice architects. Increasingly, however, this role of architecting choice is not performed by a human choice architect, but by an algorithm or artificial intelligence, powered by a stream of Big Data and infused with an objective it has been programmed to maximise. We (...)
  • Action control, forward models and expected rewards: representations in reinforcement learning. Jami Pekkanen, Jesse Kuokkanen, Otto Lappi & Anna-Mari Rusanen - 2021 - Synthese 199 (5-6):14017-14033.
    The fundamental cognitive problem for active organisms is to decide what to do next in a changing environment. In this article, we analyze motor and action control in computational models that utilize reinforcement learning (RL) algorithms. In reinforcement learning, action control is governed by an action selection policy that maximizes the expected future reward in light of a predictive world model. In this paper we argue that RL provides a way to explicate the so-called action-oriented views of cognitive systems in (...)
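    For readers unfamiliar with the formalism this entry appeals to, the standard textbook statement of the RL action-selection objective (a generic formulation, not notation taken from the article itself) is that the optimal policy maximizes expected discounted return under the agent's, possibly learned, transition model:

      \pi^{*} \;=\; \arg\max_{\pi} \; \mathbb{E}_{\pi,\,T}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_{t}, a_{t}) \right], \qquad 0 \le \gamma < 1,

    where T is the transition model, r the reward function, and \gamma the discount factor; the predictive world model referred to in the abstract plays the role of T.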
  • Human’s Intuitive Mental Models as a Source of Realistic Artificial Intelligence and Engineering. Jyrki Suomala & Janne Kauttonen - 2022 - Frontiers in Psychology 13.
    Despite the success of artificial intelligence, we are still far from AI that models the world as humans do. This study focuses on explaining human behavior from the perspective of intuitive mental models. We describe how behavior arises in biological systems and how a better understanding of these biological systems can lead to advances in the development of human-like AI. Humans can build intuitive models of physical, social, and cultural situations. In addition, we follow Bayesian inference to combine intuitive models and (...)
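    The Bayesian machinery this abstract invokes is the standard posterior update (a generic formula, not notation from the paper): given an intuitive model m and observed data d,

      P(m \mid d) \;=\; \frac{P(d \mid m)\, P(m)}{\sum_{m'} P(d \mid m')\, P(m')},

    so the prior plausibility of a mental model is combined with how well that model predicts what is actually observed.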
  • A Functional Contextual Account of Background Knowledge in Categorization: Implications for Artificial General Intelligence and Cognitive Accounts of General Knowledge. Darren J. Edwards, Ciara McEnteggart & Yvonne Barnes-Holmes - 2022 - Frontiers in Psychology 13.
    Psychology has benefited from an enormous wealth of knowledge about processes of cognition in relation to how the brain organizes information. Within the categorization literature, this behavior is often explained through theories of memory construction, called exemplar theory and prototype theory, which are typically based on similarity or rule functions as explanations of how categories emerge. Although these theories work well at modeling highly controlled stimuli in laboratory settings, they often perform less well outside of these settings, such as explaining (...)
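    The contrast between the two classical theories named in this entry is easy to state computationally (a generic textbook sketch, not the authors' functional-contextual account; the similarity function and data below are illustrative): exemplar theory scores a stimulus by its summed similarity to every stored instance, while prototype theory scores it by similarity to the category average.

      import numpy as np

      def similarity(x, y, c=1.0):
          # Shepard-style similarity: exponential decay with distance.
          return np.exp(-c * np.linalg.norm(x - y))

      def exemplar_score(x, stored):
          # Exemplar theory: summed similarity to all stored instances.
          return sum(similarity(x, e) for e in stored)

      def prototype_score(x, stored):
          # Prototype theory: similarity to the category's mean member.
          return similarity(x, stored.mean(axis=0))

      rng = np.random.default_rng(0)
      cats = {"A": rng.normal(0, 0.5, (20, 2)), "B": rng.normal(2, 0.5, (20, 2))}
      x = np.array([0.3, 0.4])
      print(max(cats, key=lambda c: exemplar_score(x, cats[c])))   # -> "A"
      print(max(cats, key=lambda c: prototype_score(x, cats[c])))  # -> "A"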
  • Provably Safe Artificial General Intelligence via Interactive Proofs. Kristen Carlson - 2021 - Philosophies 6 (4):83.
    Methods are currently lacking to prove artificial general intelligence (AGI) safety. An AGI 'hard takeoff' is possible, in which a first-generation AGI_1 rapidly triggers a succession of more powerful AGI_n that differ dramatically in their computational capabilities (AGI_n ≪ AGI_{n+1}). No proof exists that AGI will benefit humans or of a sound value-alignment method. Numerous paths toward human extinction or subjugation have been identified. We suggest that probabilistic proof methods are the fundamental paradigm for (...)
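    The appeal of the probabilistic proof methods this abstract points to comes from soundness amplification (a generic interactive-proof bound, not a result specific to Carlson's paper): if a single round of an interactive proof accepts a false claim with probability at most 1/2, then k independent rounds accept it with probability at most

      \Pr[\text{accept a false claim}] \;\le\; 2^{-k},

    so, for example, k = 40 rounds already pushes the error below 10^{-12}, without the verifier ever needing to match the prover's computational power.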