6 found
Samuel Allen Alexander
Ohio State University (PhD)
  1. The Archimedean Trap: Why Traditional Reinforcement Learning Will Probably Not Yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
  2. Reward-Punishment Symmetric Universal Intelligence. Samuel Allen Alexander & Marcus Hutter - forthcoming - In AGI-21.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
  3. Can Reinforcement Learning Learn Itself? A Reply to 'Reward is Enough'. Samuel Allen Alexander - forthcoming - CIFMA 2021.
    In their paper 'Reward is enough', Silver et al. conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence that Silver et al. did not consider in their paper, namely, the aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  4. Extending Environments To Measure Self-Reflection In Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - manuscript
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus, an agent's self-reflection ability can be numerically estimated by running the agent through a battery (...)
  5. Did Socrates Know How to See Your Middle Eye? Samuel Allen Alexander & Christopher Yang - 2021 - The Reasoner 15 (4):30-31.
    We describe in our own words a visual phenomenon first described by Gallagher and Tsuchiya in 2020. The key to the phenomenon (as we describe it) is to direct one’s left eye at the image of one's left eye, while simultaneously directing one's right eye at the image of one's right eye. We suggest that one would naturally arrive at this phenomenon if one took a sufficiently literal reading of certain words of Socrates preserved in Plato's Alcibiades. We speculate that (...)
  6. An Alternative Construction of Internodons: The Emergence of a Multi-Level Tree of Life. Samuel Allen Alexander, Arie de Bruin & D. J. Kornet - 2015 - Bulletin of Mathematical Biology 77 (1):23-45.
    Internodons are a formalization of Hennig's concept of species. We present an alternative construction of internodons by imposing a tree structure on the genealogical network. We prove that the segments (trivial unary trees) of this tree structure are precisely the internodons. We obtain the following spin-offs. First, the generated tree turns out to be an organismal tree of life. Second, this organismal tree is homeomorphic to the phylogenetic Hennigian species tree of life, implying the discovery of a multi-level tree of life: (...)