Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence

In AISB Annual Convention 2019 -- 10th Symposium on AI & Games (2019)

Abstract

The so-called Baldwin Effect describes how learning, as a form of ontogenetic adaptation, can influence phylogenetic adaptation, or evolution. This idea has also been carried into computation, where evolution and learning serve as computational metaphors, including in the evolution of neural networks. This paper presents a technique called evolving self-taught neural networks – neural networks that can teach themselves without external supervision or reward. The self-taught neural network is intrinsically motivated and is the product of the interplay between evolution and learning. We simulate a multi-agent system in which neural networks control autonomous agents that must forage for resources and compete for their survival. Experimental results show that the interaction between evolution and the ability to teach oneself in self-taught neural networks outperforms either evolution or self-teaching alone. More specifically, an intelligent foraging strategy emerges from this interaction. Directions for future work on evolving neural networks are also presented.
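
The paper itself provides no code; the following is a minimal sketch of the kind of evolve-plus-self-teach loop the abstract describes, under several assumptions of my own. The two-module network (an action module plus a teaching module whose output serves as the learning target), the delta-rule lifetime update, the truncation-selection loop, and the toy environment `toy_env` are illustrative stand-ins rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters -- not taken from the paper.
N_INPUTS, N_HIDDEN, N_OUTPUTS = 4, 6, 2
POP_SIZE, GENERATIONS, LIFETIME = 20, 50, 100
LEARN_RATE, MUT_STD = 0.05, 0.1


def make_genome():
    """A genome encodes initial weights: a shared input layer, an action
    module, and a self-teaching module that produces the learning target."""
    return {
        "shared": rng.normal(0.0, 1.0, (N_INPUTS, N_HIDDEN)),
        "action": rng.normal(0.0, 1.0, (N_HIDDEN, N_OUTPUTS)),
        "teach": rng.normal(0.0, 1.0, (N_HIDDEN, N_OUTPUTS)),
    }


def forward(genome, action_w, obs):
    hidden = np.tanh(obs @ genome["shared"])
    action = np.tanh(hidden @ action_w)           # what the agent actually does
    target = np.tanh(hidden @ genome["teach"])    # self-generated teaching signal
    return hidden, action, target


def lifetime_fitness(genome, env_step):
    """Run one agent for a lifetime: act, then nudge the action weights toward
    the teaching module's output (a delta rule). No external supervision or
    reward drives the learning; fitness is only used by evolution."""
    action_w = genome["action"].copy()            # learned during life, not inherited back
    fitness = 0.0
    obs, _ = env_step(None)                       # initial observation
    for _ in range(LIFETIME):
        hidden, action, target = forward(genome, action_w, obs)
        action_w += LEARN_RATE * np.outer(hidden, target - action)  # self-teaching update
        obs, reward = env_step(action)
        fitness += reward
    return fitness


def mutate(genome):
    return {k: w + rng.normal(0.0, MUT_STD, w.shape) for k, w in genome.items()}


def toy_env():
    """A trivial stand-in for the foraging world: reward is higher when the
    action vector points toward a fixed 'food' direction."""
    food = rng.normal(0.0, 1.0, N_OUTPUTS)

    def step(action):
        obs = rng.normal(0.0, 1.0, N_INPUTS)
        reward = 0.0 if action is None else float(action @ food)
        return obs, reward

    return step


def evolve():
    """Truncation-selection evolutionary loop over the genomes (initial weights)."""
    pop = [make_genome() for _ in range(POP_SIZE)]
    env_step = toy_env()
    for _ in range(GENERATIONS):
        pop.sort(key=lambda g: lifetime_fitness(g, env_step), reverse=True)
        elite = pop[: POP_SIZE // 2]
        pop = elite + [mutate(elite[rng.integers(len(elite))])
                       for _ in range(POP_SIZE - len(elite))]
    return pop[0]


if __name__ == "__main__":
    best = evolve()
    print("best lifetime fitness:", lifetime_fitness(best, toy_env()))
```

In the paper's experiments the agents instead inhabit a simulated multi-agent foraging world and compete for resources; the toy environment above only stands in for that setup so the sketch runs end to end.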

Author Profiles

Nam Le
University College Dublin
