Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence
In AISB Annual Convention 2019 -- 10th Symposium on AI & Games (2019)
Abstract
The so-called Baldwin Effect describes, in general terms, how learning,
as a form of ontogenetic adaptation, can influence the process of
phylogenetic adaptation, or evolution. This idea has also been carried
over into computation, where evolution and learning serve as computational
metaphors, including in the evolution of neural networks. This paper
presents a technique called evolving self-taught neural networks --
neural networks that can teach themselves without external supervision
or reward. The self-taught neural network is intrinsically motivated.
Moreover, the self-taught neural network is the product of the
interplay between evolution and learning. We simulate a multi-agent
system in which neural networks control autonomous agents. These
agents must forage for resources and compete for their own survival.
Experimental results show that the interaction between evolution and
the ability to teach oneself in self-taught neural networks outperforms
either evolution or self-teaching alone. More specifically, the
emergence of an intelligent foraging strategy is also demonstrated
through that interaction. Indications for future work on evolving
neural networks are also presented.
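The interplay the abstract describes -- evolution selecting over agents that also adapt during their lifetime by learning from an internal teaching signal rather than an external reward -- can be illustrated with a toy sketch. This is not the paper's implementation: the scalar "networks", the task target, and all hyperparameters below are illustrative assumptions. Each genome carries an action weight and a teacher weight; during its lifetime the agent moves its action toward its own teacher's output (self-teaching), and selection then acts on post-learning performance (Baldwinian, since learned changes are not written back into the genome).

```python
import random

random.seed(1)

TARGET = 0.7              # hypothetical optimal action for this toy task (assumption)
POP, GENS = 30, 40        # population size and generation count (assumptions)
LIFETIME_STEPS, LR = 10, 0.3
MUT = 0.1                 # mutation standard deviation

def fitness(action_w):
    # Task performance of an agent: the closer its action is to TARGET, the better.
    return -abs(action_w - TARGET)

def lifetime_learning(genome):
    # Self-teaching: the agent nudges its action weight toward the output of
    # its own evolved teacher weight. No external supervision or reward is used.
    action_w, teacher_w = genome
    for _ in range(LIFETIME_STEPS):
        action_w += LR * (teacher_w - action_w)
    return action_w

def evolve():
    # Standard truncation-selection evolutionary loop over (action, teacher) genomes.
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(POP)]
    for _ in range(GENS):
        # Selection acts on performance AFTER lifetime learning (Baldwin Effect):
        # the learned action weight itself is never copied into offspring.
        scored = sorted(pop, key=lambda g: fitness(lifetime_learning(g)), reverse=True)
        parents = scored[:POP // 2]
        pop = [(a + random.gauss(0, MUT), t + random.gauss(0, MUT))
               for a, t in parents for _ in range(2)]
    best = max(pop, key=lambda g: fitness(lifetime_learning(g)))
    return fitness(lifetime_learning(best))

print(evolve())
```

In this sketch, self-teaching alone cannot find the target (the teacher signal is arbitrary at birth), and evolution alone would have to tune the action weight directly; their combination lets selection shape the teacher so that lifetime learning reliably pulls agents toward good behaviour.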
PhilPapers/Archive ID
LEESN-3
Upload history
Archival date: 2019-06-01