How the Intrinsic Representation of Artificial Intelligence is Possible

Journal of Dialectics of Nature (No.247):41-47 (2019)

Abstract

Searle and others hold that a symbol's meaning is derived from its users' assignment and interpretation rather than being intrinsic to the symbol itself, and that no object is a symbol by virtue of its physics alone. This criticism gave rise to the "symbol grounding problem" in the philosophy of artificial intelligence. Computationalists reply that the fact that explicit symbols' semantic contents are derived does not entail that all other kinds of symbols are likewise extrinsic. How, then, are non-derivative, intrinsic representations possible? By analyzing Peirce's and others' meta-theories of representation, we find that representation is essentially a multifunctional teleological system. We analyze the mechanisms and forms of intrinsic and extrinsic representation by means of the structure of such a system, from which it follows that intrinsic representations depend on self-organization, whereas extrinsic representations depend on an extrinsic purpose. To achieve intrinsically representational AI, it is necessary to construct a self-organizing computing system with intrinsic purpose.

Author's Profile

Guanghui Li
Huazhong University of Science & Technology
