Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI

Abstract

This paper continues my earlier Chat with OpenAI's ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of selected areas and concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts informed by the LLMs' responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific research subjects explored include a) diagonalization techniques as practiced by Cantor, Turing, and Gödel, together with subsequent advances such as the forcing techniques introduced by Paul Cohen and later investigators; b) Knowledge Hierarchies & Mapping Exercises; and c) discussions of I. J. Good's Speculations Concerning the First Ultraintelligent Machine, AGI, and superintelligence. Results suggest variability among major models such as ChatGPT-4, Llama-3, Cohere, Sonnet, and Opus. Results also point to a strong dependence on users' preexisting knowledge and skill bases. The paper should be viewed as 'raw data' rather than a polished authoritative reference.
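
For readers new to the central technique the paper explores, the following is a minimal, illustrative Python sketch of Cantor's diagonal construction; the function name and toy data are illustrative inventions, not drawn from the paper itself. Given any listing of binary sequences, flipping the k-th bit of the k-th sequence yields a sequence that differs from every row, which is the core of the diagonal argument.

# Minimal sketch of Cantor's diagonal argument over binary sequences
# (finite stand-in for the infinite case): the constructed sequence
# differs from rows[k] at position k, so it cannot equal any row.
def diagonal_counterexample(rows):
    """Return a sequence differing from rows[k] at position k for every k."""
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]
d = diagonal_counterexample(rows)
assert all(d[k] != rows[k][k] for k in range(len(rows)))  # d matches no row
print(d)  # [1, 0, 1, 0]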

Author's Profile

Elan Moritz
Kent State University (PhD)
