Abstract
The "hard problem" of consciousness has long been debated in philosophy, with mysterianism suggesting that it may be inherently unsolvable due to cognitive or epistemic limitations. This paper introduces a new argument for mysterianism, drawing on insights from the complexity of artificial neural networks. Using a simple multilayer neural network trained to classify images as an example, it is shown that even understanding a single artificial neuron’s role in information processing can be beyond our cognitive capabilities. When considering the complexity of biological neurons, which far exceed artificial neurons in intricacy, the challenges become even more pronounced. This raises questions about the feasibility of understanding consciousness, a vastly more complex phenomenon, by suggesting that our cognitive limitations extend to fundamental principles of interpreting complex systems. The paper emphasizes the challenges posed by layered abstractions, drawing parallels with other multi-level systems like microprocessors to argue that certain problems may be insurmountable.