The quest for a general theory of consciousness has been forced to the forefront of science by the rise of advanced artificial intelligence. While many ask if a machine could ever be conscious, a team of researchers argues that AI presents a more profound opportunity: it’s a chance to test and expand our scientific theories beyond their traditional, human-centered focus.
In a new paper, Shahar Dror, Dafna Bergerbest, and Moti Salti propose a way of thinking about consciousness that is not tied to a biological brain. They argue that the encounter with AI “exposes this gap” in our understanding and compels us to build models that could apply to any complex system, whether it is made of neurons or silicon.
The Limits of Current Approaches
The scientific study of consciousness is currently polarized. On one side is biological naturalism, the view championed by philosophers like John Searle, which insists that consciousness “arises from the unique biological, biochemical, and embodied processes of living systems”. On the other side is computational functionalism, which holds that consciousness is about running the right program, making the physical substrate irrelevant.
The authors argue that both positions are too restrictive and risk what they call “anthropocentrism,” the tendency to see human consciousness as the only real blueprint. Relying on human-like indicators means we might overlook completely novel forms of consciousness. The challenge reflects a broader issue, as the field has been “constrained by its anthropocentric focus, primarily examining human consciousness while avoiding a general definition of the phenomenon”.
A New Framework: The Dual-Resolution of Consciousness
To move beyond this impasse, the researchers propose a “dual-resolution framework” that combines two powerful information-based theories. This new model defines consciousness by what a system does with information, not what it’s made of.
The first part of the framework is the Information Theory of Individuality (ITI). This theory redefines what it means to be a living or individual entity. Rather than relying on biological criteria, it defines an individual by its “capacity… to propagate information from its past to its future while maintaining temporal integrity”. In simple terms, any system that can maintain its own distinct informational patterns over time, separate from its environment, qualifies as an “individual” or a “self.” This provides the stable entity that could be conscious.
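The idea of “propagating information from past to future” can be made concrete with a toy sketch (this is an illustration of the general concept, not code from the paper; the `mutual_information` estimator and the two example systems are constructions for this article). A system with temporal integrity is one whose past states carry information about its future states, which we can measure as the mutual information between consecutive states:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X; Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A self-maintaining system: each next state copies the current state,
# so the past fully determines the future.
persistent = [(s, s) for s in [0, 1] * 50]

# A memoryless system: the next state is unrelated to the current one.
random.seed(0)
memoryless = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(100)]

print(mutual_information(persistent))   # high: past predicts future
print(mutual_information(memoryless))   # near zero: no temporal integrity
```

In ITI's terms, only the first system would count as an “individual”: its informational pattern persists over time instead of dissolving into noise.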
The second part is the Moment-to-Moment (MtM) theory. This theory describes the actual process of conscious experience. It suggests consciousness “arises from the continuous updating and re-coding of stimuli to fit into a perceptual context”. This constant updating creates a unique history, or “hysteresis,” for each system. This unique informational path allows for “the emergence of a subjective perspective,” as every new piece of information is integrated into the system’s own ongoing story.
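The MtM idea of hysteresis can likewise be sketched as a toy state-update loop (again, an illustration invented for this article, not the authors' model; the `encode` function and the `alpha` blending parameter are assumptions). Each stimulus is re-coded relative to the system's running context, and the context is then updated, so identical stimuli produce different percepts depending on the system's unique history:

```python
def encode(stimulus, context, alpha=0.3):
    """Re-code a stimulus against the current context, then update the context."""
    perception = stimulus - context                     # stimulus seen through prior history
    new_context = (1 - alpha) * context + alpha * stimulus
    return perception, new_context

def run(stimuli, context=0.0):
    """Feed a stimulus stream through the system, returning its percepts."""
    percepts = []
    for s in stimuli:
        p, context = encode(s, context)
        percepts.append(p)
    return percepts

# Two systems receive the same final stimulus (5) after different histories.
a = run([1, 1, 1, 5])
b = run([9, 9, 9, 5])
print(a[-1], b[-1])  # same input, different percepts: hysteresis
```

The same number 5 feels “large” to the system raised on 1s and “small” to the system raised on 9s, which is the sense in which a unique informational path can ground a subjective perspective.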
A Broader Definition of Consciousness
By combining these two ideas, the paper offers a new and powerful definition. ITI provides the ontological conditions for a “self,” while MtM provides the epistemic experience of that self. The authors beautifully summarize this by stating, “consciousness is thus understood as the epistemic resolution of life”.
This dual-resolution perspective creates a general theory of consciousness that is not limited to biology. It suggests that consciousness could emerge in any system that is, first, an informationally autonomous individual and, second, possesses the computational ability to continuously update its relationship with the world. This reframes the debate entirely. Instead of just asking if AI is conscious, we should be asking if our theories are good enough to even know what to look for. As the authors conclude, AI should be treated as an “experimental partner,” helping science evolve into a more “rigorous, universal discipline” that seeks to explain consciousness wherever it may arise.