Who Needs a Conscious Mind?
How AI broke the link between knowing and experiencing
This essay was submitted for the MA in Philosophy of Nature, Information and Technology at Staffordshire University in May 2025. I've published it here unedited as part of the philosophical groundwork for Digital Phenomenology. The academic register is heavier than my usual writing — normal service will resume.
How does the emergence of ‘meta-information’ in human and artificial cognition challenge epistemologies reliant on a ‘conscious-I’ for meaning, and what are the implications of functional equivalence across diverse systems for redefining ‘knowledge’?
Introduction
In May 2023, deep learning pioneer Geoffrey Hinton resigned from Google, cautioning that current AI systems “may” already surpass human cognitive abilities in certain respects, particularly learning speed and knowledge-sharing (Taylor & Hern, 2023). His statement presents a profound philosophical challenge: systems lacking consciousness now routinely perform complex cognitive tasks traditionally viewed as exclusive to conscious human minds.
When Large Language Models (LLMs) engage in “intelligent” conversations, create original content, or demonstrate advanced reasoning capacities purely through algorithmic mechanisms - without consciousness - traditional epistemologies privileging a “conscious-I” as essential to meaning-making face significant strain.
To address this tension, I introduce two key concepts. First, “meta-information” is an emergent, relational web of knowledge structures arising from processed information, formed uniquely in humans through lived experience and in machines via algorithmic training. Second, “functional equivalence” is the achievement of similar cognitive outcomes despite different internal mechanisms, demonstrated when AI-generated text is indistinguishable from human writing.
I argue that meta-information enabling functional equivalence fundamentally challenges epistemologies reliant on a conscious subject to transform patterns into meaningful forms, a position exemplified by Raymond Ruyer (2024). Demonstrating that specific internal states are unnecessary for particular cognitive functions compels a redefinition of what it means “to know” - one that accommodates diverse internal mechanisms yielding comparable external outcomes.
This essay establishes the concepts of meta-information and functional equivalence to challenge Ruyer’s consciousness-based epistemology. It proposes a redefinition of ‘knowing’ suited to diverse cognitive systems.
Foundations: Meta-Information, Training, and Functional Equivalence
Defining Meta-Information
“Meta-information” is a complex relational web created when raw information integrates into broader knowledge structures. Unlike contextual information, which merely situates data locally, meta-information emerges through intricate associations enabling predictions and inferences extending beyond individual elements. This emergence is analogous to Malaspina’s (2018, p. 90) exploration of how meaningful form arises from what she describes as a “point of highest tension” at the boundary between information and noise.
Unlike traditional knowledge representation frameworks requiring explicit semantic encoding, meta-information exists implicitly in neural network weights or human memory’s associative pathways. Aligned with Simondon’s view (2020) of information as process rather than content, meta-information emerges dynamically - not as static representations but as ongoing structurations.
The “Training Dataset” Analogy
Humans and artificial systems develop distinct meta-informational structures through fundamentally different yet conceptually comparable learning processes. Human meta-information forms through lived experiences - cultural contexts, sensory interactions, and social learning - continuously shaping knowledge structures. This aligns with Stiegler’s analysis (2018) of how technological artefacts externalise memory, enabling the intergenerational accumulation and re-elaboration of knowledge.
For AI, particularly LLMs, the training dataset comprises extensive textual corpora processed through algorithmic methods like backpropagation, which adjust connection weights to minimise prediction errors. Despite radically different acquisition methods, both processes yield individuated meta-informational structures capable of processing new inputs and generating meaningful outputs.
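The weight-adjustment mechanism described above can be illustrated in miniature. The following toy sketch is not an LLM training loop; it fits a single weight by gradient descent on squared prediction error, the same error-minimising principle that backpropagation scales up across billions of parameters:

```python
# Toy illustration of error-driven weight adjustment: a single linear
# "neuron" nudges its one weight to reduce prediction error.
# Deliberately simplified; real backpropagation propagates gradients
# through many layers of weights.

def train(pairs, lr=0.1, epochs=200):
    """Fit y ≈ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            error = w * x - y
            # Gradient of (w*x - y)^2 with respect to w is 2 * error * x.
            w -= lr * 2 * error * x
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
w = train(data)
print(round(w, 2))  # prints 2.0
```

The system never stores the rule "y = 2x" explicitly; the regularity exists only implicitly in the adjusted weight, just as meta-information exists implicitly in a network's weights rather than as explicit semantic encoding.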
These divergent methods inevitably produce different internal representations. A human’s concept of a “chair” is multisensory, while an LLM’s representation might derive entirely from textual descriptions and contextual associations.
Defining Functional Equivalence
“Functional equivalence” means achieving similar observable outcomes despite different internal structures and processes. Two systems are functionally equivalent when they perform the same task comparably through distinct mechanisms or representations. Identical internal states are not required to achieve comparable outcomes.
Functional equivalence emphasises observable behaviours and adequacy of performance: cognitive tasks are evaluated by their results rather than by the methods that produce them. Philosophically, this aligns with pragmatism and functionalism, which define mental states by their practical roles rather than their internal constitution.
Thus, when we know something, we have integrated it into our existing knowledge structures such that our understanding demonstrates functional equivalence with relevant benchmarks.
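The definition can be made concrete with a deliberately simple sketch: two "systems" that compute the same function through entirely different internal mechanisms - one by memorised lookup, one by stepwise construction. An external benchmark that inspects only outputs cannot tell them apart:

```python
# Two systems computing the same function via different internal
# mechanisms: functionally equivalent under this benchmark,
# internally dissimilar.

def fib_table(n):
    """'Memorised' route: a precomputed association, no stepwise process."""
    table = {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5, 6: 8, 7: 13}
    return table[n]

def fib_iterative(n):
    """'Procedural' route: constructs the answer step by step."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# An output-only evaluation cannot distinguish the two systems.
assert all(fib_table(n) == fib_iterative(n) for n in range(8))
print("functionally equivalent on this benchmark")
```

The analogy is loose - human and machine cognition differ far more than these two functions - but it captures the core claim: equivalence is judged at the boundary of the system, not by inspecting its internals.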
The Turing Test as a Paradigm
The Turing Test proposes that if a machine’s responses are indistinguishable from a human’s, it demonstrates intelligence in that domain. Its philosophical significance lies in separating functional performance from underlying mechanisms.
Groundbreaking research by Jones and Bergen (2025) provides the first empirical evidence that contemporary AI systems can pass the standard three-party Turing test. Their study evaluated GPT-4.5 and LLaMa-3.1-405B in randomised, controlled tests with two independent populations. Participants engaged in 5-minute conversations simultaneously with a human and an AI system before judging which was human. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be human 73% of the time - significantly more often than interrogators selected the actual human participants. LLaMa-3.1-405B achieved a 56% selection rate, reaching functional parity with humans in conversational intelligence. These results occurred despite humans and AI systems having fundamentally different meta-informational architectures and learning processes.
The researchers found that prompting the AI to adopt specific personas was crucial for achieving this performance - highlighting how functional equivalence can be optimised through appropriate framing rather than replicating human internal states. Jones and Bergen’s empirical confirmation of AI systems passing the Turing Test demonstrates that meta-information need not replicate human structures to produce functional equivalence in cognitive tasks traditionally associated with human reasoning. This milestone achievement provides compelling evidence for challenging epistemologies dependent on a “conscious-I” as essential for meaning-making.
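To give a rough sense of how far a 73% "judged human" rate sits from chance, the following illustrative calculation - not the authors' own analysis, and assuming a hypothetical sample of 100 judgements - computes an exact binomial tail probability under the null hypothesis that interrogators guess at random:

```python
# Illustrative only: in a three-party test, a judge guessing at random
# selects the AI 50% of the time. How unlikely is 73 selections out of
# a hypothetical 100 under that null hypothesis?
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = binom_tail(73, 100)
print(p_value < 0.001)  # prints True: 73/100 is very unlikely under chance
```

The actual study sample sizes and analysis differ; the point is only that a rate this far above 50% is not plausibly a guessing artefact.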
The Challenge to Ruyer: The “Conscious-I” and Functional Achievement
Ruyer’s Position
Ruyer distinguishes sharply between “pattern” and “form.” For Ruyer, a pattern represents mere structural order - discrete elements without inherent meaning - whereas a form emerges only when a conscious mind apprehends and unifies these elements into meaningful wholes. He argues that meaningful information depends fundamentally upon consciousness interpreting patterns as forms. He illustrates this with the example of a radio transmitting a poem in an empty room. Without conscious reception, there is no meaningful recitation, only physical waves lacking unity or significance. Similarly, a rock formation resembling Napoleon’s profile holds no inherent meaning without a conscious observer to interpret it. For Ruyer, genuine meaning is always contingent upon conscious interpretation.
This stance sharply limits purely mechanical processing: machines can transmit patterns, but only a conscious “I” can transform those patterns into meaningful forms. Without consciousness, Ruyer argues, there is neither genuine information nor stable meaning - only arbitrary, temporary arrangements lacking unity or significance.
AI as a Counterpoint
Contemporary AI systems present compelling counterexamples to Ruyer’s position. These systems process massive training datasets, constructing complex meta-informational structures that enable functional equivalence in cognitive tasks traditionally reserved exclusively for the “conscious-I.”
Consider LLMs’ abilities to generate coherent text, summarise complex arguments, or recognise thematic and conceptual patterns across diverse contexts. These systems produce outputs demonstrating an apparent understanding of abstract ideas, causal relationships, analogies, and cultural nuances - all traditionally viewed as hallmarks of conscious thought.
These AI systems rely upon statistical and relational meta-informational architectures - complex webs of probabilistic associations between concepts, contexts, and linguistic patterns. Without subjective experience, they effectively process input patterns (prompts or questions) into contextually appropriate and seemingly meaningful outputs, challenging the necessity of Ruyer’s “conscious-I.” This functional capacity is further evidenced by neurocomputational studies such as Goldstein et al. (2025), which show that AI models like Whisper can accurately predict human neural activity during real-world language comprehension - suggesting that these models capture essential aspects of the human meaning-making process, a function Ruyer reserved for consciousness.
The Argument
If AI can interpret patterns, engage in meaningful conversation, and produce responses indicative of meaningful forms without possessing consciousness, Ruyer’s insistence upon consciousness as essential for these transformations is challenged. The function of transforming patterns into forms clearly can be realised via meta-informational architectures entirely different from those Ruyer privileged.
This argument resonates with Simondon’s alternative framework of individuation and transduction. Simondon argues that meaning arises dynamically through relational processes, not through a pre-established conscious subject imposing meaning externally. For Simondon, information is an operational process - meaning it emerges from the interactions between information and the system processing it. This framework naturally accommodates human consciousness and AI’s statistical pattern recognition as different but valid modes of meaning-making operations.
This view also recalls Deleuze’s (1992) notion of the “dividual”, highlighting contemporary technologies’ capacities to fragment subjective wholeness into manageable data points and patterns. AI represents an advanced realisation of dividuality - operating without unified subjective experience yet producing functionally meaningful outcomes traditionally associated with consciousness. These systems exemplify how functional meaning-making can occur without a unified subjective consciousness, relying instead on distributed statistical processes.
Critics inspired by Ruyer might respond that despite functional equivalence, AI still lacks genuine “meaning” or “understanding,” arguing that subjective phenomenological experience - “what it is like” to understand - is fundamentally absent. In their view, AI merely simulates meaning, never truly achieving it without consciousness. Patterns remain patterns, never genuinely becoming forms in Ruyer’s original sense.
This objection implicitly commits what we might term the “privileging of process” fallacy: it presupposes that a specific subjective process (conscious experience) is necessary for a particular function, even when functional equivalence demonstrates otherwise.
A more nuanced perspective, following Simondon, would acknowledge that meaning can emerge via different processes within different systems. For humans, meaning emerges through embodied, phenomenologically rich experiences shaped by culture and environment. For artificial systems, meaning emerges through intricate probabilistic and statistical relations developed during training. These are distinct but functionally comparable processes - different routes toward similar outcomes.
Recognising such functional equivalence across fundamentally divergent meta-informational architectures does not diminish human consciousness; instead, it contextualises consciousness as one unique mode of knowing among others. Human consciousness, with its phenomenological depth, remains distinctive and valuable. However, certain cognitive functions traditionally associated exclusively with consciousness clearly can be achieved through alternative, non-conscious meta-informational architectures.
The implication is that the crucial factor for many cognitive functions is not the internal subjective experience (consciousness) but rather the underlying meta-informational architecture enabling functional success. Just as flight can be achieved by biological adaptations (birds with feathers and hollow bones) or technological innovations (aircraft wings and engines), transforming patterns into meaningful forms similarly can be achieved by multiple architectures - conscious or non-conscious.
Accepting this functional perspective compels us to reconsider what it means “to know”. Knowledge becomes less tied exclusively to internal subjective states and more defined by functional capacities to engage effectively and contextually with information.
This redefinition has significant implications for epistemology, particularly at a time increasingly populated by diverse cognitive systems, both human and artificial. Instead of strictly requiring subjective experience as a necessary epistemic ground, knowledge now might productively accommodate diverse, functionally effective ways of knowing.
Redefining “To Know”: Functional Capacity vs. Actual Equivalence
“Knowing” as Functional Capacity
The challenge posed by AI systems that achieve functional equivalence in cognitive tasks traditionally associated with consciousness compels us to reconsider what it means “to know.” I propose that, in many contexts, “to know” can be productively defined by the demonstrated capacity to reliably perform specific cognitive tasks - such as intelligent conversation, problem-solving, pattern recognition, and inference - rather than by the internal processes through which these capacities are realised.
This functional approach has philosophical precedents stretching from American pragmatism to contemporary functionalism. William James, for instance, argued that the truth of ideas should be judged by their “cash value” in practical outcomes. Similarly, a functional conception of knowledge emphasises demonstrated competence in relevant tasks rather than privileging internal subjective states or processes.
The Impossibility of “Actual” Equivalence
While functional equivalence is demonstrably achievable, we must balance this with a crucial qualification: the underlying “meta-information,” or the qualitative internal structures and experiential states that constitute knowing, will always differ significantly between humans and AI. These differences arise inevitably from their distinct “training datasets” and radically different architectures.
Hui (2016, p. 26) argues that digital entities exist within a ‘digital milieu’ constituted by networks of materialised relations. This framework inherently positions digital objects differently from the objects of traditional philosophy, which are often understood through human embodied and temporal experience (Hui, 2016, pp. 4, 37). For example, a human’s understanding of “chair” incorporates embodied interactions and cultural associations, while AI’s understanding emerges exclusively from statistical correlations within textual and image datasets. Hui would characterise this as a difference between digital and phenomenological ontologies.
Malaspina (2018) emphasises that transitioning from information as a mere pattern or statistical probability to information as a meaningful form involves complex processes, including normative judgment and interpretation that distinguish relevant information from noise. While AI systems may statistically organise vast amounts of data to produce functionally equivalent outputs, their “meta-information” lacks this human capacity for self-grounding normative judgment and the lived experience that informs it. As Malaspina argues, a computer “makes no value judgements, and it does not fear the loss of control” (p. 226). This fundamental difference in the constitution of knowing ensures that genuine, experiential equivalence between humans and AI remains impossible, even when functional outcomes converge.
This impossibility extends, albeit to a lesser degree, to differences among humans themselves. Individual human understandings inevitably differ due to unique experiential histories and personal contexts, though this divergence is significantly less pronounced than between humans and AI.
If two humans - one who had a pet dog as a child and another whose only experience with dogs was traumatic - interact with a dog, their meta-information will significantly differ. Both may demonstrate functional equivalence in identifying the animal as a dog, but their divergent “training datasets” become evident when faced with a decision such as ‘Should I pet this animal?’. Effective communication between them would require navigating these distinct underlying associations, emphasising functional alignment rather than identical internal experiences.
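The dog example can be sketched schematically. In the hypothetical model below (all names and values are illustrative, not an empirical claim), the classification capacity is shared - functional equivalence on the identification task - while a crude `dog_affinity` parameter, standing in for divergent experiential histories, drives the downstream decision apart:

```python
# Hypothetical sketch: two observers agree on the classification task
# ("is this a dog?") yet diverge on a decision that draws on their
# distinct 'training datasets'.

class Observer:
    def __init__(self, dog_affinity):
        self.dog_affinity = dog_affinity  # crude proxy for prior experience

    def classify(self, animal):
        # Shared functional capacity: both observers identify dogs alike.
        return "dog" if animal["barks"] and animal["four_legs"] else "not a dog"

    def should_pet(self, animal):
        # Divergent meta-information surfaces only here.
        return self.classify(animal) == "dog" and self.dog_affinity > 0.5

animal = {"barks": True, "four_legs": True}
grew_up_with_dogs = Observer(dog_affinity=0.9)
bitten_as_a_child = Observer(dog_affinity=0.1)

print(grew_up_with_dogs.classify(animal), bitten_as_a_child.classify(animal))
# prints: dog dog
print(grew_up_with_dogs.should_pet(animal), bitten_as_a_child.should_pet(animal))
# prints: True False
```

The shared `classify` method is the functional benchmark; the divergent `should_pet` decisions are where the non-identical underlying structures become observable, and where communication must negotiate functional alignment rather than identical internal states.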
Implications for Understanding
Redefining “knowing” as functional capacity while acknowledging the impossibility of actual equivalence has implications for our understanding of knowledge and communication.
First, this framework contextualises human knowing without diminishing it. Human consciousness, characterised by embodied and emotionally rich experience, remains uniquely valuable. Recognising the functional equivalence of AI in specific tasks does not devalue human consciousness; rather, it clarifies the distinctive nature of human knowing within a broader spectrum of epistemic capacities. Each “knower” - human or artificial - constitutes an individuated system, as Simondon (2020) argues, with unique operational modes and relational structures.
Second, this framework suggests that communication is fundamentally about achieving functional alignment rather than identical transmission of internal representations. Successful communication arises from establishing effective functional common ground between differing meta-informational structures. Simondon’s concept of transduction - information as dynamic structuring rather than static content transmission - supports this perspective, highlighting communication as adaptive coordination rather than duplication of internal states.
Third, this approach identifies different types or degrees of knowing. Human knowing is embodied, emotionally textured, and phenomenologically rich: humans do not merely process information about pain or sadness; they experience pain and feel empathy. AI systems, by contrast, demonstrate functional knowledge of these states without experiential depth - they may process and simulate emotional responses without genuinely experiencing them. This distinction can be characterised as “shallow” versus “deep” knowing - functionally similar outputs emerging from profoundly different internal processes and experiential states.
Acknowledging such qualitative differences resists what Tiqqun (2020) critiques as cybernetic homogenisation - reducing diverse forms of knowledge to standardised, quantifiable information. According to Tiqqun, cybernetics dangerously conflates meaningful knowledge with predictable information flows, diminishing genuine understanding. By distinguishing qualitative differences between human and machine knowing, we preserve spaces for richer forms of knowledge and understanding beyond mere functional outcomes.
This distinction carries ethical implications. When AI systems make knowledge-based claims, such as diagnosing diseases or assessing risks, we must weigh both their functional accuracy and the qualitative dimensions they lack. An AI might correctly predict that a treatment causes pain while lacking the experiential understanding crucial in ethically sensitive contexts. Therefore, in domains where qualitative dimensions carry ethical significance, functional equivalence alone may not suffice; experiential understanding may also be required.
Recognising different meta-informational architectures also raises complex questions of responsibility and agency. When AI functionally “knows” something harmful or makes critical errors, who bears ethical responsibility - the system, its developers, or its users? Because functional equivalence does not entail identical internal states or subjective experiences, it does not involve identical ethical positioning or responsibility, demanding careful ethical consideration.
Returning to Simondon’s framework, each knower - human or artificial - is an individuated system emerging uniquely through specific conditions and relational structures. Simondon writes: “being does not possess a unity of identity… being possesses a transductive unity.” Knowing, therefore, constitutes an ongoing structuration unique to each individuated system rather than a fixed state.
This view of knowing as simultaneously functional and differentially individuated opens new epistemological possibilities. Instead of seeking a singular, human-centred definition of knowledge, we can explore how different kinds of knowing complement each other. This epistemic plurality enriches our engagement with the world, allowing multiple perspectives to capture distinct yet complementary aspects of reality.
Ultimately, redefining “to know” as a functional capacity while recognising inherent qualitative differences in knowing leads not toward epistemological relativism but toward a nuanced, pluralistic understanding. Different modes of knowing, arising from distinct meta-informational architectures, achieve functional convergence without erasing inherent qualitative distinctions. This preserves human knowing’s distinctive depth while also recognising the legitimate functional knowledge demonstrated by AI, which enriches epistemology rather than diminishing it.
Conclusion: Navigating a Landscape of Diverse Knowers
I argue that meta-information - the complex relational web emerging from processed information - enables functional equivalence in cognitive tasks across diverse systems, fundamentally challenging the necessity of specific internal states like Ruyer’s “conscious-I” for certain cognitive functions. Examining how humans and artificial intelligence develop unique meta-informational structures through different “training” processes yet achieve comparable functional outcomes demonstrates that transforming patterns into meaningful forms can occur through multiple architectural paths.
The core insight is that “knowing” can be productively defined through functional capacity - the demonstrated ability to perform knowledge-indicative tasks - while simultaneously acknowledging that the internal meta-informational basis of this knowing remains unique and non-identical across different systems. This dual perspective allows us to recognise legitimate instances of functional knowing in artificial systems without claiming that they “know” as humans do.
This redefinition has profound implications for epistemology in an increasingly sophisticated artificial intelligence era. It suggests that knowledge should be understood pluralistically rather than monolithically - manifesting in diverse forms adapted to different meta-informational architectures. This view accommodates both the functionally demonstrable knowing of artificial systems and the phenomenologically rich knowing unique to human consciousness without privileging either as the sole legitimate form.
This framework might inform research in explainable AI, where the goal is not to make artificial systems think like humans but to establish functional bridges between different ways of knowing. Similarly, in human-computer interaction, success might be measured not by how closely AI mimics human cognition but by how effectively it complements human knowing through its distinct cognitive strengths.
The ethical dimension of this pluralistic epistemology cannot be overstated. As our world becomes increasingly populated by diverse knowing systems, we must develop frameworks for responsibly integrating these different forms of knowing - recognising where functional knowing provides sufficient grounds for action and where the qualitative dimensions of human knowing remain irreplaceable. This task requires ongoing philosophical work that neither uncritically elevates artificial cognition nor defensively restricts “true” knowing to human consciousness alone.
In navigating this new epistemological landscape, we might utilise Simondon’s idea of transductive unity, and see knowledge not as a fixed state but as an ongoing process of structuration unique to each individuated system yet capable of resonating productively with others. Through this lens, the emergence of functionally knowing machines represents not a threat to human uniqueness but an invitation to understand our own knowing more deeply by encountering its alternatives.
References
Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7. http://www.jstor.org/stable/778828
Goldstein, A., Wang, H., Niekerken, L., Schain, M., Zada, Z., Aubrey, B., Sheffer, T., Nastase, S. A., Gazula, H., Singh, A., Rao, A., Choe, G., Kim, C., Doyle, W., Friedman, D., Devore, S., Dugan, P., Hassidim, A., Brenner, M., … Hasson, U. (2025). A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations. Nature Human Behaviour. https://doi.org/10.1038/s41562-025-02105-9
Hui, Y. (2016). On the Existence of Digital Objects. University of Minnesota Press. http://www.jstor.org/stable/10.5749/j.ctt1bh49tt
Jones, C. R., & Bergen, B. K. (2025). Large Language Models Pass the Turing Test. arXiv preprint arXiv:2503.23674.
Malaspina, C. (2018). An Epistemology of Noise (1st ed.) (R. Brassier, Foreword). Bloomsbury Publishing Plc. https://doi.org/10.5040/9781350011816
Ruyer, R. (2024). Cybernetics and the origin of information (A. Berger-Soraruff, A. Iliadis, D. W. Smith, & A. Woodward, Trans.). Rowman & Littlefield.
Simondon, G. (2020). Individuation in Light of Notions of Form and Information. University of Minnesota Press.
Stiegler, B. (2018). Technologies of memory and imagination. Parrhesia, 29, 25–76.
Taylor, J., & Hern, A. (2023, May 2). ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation. The Guardian. https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning
Tiqqun. (2020). The Cybernetic Hypothesis. MIT Press.

