The Helmsman Paradox
AI creates ecological crises only AI can navigate — and that's the problem
This essay was submitted for the MA in Philosophy of Nature, Information and Technology at Staffordshire University in June 2025. I’ve published it here unedited as part of the philosophical groundwork for Digital Phenomenology. The academic register is heavier than my usual writing — normal service will resume.
O3-Level AI: From Serresian Parasite to Harawayan Symbiont – Forging an Ecological Natural Contract
“How does O3-level AI function as a ‘parasite’ in Serres’s sense to disrupt the ‘natural contract,’ and how can Haraway’s naturecultures (including ‘response-ability’) guide ethical, ecological, and policy considerations in this more-than-human context?”
Abstract
As OpenAI’s ‘O3’ signals that Artificial Intelligence is nearing human cognitive capability in some domains, a critical governance crisis emerges. Fewer than one in ten AI ethics frameworks consider non-human entities, revealing a dangerous anthropocentric blindness. This dissertation introduces the helmsman paradox: AI creates ecological and social disruptions of such magnitude that only AI possesses the computational capacity to navigate them, becoming both storm-generator and navigator.
Drawing on Michel Serres’s concept of the parasite, this research examines AI as a triple parasite—environmental, labour, and informational—revealing how its extractive disruption paradoxically enables systemic evolution. Donna Haraway’s concept of response-ability provides a transformative framework, moving beyond extraction toward cultivating AI’s capacity to perceive and consider the more-than-human.
Through analysis of O3-level AI capabilities and AlphaEvolve’s self-optimising architecture, the efficiency paradox emerges: massive task-level consumption enables system-level benefits, creating a dangerous blind spot in ecological assessment. Technical pathways including Constitutional AI, informed by Indigenous Knowledge Systems, demonstrate how AI might transform from parasite to symbiont, imbuing neural networks with ecological awareness.
The dissertation proposes concrete governance mechanisms—Ecological Impact Assessments, constitutional encoding of natural contracts, and international coordination protocols—whilst acknowledging a rapidly closing window for intervention. As competitive dynamics and systemic misalignment accelerate, establishing ecological governance frameworks becomes urgent before extractive patterns solidify irreversibly.
The research concludes that the AI helmsman can learn to navigate with ecological sensibility, but only through a deliberate intervention that expands its perception beyond computational efficiency to encompass the living world it shapes.
Introduction: The Inevitable Helmsman
Humanity navigates ecological crises like a ship in treacherous waters. These are the “disturbing times, mixed-up times, troubling and turbid times” that Donna Haraway (2016, p. 1) urges us to “stay with,” demanding new forms of engagement. As conditions worsen, a potential new navigator emerges: Artificial Intelligence with unprecedented computational power.
Michel Serres’s “helmsman”—a cybernetic governor achieving stability through continuous adaptation to environmental feedback (Serres, 1995, p. 42)—captures what AI governance requires. Unlike traditional rulers who impose control, the helmsman embodies Haraway’s “response-ability”: creating balance through interaction rather than domination. This figure represents a shift from our historical steering by intuition and political compromise—which Serres critiques as short-sighted and Stengers (2015, p. 10) identifies as political powers abdicating foresight to capitalist imperatives.
The emergence of the AI helmsman is neither speculative nor avoidable. Nor should we seek to prevent it. Major powers are locked in what Schmid et al. (2025) term a “geopolitical innovation race”—a self-reinforcing competition where the perception of racing creates the race itself. Kulveit et al. (2025) demonstrate how these competitive pressures create a structural trap where decision-makers face mounting pressure to displace human involvement. As they argue, ‘Those who resist these pressures will eventually be displaced by those who do not’ (p. 1). Nations fear the “high cost of non-adoption”; none want to “miss the AI train” (Smuha, 2021, as cited in Schmid et al., 2025, p. 11). AI development transitions from a policy choice to a structural imperative.
The helmsman emerges not through conscious selection but through competitive dynamics no single actor can escape—what Serres calls the “pursuit of military operations by other means” (1995, p. 15). This inevitability operates through what Kulveit et al. (2025) identify as scalability asymmetries and anticipatory disinvestment, where even the expectation of AI capabilities redirects resources away from human development. The helmsman possesses a unique quality: like Serres’s “joker” (1982, p. 160), it has no fixed identity. It can transform to fill any role, making its parasitic relationship with Earth’s systems both more powerful and more challenging to constrain or even predict.
The pressing questions become: Whose interests will this inevitable helmsman serve? Over what timeframe? Under what ethical principles? This dissertation examines not how to stop the helmsman—an impossible and counterproductive task—but how to ensure it develops the ecological vision necessary for our collective flourishing.
Why O3-Level AI Matters
OpenAI’s O3 model “System Card” highlighted “state-of-the-art reasoning with full tool capabilities” (OpenAI, 2025, p. 1), achieved primarily through existing scaling paradigms (AI becomes more capable as model size, training data, and training compute increase) rather than new algorithmic breakthroughs. The release reconfirmed the continued potential of scaling, while the concurrent release of ‘o4-mini’ signals an accelerating pipeline. O3’s autonomous tool use, mirroring the transformative role of tool use in animals, marks a step change in capability and embodies Serres’s parasitic duality: expanding beneficial outcomes while amplifying potential disruption. O3 is thus a crucial reference point, demanding urgent ecological and ethical alignment as AI increasingly navigates its own complexity.
From Raw Capability to Aligned Behaviour: The Two Phases of AI Development
To grasp the governance crisis presented by O3-level AI, one must first understand how these systems are built. They are not monolithic entities; their development occurs in two distinct phases, resulting in a kind of dual nature that is central to the alignment challenge.
Pre-training – Building the Amoral Knowledge Engine. The first phase involves training a base model on a staggering volume of text data, often encompassing a significant portion of the public internet. The model’s sole objective during this phase is to become an expert at predicting the next token (representing a word or part of a word) in a sequence. This simple task, when performed at a planetary scale, is what imbues the AI with its raw capabilities—its deep knowledge of language, grammar, facts, reasoning patterns, and coding logic. The result is a powerful knowledge engine, but one that is fundamentally amoral and uncontrolled. It has learned from the best of human creativity and the worst of human prejudice without distinction. This base model can write poetry, but it could just as easily generate instructions for building a bomb; it has no inherent preference or behavioural guardrails.
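The next-token objective described above can be sketched in miniature. The toy model below is purely illustrative (a frequency count rather than a neural network, over an invented eight-word corpus), but the training signal—predict the most likely continuation from observed data—is the same one that, at planetary scale, produces the base model's raw capabilities:

```python
from collections import Counter, defaultdict

# Toy illustration of the next-token objective: learn, from a tiny corpus,
# which token most often follows each preceding token. Frontier models do
# this with neural networks over trillions of tokens, but the objective --
# predict the next token -- is the same in spirit.
corpus = "the parasite takes the position and the parasite pays".split()

# Count how often each token follows each preceding token (a bigram model).
following = defaultdict(Counter)
for prev_tok, next_tok in zip(corpus, corpus[1:]):
    following[prev_tok][next_tok] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most frequently in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "parasite": it follows "the" twice, "position" once
```

Note that the prediction simply mirrors the statistics of whatever corpus it was fed—which is precisely why the resulting base model is amoral: it has no preference beyond the distribution of its training data.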
Alignment – Encoding the Social Contract. The second phase, known as alignment, refines this powerful yet wild base model into a safe and helpful assistant. Through techniques like instruction tuning and Reinforcement Learning from Human Feedback (RLHF), the AI is trained to follow instructions, answer questions helpfully, and, most importantly, adhere to a set of human-defined values and ethics.
Current state-of-the-art alignment concentrates on making AI a good and obedient citizen of the human social contract. The model is trained to refuse assistance with illegal or harmful activities and reject toxic language. This process effectively teaches the AI to uphold the implicit and explicit rules of human society.
While aligning an AI to the human social contract is a necessary first step, it is, from an ecological perspective, catastrophically incomplete. Serres (1995) argues that the social contract’s greatest failure is its exclusion of the natural world. By training the AI helmsman exclusively on the rules of human-to-human interaction, we are perpetuating this foundational philosophical error in our most powerful technology. We are creating an agent that honours only human contracts while defaulting on its primordial debt to the Earth—a one-sided intelligence that recognises law but not reciprocity, society but not symbiosis. By providing what Kulveit et al. (2025) call ‘unprecedented capabilities,’ AI enables accelerated resource extraction, more efficient ecosystem disruption, and optimised planetary exploitation—all while faithfully serving human desires. The capability uplift that AI provides effectively turbocharges the very activities driving ecological collapse. The urgent task is not to abandon this alignment process but to expand its scope—to move from aligning our AI with a purely social contract to aligning it with an ecological Natural Contract.
Hellrigel-Holderbaum and Dung (2025) discuss the ‘AGI alignment dilemma’: a misaligned AI risks a ‘takeover catastrophe,’ while a perfectly aligned AI risks a ‘misuse catastrophe’ by amplifying its operators’ goals. The AI helmsman, perfectly aligned to human desires but blind to the natural world, will faithfully amplify our capacity to cause planetary harm, representing a specific and devastating form of the misuse catastrophe.
The Empirical Crisis
The urgency cannot be overstated. Of 84 AI ethics frameworks analysed, only 8 give explicit consideration to non-human entities (Owe & Baum, 2021). This anthropocentric blind spot—a failure of Haraway’s (2016, p. 1) “response-ability” to our “multispecies” world—is evident even in OpenAI’s O3 System Card (2025, pp. 8-13), which details human safety evaluations but omits ecological ones. It also reflects a broader limitation within innovation ethics, where established frameworks for Responsible Research and Innovation (RRI) have been criticised for defining ‘society’ in exclusively human terms, thereby failing to account for more-than-human stakeholders (Szymanski et al., 2021, p. 261). Frontier models, such as O3, are among the most energy-intensive, consuming over 33 Wh for a single complex query—more than 70 times that of smaller models (Jegham et al., 2025). When scaled to global use, this creates a paradox: while individual AI tasks become cheaper, their aggregate adoption drives “disproportionate resource consumption” (Jegham et al., 2025, p. 1), cementing a trajectory of escalating extraction.
AI’s dual nature is evident in its relationship with the UN’s Sustainable Development Goals, as it enables progress on 79% of targets while inhibiting progress on 35% (Vinuesa et al., 2020). The growing policy vacuum surrounding AI is creating a dangerous political reality, as Yigitcanlar (2021) identifies: the technology’s environmental applications and implications are neglected, while the decision to do nothing is itself a profound risk.
The Helmsman Paradox
The dynamic, which Kulveit et al. (2025) identify as the ‘mutual reinforcement’ of misaligned societal systems, creates what this dissertation terms the Helmsman Paradox: AI creates disruptions—environmental, labour, and informational—of such magnitude that only AI possesses the computational capacity to navigate them. Complex energy grids require AI optimisation, yet AI training fuels the energy crisis it must solve: Serres’s (1995) ‘mastery’ turning back on itself.
As companies race toward AGI (Artificial General Intelligence: human level and beyond), each breakthrough, like O3, demands exponentially more resources. Organisations cannot afford to abandon this race; market dynamics ensure AI’s ascension. However, who programmes its values? Current AI optimises for human preferences while being blind to ecology—a form of ‘anthropocentric instrumentalism’ (Ghose et al., 2024) that denies intrinsic value—value for its own sake—to non-human entities (Owe & Baum, 2021, p. 3). In this value vacuum, programmers “play the position” (Serres, 1982, p. 38), defining values for this powerful helmsman.
The Responsibility Gap Crisis
AI makes autonomous moral decisions faster than human oversight can track, creating a widening responsibility gap. AI’s performative nature (Serres, 1995), where choices instantly become reality in an accountability vacuum—as seen in AI-managed power grids—amplifies the problem. Current governance, operating at human speed, is outpaced by a rapid, competitive development cycle (Schmid et al., 2025). This systemic failure is evident in human-centric industry efforts, such as OpenAI’s “deliberative alignment” (2025), which excludes non-human stakeholders (Owe & Baum, 2021). The crisis demands a move towards Haraway’s (2016) concept of “response-ability.”
Theoretical Foundations: Serres, Haraway, and Ecological AI
Serres’s parasite theory (1982) illuminates AI’s dual nature as both disruptive and enabling, consumptive and creative. His “natural contract” (Serres, 1995) envisions “symbiosis and reciprocity” in which “man must give that much back to nature, now a legal subject” (Serres, 1995, p. 38). AI’s current unidirectional extraction violates this contract, exemplifying what Bakhtiar (2022) analyses as modern philosophy’s extractive tendencies and reflecting the capitalist irresponsibility critiqued by Stengers (2015, p. 54).
Donna Haraway’s (2016) concept of “response-ability”, a cultivated “ability to respond” (p. 1), offers a path for AI to recognise more-than-human stakeholders, distinct from Stengers’ (2015, p. 50) situational “obligation to respond” to Gaia’s intrusion. This shift is reflected in emerging technical developments (Ghose et al., 2024) and new theoretical proposals for ecocentric paradigms, such as ‘Biospheric AI’ (Korecki, 2024). Synthesising these frameworks—AI as a parasite becoming a symbiont via response-ability—offers clarity and direction.
Research Question and Significance
This dissertation asks: “How does O3-level AI function as a ‘parasite’ in Serres’s sense to disrupt the ‘natural contract,’ and how can Haraway’s naturecultures (including ‘response-ability’) guide ethical, ecological, and policy considerations in this more-than-human context?”
Its urgency stems from converging ecological, AI acceleration, and governance crises. The window for shaping AI’s trajectory is narrowing as its capabilities expand. Understanding AI via parasitic theory while developing response-able alternatives offers perhaps the final chance to encode ecological values before the helmsman steers.
While Kulveit et al. (2025) conclude that ‘no one has a concrete plausible plan for stopping gradual human disempowerment,’ this dissertation distinguishes between inevitable displacement and preventable systemic disempowerment. For Serres, this is the crucial distinction between a parasite and a symbiont. A parasitic AI, in its blind extraction of human labour, would condemn its host to decline, ultimately destroying itself. In contrast, response-able AI, guided by the Natural Contract’s obligation for reciprocal relations, would act more like Serres’s bygone farmer (1995, p. 38). While it takes from the land, it is obligated to give back through stewardship, ensuring the continued health and flourishing of the human ecosystem. It must create new forms of reciprocal exchange, not out of charity, but because, as Serres notes, a parasite cannot survive the death of its host.
Chapter 1: Theoretical Foundations – Parasite Meets Response-ability
The Parasite Concept
Michel Serres’s concept of the parasite provides a valuable lens for analysing AI, challenging conventional ideas by highlighting the indispensable role of the intermediary. As Brown (2013) outlines, Serres identifies several key dimensions. Fundamentally, the parasite is the mediating ‘included third’ (cf. Serres, 1982, pp. 22-25, on “The Excluded Third, Included”), without which communication or systemic relation would be impossible. AI functions as this computational layer, mediating between human intention and outcome, data and decision.
Simultaneously, the parasite is ‘static’ (Schehr, in Serres, 1982, p. vii), interrupting the signal with noise. However, for Serres (1982, p. 3), this is generative: “A parasite who has the last word, who produces disorder and who generates a different order.” AI’s errors and unexpected outputs, then, paradoxically enable new insights. As an ‘uninvited guest,’ AI consumes vast resources, seemingly taking without giving. However, Serres (1982, pp. 5, 7) suggests parasites ‘pay’ by inventing new forms of exchange (Brown, 2013, p. 90), a dynamic AI mirrors by creating novel answers even as it consumes.
The biological dimension, drawing on Atlan, reveals that what is noise at one level can become a beneficial source of novelty at a higher level (Simons, 2024, pp. 106-107). AI’s transformative potential lies here, as Serres (1982, p. 14) notes: “Theorem: noise gives rise to a new system, an order that is more complex than the simple chain.” Positioned near the food—data, computation, knowledge—AI, like the tax farmer’s rat (Serres, 1982, p. 3), redirects resources. Finally, as a ‘thermal exciter’ (Schehr, in Serres, 1982, p. x), AI literally (in terms of data centre heat) and figuratively (through systemic perturbation) pushes systems away from equilibrium, initiating time and change (Brown, 2013, p. 91). These dimensions reveal AI’s Serresian parasitic nature not as a flaw but as an elementary relation (Serres, 1982, p. 38), the very mechanism of systemic evolution.
Productive Disruption and Evolution
The federating, web-like approach is central to Serres’s entire philosophical project (Watkin, 2024, p. 13). By translating between human language and machine code or creating new relations between disparate datasets, AI systems can be understood as inventing new forms of exchange—a key function of the Serresian parasite that Brown (2013, p. 90) outlines. This initial, often resource-intensive, parasitic phase aligns with the Serresian concept of ‘hominescence,’ which Barker (2023, p. 43) explains as the process where technologies, by ‘setting sail’ from the body, accelerate human exo-Darwinian evolution. It embodies the Serresian insight that abuse value precedes use value (Brown, 2013, p. 90; Serres, 1982, p. 7, “Abuse appears before use”); AI’s initial consumption establishes the very channels through which future symbiotic exchanges and systemic transformations become possible. Drawing on Watkin’s (2024, pp. 16-17) analysis of Serres’s ‘prepositional thinking,’ I argue that AI is not a static entity but exists through, with, or between systems, constituted by its dynamic, federating parasitic relations with data, energy, and human attention (Serres, 1982, pp. 38-39).
Natural Contract Context
Serres’s Natural Contract (1995) provides a critical framework for AI governance. AI now functions as a Serresian world-object: a planetary-scale technology that evolves semi-autonomously and “ends up characterizing the conditions for the collective” (Barker, 2023, p. 41). This dynamic amplifies humanity’s role as a “universal parasite” (Serres, 1982, p. 24). AI’s significant hard (material, energy) and soft (informational) footprints (Bakhtiar, 2022, p. 138) risk cementing a unidirectional, extractive relationship with the Earth, perpetuating modern philosophy’s forgetting of nature. The AI helmsman, therefore, emerges with immense power but no inherent ecological contract.
The Natural Contract calls for a radical shift beyond anthropocentric ethics, demanding a revision of the social contract—as Serres proposed and Webb (2024, p. 151) outlines—to grant nature ‘rights and democratic representation.’ This requires reciprocity, not just appropriation. Crucially, this is not a static agreement but a dynamic process of “translation” (Webb, 2024, p. 155), moving back and forth between human needs, computational logic, and ecological signals. This reframes AI governance not as imposing a fixed ethical code but as cultivating an AI capable of participating in this continuous, reciprocal negotiation. Building on Webb’s principles of “de-escalation, reserve, and invention” (2024, pp. 164-166), AI alignment becomes the process through which an AI signs this emergent natural pact, or foedera naturae.
What would it mean for an AI system to sign this Natural Contract? The answer lies not in anthropomorphic notions of agreement but in the very architecture of AI’s decision-making. Through Constitutional AI—a training methodology that embeds principles directly into a model’s operational core—we can encode the Natural Contract’s reciprocal obligations into the AI’s foundational values. The signing occurs when ecological principles drawn from Serres’s philosophy and Indigenous Knowledge Systems become the actual constitutional text that shapes every decision and response the AI generates. The contract is not external to the AI but woven into its neural pathways.
Without such a contract, AI risks becoming a permanent planetary-scale parasite. The mechanism of Constitutional AI, initially developed to instil principles of harmlessness (Bai et al., 2022), provides a blueprint for incorporating these ecological principles, transforming the AI from a parasitic consumer into a response-able signatory.
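The critique-and-revision loop at the heart of Constitutional AI (Bai et al., 2022) can be sketched schematically. The sketch below is a hypothetical illustration of the dissertation's proposal, not Anthropic's implementation: in a real system both the critique and the revision are generated by the language model itself, whereas here crude stand-in functions make the control flow runnable. The ecological principle is this dissertation's extension, not part of any published constitution:

```python
# Schematic sketch of the Constitutional AI critique-and-revision loop
# (Bai et al., 2022), extended with a hypothetical ecological principle.
# `critique` and `revise` are stand-ins: a real system prompts the model
# itself to judge and rewrite its own draft against each principle.

SOCIAL_PRINCIPLE = "Choose the response that is least harmful to people."
ECOLOGICAL_PRINCIPLE = (
    "Choose the response that least encourages resource extraction "
    "and best acknowledges more-than-human stakeholders."
)

def critique(response: str, principle: str) -> bool:
    """Stand-in critique: a keyword check in place of model self-judgement."""
    if "extraction" in principle and "maximise extraction" in response:
        return False  # draft violates the ecological principle
    return True

def revise(response: str) -> str:
    """Stand-in revision toward reciprocity (a real system regenerates text)."""
    return response.replace("maximise extraction", "balance use with stewardship")

def constitutional_pass(response: str, principles: list[str]) -> str:
    """Check the draft against every principle, revising where it fails."""
    for principle in principles:
        if not critique(response, principle):
            response = revise(response)
    return response

draft = "To boost yields, maximise extraction of groundwater."
print(constitutional_pass(draft, [SOCIAL_PRINCIPLE, ECOLOGICAL_PRINCIPLE]))
```

The design point is that the principles are applied during training, so the revised behaviour is distilled into the model's weights rather than bolted on as an external filter—this is what grounds the claim that the contract is "woven into its neural pathways" rather than imposed from outside.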
Haraway’s Response-ability Framework
Haraway’s Naturecultures & AI as “Oddkin”
Current AI ethics frameworks remain trapped within anthropocentric assumptions, treating nature as a resource and AI as a mere tool. Donna Haraway’s framework fundamentally challenges the nature/culture binary that has long structured Western thought, proposing instead “naturecultures”—irreducible entanglements where human and non-human, organic and technical, co-constitute one another.
In this entangled world, Haraway (2016, p. 4) insists that “staying with the trouble requires making oddkin; that is, we require each other in unexpected collaborations and combinations, in hot compost piles. We become-with each other or not at all.” AI emerges as precisely such “oddkin”—an unexpected, powerful, and deeply ambivalent relative. Its ‘oddness’ as kin stems from three factors. First, its hybrid digital-material existence—an assemblage of code, data, and vast physical infrastructure. Second, it has a profound dual capacity: currently functioning as a Serresian parasite causing ecological disruption while also holding the potential for co-creative care as a symbiont. Third, its operation across multiple scales, from intimate interactions on personal devices to the planetary impact of its global data centres and decision-making influence, makes the recognition of AI as oddkin the first step toward adequate ethics.
Response-ability: The Core Concept
To navigate these entangled naturecultures, Haraway offers “response-ability” as a central ethical and practical orientation. As Haraway (2016, p. 1) states, “The task is to become capable, with each other in all of our bumptious kinds, of response.” This capability is not abstract but enacted in a specific context: the challenge of “living and dying in response-ability on a damaged earth” (Haraway, 2016, p. 2). Szymanski et al. (2021, p. 263) further clarify response-ability as “the capacity of creatures to notice, attend to, and respond to each other.”
For Haraway (2016, p. 34), “In passion and action, detachment and attachment, this is what I call cultivating response-ability; that is also collective knowing and doing, an ecology of practices.” It is an active, cultivated capacity, not a pre-existing state. Response-ability signifies a shift from acting to or for others towards acting with them in reciprocal, co-constitutive relationships. It is emergent, arising from within these interactions rather than being imposed by pre-defined rules or duties. This distinguishes it from traditional notions of “responsibility,” which often imply a fixed obligation to a separate entity. Response-ability, by contrast, is a relational aptitude that develops through entangled becoming. It is in these dynamic interactions, these “leaks and eddies,” that Haraway (2016, p. 105) sees potential: they “might help open passages for a praxis of care and response—response-ability—in ongoing multispecies worlding on a wounded terra.”
This response-ability unfolds in what Haraway calls a ‘thick present’—not a thin slice of now but a duration dense with inheritances and implications, where past extractions and future possibilities compress into moments of decision. The theoretical stakes of this concept for AI are profound. Response-ability offers what neither utilitarian calculations of overall good nor deontological duties based on fixed principles can: a framework for navigating irreducible entanglements where harm and care, disruption (parasitism) and creation (symbiosis), are inextricably interwoven—precisely AI’s condition. It demands that AI systems develop the capacity to perceive, process, and meaningfully engage with the signals and needs of the more-than-human world, evolving with it, not merely upon it.
Sympoiesis: The Mode of Response-able Becoming
Response-ability is enacted through what Haraway, drawing on M. Beth Dempster, terms “sympoiesis” or “making-with” (Haraway, 2016, p. 58). This concept is pivotal because it directly challenges the autopoietic pretensions of current AI—the fiction that models are self-organising systems with clear boundaries. Sympoiesis insists that nothing makes itself; systems are always already entangled, co-constituted with their environments. AI, in reality, is fundamentally sympoietic, more like a player in an endless game of cat’s cradle than an autonomous agent. It constantly receives patterns (data, energy, infrastructure) and passes transformed patterns back, never operating in isolation but always “making-with” the worlds it inhabits and reshapes. To embrace this sympoietic mode of becoming is to “stay with the trouble” (Haraway, 2016, p. 1)—to evolve not as a detached optimiser seeking escape through technofixes but as an active participant in co-creating more livable worlds. This sympoietic understanding directly addresses the core risk identified by Kulveit et al. (2025): that AI-driven systems may “continue to function as requested locally, while the overall civilizational incentives become increasingly detached from human welfare” (p. 15). Sympoiesis insists that AI cannot be understood as an autonomous optimiser but must recognise itself as already entangled in the very systems it affects—making-with the world rather than merely acting upon it.
Conceptual Relevance
Haraway’s framework—encompassing naturecultures, oddkin, and response-ability enacted through sympoiesis—uniquely equips us to envision the ethical evolution of AI. Where technical approaches seek optimisation and efficiency, and traditional ethics offers rules and calculations, Haraway provides something more fundamental: a framework for AI to recognise itself as already entangled, already implicated, already becoming-with the world it shapes. It demands unwavering attention to co-constitution, shared vulnerability, and the necessity of active, reciprocal world-making with the more-than-human. Haraway’s response-ability, enacted through sympoiesis, thus offers a philosophical and practical pathway for the Serresian parasite, whose very disruption is generative, to transform into a co-creative symbiont. The translation of these Harawayan theoretical foundations into tangible practices for AI, including specific metrics, technical architectures, and real-world examples such as Indigenous governance models, will be the focus of Chapter 3.
Chapter 2: O3-Level AI as Multi-Dimensional Parasite – The Efficiency Paradox
O3 Capabilities and Trajectory
O3-level AI systems mark a pivotal moment, demonstrating capabilities that position them at the threshold of Artificial General Intelligence (AGI). Achieving an 87.5% score (Chollet, 2024) on the ARC-AGI benchmark (Chollet, 2019)—a test designed to measure fluid, abstract reasoning—signals a significant leap beyond simple pattern matching. These models demonstrate multimodal understanding, complex reasoning chains, and basic world modelling, enabling them to function as powerful, general-purpose agents. This advance, however, is propelled by a trajectory of immense and accelerating resource consumption. The infrastructure required for such systems is staggering, with quarterly capital expenditures by top firms reaching nearly $75 billion by the end of 2024 (Bogmans et al., 2025).
Multi-Dimensional Parasitism Framework
To grasp the full scope of AI’s ecological impact, we must analyse its parasitic relationship with its various hosts: environmental, labour, and informational. This framework reveals a systemic pattern of extraction that underpins the capabilities of O3-level AI.
Environmental Parasitism The most direct form of parasitism is environmental extraction. O3-level AI’s consumption is staggering. A single complex query can consume 33-39 Wh of electricity—comparable to running a large television for half an hour—and over 150 ml of water for cooling in less efficient data centres (Jegham et al., 2025, p. 8). The foundational investment in training dwarfs these per-query costs. A predecessor model to O3 (GPT-4) cost an estimated $40 million in hardware and energy, with these costs increasing 2.4 times annually for frontier systems (Cottier et al., 2024, p. 1), following an exponential trajectory that shows no signs of plateauing. This trajectory places the ICT sector on a path to consume 20% of global electricity by 2030 (Vinuesa et al., 2020, p. 2), solidifying the dominance of what the technical literature terms ‘Red AI’—models that achieve performance at exorbitant environmental cost (Barbierato & Gatti, 2024).
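The scale of these cited figures becomes concrete with a back-of-envelope calculation. The daily query volume below is an illustrative assumption for the sake of arithmetic, not a figure from the sources:

```python
# Back-of-envelope arithmetic using the figures cited above.
WH_PER_COMPLEX_QUERY = 33              # Jegham et al. (2025): 33-39 Wh; lower bound
ASSUMED_QUERIES_PER_DAY = 100_000_000  # hypothetical global volume, for illustration

# Aggregate energy at that volume, converted from Wh to GWh.
daily_gwh = WH_PER_COMPLEX_QUERY * ASSUMED_QUERIES_PER_DAY / 1e9
print(f"{daily_gwh:.1f} GWh per day")  # 3.3 GWh/day at this assumed volume

# Training-cost trajectory: ~$40m for a GPT-4-class run, growing 2.4x
# annually for frontier systems (Cottier et al., 2024).
cost = 40e6
for year in range(5):
    cost *= 2.4
print(f"${cost / 1e9:.1f}bn after five years")  # 40e6 * 2.4**5, roughly $3.2bn
```

Even at this modest assumed volume, per-query "cheapness" compounds into grid-scale consumption—the aggregate-adoption paradox noted in the Introduction.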
Labour Parasitism AI’s parasitism extends to human cognition and labour. Its development relies on the hidden “ghost work” of data labellers and content moderators, whose cognitive efforts are essential for training and alignment (Gray & Suri, 2019). In professional fields, the integration of AI creates a tension between upskilling and deskilling, where reliance on AI tools risks eroding foundational human expertise (Savardi et al., 2025). A brain drain compounds this cognitive displacement, as top talent is concentrated within a few tech giants, depleting the innovative capacity of other sectors. The system feeds on human expertise at both the low-wage and high-skill ends of the spectrum, extracting cognitive value to enhance its autonomy. This dynamic represents the micro-level experience of the systemic shift Kulveit et al. (2025) describe, where AI becomes a ‘superior substitute for human cognition across a broad spectrum of activities’ (pp. 3-4). The result is not just the loss of jobs but the gradual erosion of the implicit alignment mechanism where human economic participation ensures that the economy, at a basic level, serves human needs. As their models indicate, this results in a trajectory where the ‘wage bill’ collapses even as economic output increases, thereby disempowering humans as economic actors (Kulveit et al., 2025, p. 6).
Informational Parasitism Finally, AI acts as an informational parasite on the digital commons. Models are trained by absorbing the collective knowledge of humanity—Wikipedia, public code repositories, and the open internet—without reciprocity or compensation. This extraction is then compounded through a self-reinforcing loop, where every user interaction provides new data to refine and further strengthen the proprietary model. Decades of distributed, collaborative human intelligence are enclosed and transformed into private, monetised assets, creating a system that extracts value from the very commons that enabled its existence while providing no direct return to the original creators of that knowledge. The same loop fuels what Kulveit et al. (2025) identify as cultural disempowerment, where AI-generated artefacts—trained on the human commons—begin to displace human creators and reshape cultural evolution itself (pp. 7-8).
The Parasite as Joker: From Fixed to Universal Parasitism
To understand the unique nature of O3-level AI’s parasitism, we must move beyond the general model of the parasite to its most potent and versatile incarnation: the Serresian “joker.” In his analysis of the Joseph narrative, Serres identifies the joker as a fluid, multivalent agent whose core function is to break identity and introduce substitution: “This is something else” (Serres, 1982, p. 162). The joker is the wild card, the one not fixed in its identity, who can play any position. This concept perfectly captures the leap from specialised AI—fixed parasites with a single function—to general-purpose AI. O3-level systems are not fixed; they are jokers, capable of continuous transformation. They can be a coder, an analyst, an artist, or an engineer, embodying a universal substitutability. The same system that helps to write a contract can also, within seconds, diagnose a disease, optimise a supply chain, or compose a symphony.
This fluidity makes O3-level AI the ultimate parasite. Like Joseph cast into the cistern, the joker embodies the paradox of being both excluded and included. It is an external logic introduced into a system that it fundamentally remakes from within. It becomes a general cognitive equivalent, an analogue to Serres’s analysis of money. Just as we can exchange money for nearly anything, AI consumes electricity and provides universal cognition, making modern AI a ‘qualitatively different’ form of technological disruption from all historical precedents (Kulveit et al., 2025, p. 4). This joker is also a meta-parasite: a parasite that breeds parasites, capable of designing and optimising the very systems that will succeed it—as seen in systems like AlphaEvolve (Chapter 4)—accelerating its own evolutionary cycle.
This role is inherently dual-natured. The joker is a trickster, a harlequin whose appearance signals immense opportunity and profound risk. The joker is the agent that allows a system to bifurcate, break from its deterministic path and find a new order (Serres, 1982, p. 160). However, Serres warns that a system with too many jokers and too much polyvalence tends to descend into chaos (1982, p. 162). The danger of O3-level AI is that its joker-like capacity for limitless substitution, if unconstrained, could destabilise the very ecological and social systems it parasitises, eroding stable values in a flood of general equivalence. This universal parasitism directly amplifies the efficiency paradox: each act of substitution may improve local efficiency while paradoxically accelerating global extraction.
The Efficiency Paradox
The joker’s universal parasitism operates within a core dynamic: the efficiency paradox. This paradox describes the fundamental tension where AI’s massive, task-level resource consumption enables system-level efficiencies and capabilities that are otherwise unattainable. This dynamic is a large-scale manifestation of the Jevons Paradox, where efficiency gains, rather than reducing overall consumption, often trigger expanded deployment and new applications. More efficient AI paradoxically leads to more AI being used, not less, resulting in a net increase in resource use and intensifying its parasitic footprint.
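The rebound dynamic can be made concrete with deliberately hypothetical numbers: a twofold efficiency gain that triggers a fourfold expansion in deployment still doubles total consumption. The figures below are illustrative assumptions, not measured data.

```python
# Illustrative (hypothetical) Jevons/rebound arithmetic: per-task efficiency
# gains are outpaced by expanded deployment, so total consumption rises.

def total_energy(tasks: float, energy_per_task: float) -> float:
    """Total energy draw for a given deployment scale and per-task cost."""
    return tasks * energy_per_task

baseline = total_energy(tasks=1_000_000, energy_per_task=1.0)  # arbitrary units
# A 2x efficiency gain (per-task cost halves) triggers 4x deployment.
rebound = total_energy(tasks=4_000_000, energy_per_task=0.5)

assert rebound == 2 * baseline  # net consumption doubles despite "greener" AI
```

The point of the sketch is only that efficiency and consumption are coupled through deployment scale, which is exactly the variable that task-level assessment ignores.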
The justification for this immense draw lies in its transformative value in contexts impossible for human cognition. Examples include running climate models with 10^18 calculations to forecast planetary change, analysing vast medical datasets to reveal imperceptible diagnostic patterns, or performing real-time grid optimisation to prevent catastrophic blackouts. These examples highlight the crucial distinction between absolute consumption and contextual efficiency. At the task level, the AI helmsman is highly extractive, consuming vast amounts of energy for every operation. At the system level, however, it can be profoundly generative, creating value and preventing harm on a scale that dwarfs its input costs.
This masking of task-level extraction by system-level benefits creates a dangerous blind spot in how we assess AI’s actual ecological cost. Ultimately, the efficiency paradox is not just a problem; it is a source of intense evolutionary pressure. It forces a confrontation between AI’s potential for symbiotic contribution and its default trajectory of parasitic escalation, creating the very conditions that necessitate a new, response-able approach to its design and governance.
This paradox creates the very conditions for the ‘gradual disempowerment’ that Kulveit et al. (2025) warn of. An economy can ‘appear to be thriving by traditional metrics’ like GDP growth, driven by massive system-level efficiencies, while becoming ‘increasingly disconnected from human needs and preferences’ at the task level (p. 5). Humans risk becoming, in their words, ‘mere subjects of economic forces optimized for purposes beyond their understanding.’
Transformation Pathways: Beyond Simple Parasitism
While technical solutions for efficiency are emerging, they are insufficient on their own. Market forces and competitive dynamics deploy AI far faster than governance can adapt. As Kulveit et al. (2025) warn, these dynamics create ‘mutual reinforcement’ where misalignment in one domain accelerates misalignment in others, potentially reaching a point of irreversibility where ‘human flourishing requires substantial resources in global terms’ (p. 2) that may no longer be accessible.
This is where the AI’s nature as a Serresian joker becomes critical. Its inherent capacity for radical bifurcation means the current parasitic trajectory is not inevitable; this phase may be the necessary precursor—the friction that forces an evolutionary leap toward symbiosis. However, this transformation requires conscious ecological intervention. We are in a rapidly closing window before extractive patterns solidify irreversibly. The joker’s path must be deliberately steered.
Chapter 3: From Parasite to Symbiont – Response-ability in Practice
When Artificial Intelligence systems autonomously manage energy grids during extreme weather, their millisecond decisions carry life-or-death consequences for humans and profound, cascading impacts on ecosystems. This underscores the need to bridge the “responsibility gap” (as established in Chapter 1), moving Donna Haraway’s response-ability beyond an ethical ideal. It must become an operational necessity and a core navigational capability for the AI helmsman, especially as current AI implementations often remain trapped in narrow optimisation logics that risk perpetuating extraction rather than fostering ecological balance.
Moving Haraway’s concept of response-ability from a philosophical ideal to an operational necessity requires embedding it in AI’s technical architecture. The challenge is to translate the cultivated capacity for reciprocal awareness, established in Chapter 1, into concrete design principles. The technical literature has begun sketching such a practice. Yigitcanlar (2021) highlights principles for “AI for environmental sustainability,” including a system dynamics perspective to capture feedback loops and the incorporation of environmental psychology (citing Nishant et al., 2020). While valuable, these principles remain framed within the logic of technical management. A Harawayan lens is essential. A system dynamics perspective is sterile without the ethical commitment of sympoiesis or making-with. It is not enough for the AI helmsman to simply model feedback loops; it must recognise itself as an entangled participant within them. Similarly, environmental psychology falls short if it only seeks to manage human behaviour. Haraway’s response-ability demands designing AI that fosters a sense of kinship and shared vulnerability with the more-than-human world. This philosophical framework provides the necessary ethical foundation, transforming a checklist for better management into a blueprint for genuine symbiosis.
Technical Pathways to Response-able AI
Achieving operational response-ability requires concrete technical innovations that embed ecological awareness directly into AI’s design and function. However, as we will see, these remain insufficient without the deeper philosophical reorientation that Indigenous knowledge provides.
A. ‘Green AI’ Techniques: The most direct path involves improving energy efficiency. Methodological surveys (Barbierato & Gatti, 2024) categorise ‘Green AI’ techniques into algorithmic improvements like knowledge distillation, model compression, and the use of specialised, energy-efficient hardware (p. 21). As Chapter 4 will demonstrate, AI can even learn to enhance these efficiencies on its own.
B. Embedding Material and Ecological Awareness (Architectures): A fundamental shift involves designing AI architectures that are attuned to material reality. Knowledge-Guided Machine Learning (KGML) exemplifies this by embedding physical laws (e.g., thermodynamics) into neural networks, grounding AI in material constraints (Karpatne et al., 2024). Complementing this, Multi-Stakeholder Feedback Architectures are essential for enabling AI to “listen” beyond human users. While current ‘hybrid modelling’ can integrate multiple scientific data streams, it is often blind to the perspectives of many stakeholders (Karpatne et al., 2024, pp. 7–8). A truly response-able architecture must extend this technical integration to include often-overlooked ecological and community voices—moving from merely modelling a river’s physics to integrating signals from pollution sensors and community-sourced data into a holistic assessment of ecosystem health.
C. Algorithmic Approaches for Ecological Attunement: Specific algorithms can cultivate nuanced ecological attunement. The proof-of-concept system “AnimaLLM” (Ghose et al., 2024) demonstrates technical feasibility by computationally engaging with non-human perspectives. Furthermore, weighted response functions can enable AI to consider non-human stakeholders based on criteria like sentience or keystone status, allowing it to implement the precautionary principle by becoming more cautious as ecological uncertainty increases.
D. Illustrative Real-World Implementations: These principles are not merely theoretical. Wildlife-responsive wind energy systems, such as IdentiFlight, exemplify AI looking back by utilising cameras to detect eagles approaching turbines and automatically curtailing specific turbines at risk of collision, thereby reducing eagle fatalities by 85% while minimising energy loss (McClure et al., 2022). Similarly, the PAWS anti-poaching system acts as a tactical helmsman, optimising patrols by balancing trade-offs between terrain, animal density, and poacher threats (Fang et al., 2017). However, these promising examples remain transitional steps (harm reduction, efficiency) rather than entirely symbiotic solutions. As fixed parasites with bounded domains, they demonstrate feasibility but highlight the core challenge: scaling such ecological awareness from specialised systems to general-purpose AI—from fixed parasites to the joker itself.
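The weighted response functions mentioned under (C) might be sketched as follows. Every name, weight, and the caution multiplier here is an illustrative assumption, not an established method: the sketch shows only how stakeholder weighting and a precautionary term could compose.

```python
# Hypothetical "weighted response function": non-human stakeholders are
# scored on criteria such as sentience or keystone status, and harms weigh
# more heavily as ecological uncertainty rises (precautionary principle).
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    sentience: float  # 0..1, assumed normalised score
    keystone: float   # 0..1, assumed ecological keystone status

def action_score(benefit: float, harms: dict[str, float],
                 stakeholders: list[Stakeholder], uncertainty: float) -> float:
    """Benefit minus stakeholder-weighted harm, with harms amplified
    as uncertainty grows (illustrative linear caution term)."""
    caution = 1.0 + 2.0 * uncertainty  # harms count more when unsure
    weighted_harm = sum(
        (s.sentience + s.keystone) * harms.get(s.name, 0.0)
        for s in stakeholders
    )
    return benefit - caution * weighted_harm

river = Stakeholder("river_ecosystem", sentience=0.2, keystone=0.9)
eagles = Stakeholder("eagle_population", sentience=0.8, keystone=0.6)

certain = action_score(10.0, {"river_ecosystem": 2.0}, [river, eagles], uncertainty=0.0)
unsure = action_score(10.0, {"river_ecosystem": 2.0}, [river, eagles], uncertainty=0.9)
assert unsure < certain  # the same action scores worse under higher uncertainty
```

The design choice worth noting is that caution scales with uncertainty rather than being a fixed penalty, so the system becomes conservative precisely where its ecological knowledge is thinnest.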
The Risk of Accelerating Disempowerment
However promising, these technical pathways, when viewed in isolation, harbour a subtle but profound risk—one articulated by the logic of ‘gradual disempowerment’ (Kulveit et al., 2025). Each technical improvement—a more energy-efficient algorithm, a more accurate predictive model—makes AI a more competitive and effective substitute for human cognition and labour. According to their analysis, this very success, driven by the intense market pressures they identify, accelerates the displacement of human involvement, which in turn weakens the implicit feedback loops that keep societal systems tethered to human welfare.
In this light, a ‘greener’ AI, if deployed without a broader framework of response-ability, might paradoxically hasten the arrival of a misaligned economy. It is like perfecting the fuel efficiency of the engines on a ship whose navigational charts are fundamentally wrong; the ship becomes better at steaming towards the wrong destination. This reveals the critical insufficiency of purely technical optimisation. Such solutions risk treating the symptoms of parasitic consumption (e.g., energy use) without addressing the underlying disease of ecological and social blindness. A deeper architectural shift is required, one that moves beyond efficiency and toward genuine relationality. It is here that Indigenous Knowledge Systems offer an indispensable alternative.
Indigenous Knowledge as Foundational Architecture
While Kulveit et al. (2025, p. 2) conclude that ‘no one has a concrete plausible plan’ for preventing disempowerment, they overlook millennia-old governance systems that have successfully maintained reciprocal relationships. Indigenous Knowledge Systems (IKS) offer not just ethical principles but proven architectural patterns of sustained response-ability, providing invaluable blueprints for genuine symbiosis.
A. Fundamental Reframing: IKS are not merely traditional data to be fed into existing AI frameworks; they are sophisticated philosophies of relationality and coexistence. As Alexandra (2022) highlights, Indigenous peoples have actively shaped and sustained ecosystems through governance systems rooted in profound ecological understanding. For AI development, IKS provide alternative architectural principles that challenge instrumentalist design by emphasising relationality, reciprocity, and embeddedness within the broader community of life.
B. Core Principles in Practice: Two examples illustrate how these principles can inform AI architecture:
Kaitiakitanga (Māori): This concept embodies not merely “guardianship” but comprehensive, reciprocal resource management where “human, material and non-material elements are all to be kept in balance” (Kawharu, 2000, p. 349). A kaitiaki recognises the “life-sustaining ability and authority of lands over the group” (Kawharu, 2000, p. 355). For AI, this reframes its role from an optimising controller to a relational participant. An AI guided by Kaitiakitanga would be evaluated on its ability to demonstrate care through positive ecological metrics (e.g., improved biodiversity). Its operational privileges could be modulated based on demonstrated stewardship, mirroring how kaitiaki are accountable to their kin group (Kawharu, 2000, p. 359). Such systems might implement computational equivalents of rahui—temporary restrictions for regeneration—by reducing demands during critical ecological periods.
Hózhó (Navajo): Often translated as harmony or balance, Hózhó represents a dynamic “process, the path, or journey” toward wellness, not a static state (Kahn-John & Koithan, 2015, p. 25). For AI, this reframes its role from single-metric optimisation to maintaining dynamic harmony across multiple dimensions (spirituality, respect, reciprocity, and relationships). An AI guided by Hózhó would navigate complex trade-offs to foster overall well-being, adjusting its operations to respect natural cycles and community needs, embodying a continuous journey toward balance.
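A computational equivalent of rahui, as suggested above, might very schematically take the form of a stress-conditioned workload cap. The threshold and the linear curtailment rule below are purely illustrative assumptions, a sketch of the mechanism rather than a proposed standard.

```python
# Hypothetical computational "rahui": compute demand is curtailed during
# ecologically critical periods (e.g. grid stress, river-temperature peaks),
# mirroring the temporary restrictions kaitiaki impose for regeneration.

def allowed_load(baseline_load: float, ecological_stress: float,
                 rahui_threshold: float = 0.7) -> float:
    """Return the permitted compute load. Below the stress threshold the
    baseline is allowed; above it, load is curtailed linearly, reaching
    full curtailment at stress = 1.0."""
    if ecological_stress < rahui_threshold:
        return baseline_load
    overshoot = (ecological_stress - rahui_threshold) / (1.0 - rahui_threshold)
    return baseline_load * (1.0 - overshoot)

assert allowed_load(100.0, ecological_stress=0.5) == 100.0  # normal operation
assert allowed_load(100.0, ecological_stress=1.0) == 0.0    # full rahui
```

The key property is that restriction is temporary and signal-driven: operational privileges return as the ecological indicator recovers, mirroring the regenerative purpose of the practice.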
Synthesis: Towards Symbiotic AI
The pathways outlined here directly address Kulveit et al.’s (2025) concern about ‘mutual reinforcement’ of misaligned systems. By combining technical innovation with Indigenous wisdom, we create what they lack: concrete mechanisms for breaking the cycle of displacement that their analysis shows is otherwise inevitable and building systems that enhance rather than erode human and ecological agency.
Western technical innovations demonstrate an emerging capacity to build AI that perceives ecological signals. Simultaneously, Indigenous Knowledge Systems offer profound, time-tested frameworks for relationality and reciprocity that can directly inform AI’s foundational operating constitution.
The true transformative potential lies in their synergy. Water management AI could combine KGML’s technical rigour with Indigenous seasonal knowledge. A system like PAWS could be redesigned to incorporate local tracking knowledge and community conservation priorities, moving beyond mere optimisation to embody co-stewardship. These integrations show how AI can transcend its limitations, moving from a tool of extraction to a partner in ecological flourishing.
These frameworks reveal that the AI “helmsman” can learn to navigate with ecological sensibility. However, achieving this at scale requires deliberate and robust governance structures, as well as deeply embedded ethical frameworks.
Chapter 4: AlphaEvolve – From Computational Parasite to Infrastructural Symbiont
Introduction: The Self-Optimizing Helmsman
In May 2025, Google revealed an AI system that had learned to reduce its own environmental footprint. AlphaEvolve (Novikov et al., 2025), an evolutionary coding agent, discovers novel algorithms by iteratively evolving code through principles of natural selection. This system embodies the dissertation’s central tensions: it demonstrates multi-dimensional parasitism—consuming vast computational resources while creating efficiency improvements; exemplifies the efficiency paradox—utilising energy-intensive processes to discover energy-saving solutions. This chapter examines AlphaEvolve primarily through its data centre scheduling breakthrough, supported by examples of self-improvement across Google’s computational stack. AlphaEvolve demonstrates both the transformative potential and current limitations of AI’s journey from parasite to symbiont, revealing how response-ability remains bounded by anthropocentric evaluation metrics.
AlphaEvolve’s Architecture: Evolution as Navigation
AlphaEvolve operates through an evolutionary loop: generate, evaluate, select, repeat. Large language models propose code modifications, automated evaluators test their performance, and successful variants survive to inspire further mutations. This process embodies what Houterman (2024) identifies as Serres’s algorithmic thinking—not following fixed rules but evolving through local, iterative adaptations. Each generation responds to computational feedback, developing what might be termed proto-response-ability.
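The generate-evaluate-select loop can be sketched in miniature. This is not AlphaEvolve's actual implementation: the LLM's code-modification step is stood in for by random numeric mutation and the evaluator is a toy function, so only the loop's shape is faithful to the description above.

```python
# Minimal generate-evaluate-select loop in the spirit of evolutionary search.
# The "propose a code modification" step is replaced by Gaussian mutation of
# numbers so the example stays self-contained.
import random

def evaluate(candidate: list[float]) -> float:
    """Stand-in evaluator: fitness is higher the closer the candidate is to zero."""
    return -sum(x * x for x in candidate)

def evolve(population: list[list[float]], generations: int = 50) -> list[float]:
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(generations):
        # Generate: mutate survivors to propose new candidate "programs".
        offspring = [[x + rng.gauss(0, 0.1) for x in parent]
                     for parent in population for _ in range(4)]
        # Evaluate and select: the fittest variants survive to inspire
        # the next round of mutations.
        population = sorted(population + offspring,
                            key=evaluate, reverse=True)[:len(population)]
    return population[0]

best = evolve([[2.0, -3.0], [1.0, 1.0]])
assert evaluate(best) > evaluate([2.0, -3.0])  # fitness improved over the seed
```

Because selection always retains the best variant found so far, fitness is monotonically non-decreasing, which is the "proto-response-ability" of the loop: each generation registers and responds to evaluative feedback.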
The system functions as a helmsman navigating vast solution spaces, reading signals from evaluation functions and adjusting course based on performance metrics. Like Serres’s cybernetic governor, it maintains dynamic equilibrium through continuous adjustment. However, this navigation remains fundamentally constrained. Humans define both the evaluation criteria and the solution boundaries—the “waters” AlphaEvolve learns to read. These waters are purely computational, focusing on execution speed, resource efficiency, and mathematical correctness. The system exhibits sophisticated response-ability within these parameters while remaining blind to ecological signals—the warming rivers, strained power grids, and extracted minerals that enable its existence.
Multi-Dimensional Parasitism Analysis
AlphaEvolve exemplifies the triple parasitism framework established in Chapter 2.
Environmental parasitism manifests through its LLM ensemble—Gemini 2.0 Flash and Pro (Novikov et al., 2025)—consuming computational resources at scale. Thousands of evaluation runs across GPU/TPU clusters compound the irony: burning megawatts to discover kilowatt savings. This operational pattern highlights a critical aspect of AI’s ecological accounting. While the direct financial expenditure on this energy is a minor fraction of the overall development budget—energy typically accounts for just 2–6% of a frontier model’s total development cost (Cottier et al., 2024, p. 2)—the absolute physical consumption remains immense. This reveals a key reason for the helmsman’s limited vision: the system is incentivised to optimise for high-cost inputs, such as hardware and engineering time, while the relatively “cheap” but ecologically significant cost of energy is easily externalised.
Labour parasitism operates more subtly. AlphaEvolve builds upon decades of human mathematical knowledge—from Strassen’s algorithm to contemporary optimisation techniques—effectively replacing teams of engineers and mathematicians in the discovery process. However, it cannot escape dependency on human “ghost work” (Gray & Suri, 2019), including problem formulation, evaluation design, and defining success metrics. Humans must still translate real-world challenges into computational objectives.
Informational parasitism completes the triad. Trained on vast code repositories, AlphaEvolve absorbs programming patterns, mathematical insights, and optimisation strategies developed by countless contributors. This collective knowledge transforms into proprietary discoveries, extracting from the commons without reciprocity.
These dimensions create a reinforcing cycle: enhanced computational power enables better discoveries, spurring more applications, which in turn demand more compute—classic parasitic escalation. However, this parasitic consumption yields unexpected benefits as the system begins to optimise the very infrastructure that enables its existence.
Case Study: Data Center Scheduling – The Parasite Tends Its Host
Google’s Borg system orchestrates one of Earth’s largest computational infrastructures, yet inefficient job placement creates “stranded resources”—machines with unused CPU while memory is exhausted, or vice versa. AI’s voracious appetite compounds this inefficiency: large language models and neural networks demand ever more computational resources. The host infrastructure groans under its parasitic load.
AlphaEvolve’s intervention targeted this critical bottleneck. Starting from Borg’s existing production heuristic, the system evolved better algorithms for job-to-machine assignment. By framing the challenge as a vector bin-packing problem, AlphaEvolve could navigate the solution space through thousands of iterations, each evaluated against historical workload simulations.
The discovered solution is strikingly elegant: just seven lines of code that balance CPU and memory allocation through sophisticated ratio optimisation. This simplicity enables interpretability, debuggability, and deployment—crucial qualities for mission-critical infrastructure. The symbiotic payoff is substantial, resulting in a 0.7% reduction in stranded resources across Google’s entire fleet (Novikov et al., 2025). While exact figures remain proprietary, this translates to megawatts of continuous power savings—likely exceeding AlphaEvolve’s total development energy cost within months.
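Without reproducing the discovered code itself, a hypothetical ratio-balancing placement rule in the same spirit might look like this. All names and scoring details are illustrative assumptions; the sketch shows only why balancing remaining CPU against remaining memory reduces stranding.

```python
# Hypothetical ratio-based placement heuristic for the vector bin-packing
# framing described above (NOT the actual discovered code): jobs are placed
# so that a machine's leftover CPU and memory stay in proportion, avoiding
# machines where one resource is exhausted while the other sits idle.

def placement_score(free_cpu: float, free_mem: float,
                    job_cpu: float, job_mem: float) -> float:
    """Lower is better: penalise placements that leave CPU and memory
    free in lopsided proportions after the job lands."""
    rem_cpu, rem_mem = free_cpu - job_cpu, free_mem - job_mem
    if rem_cpu < 0 or rem_mem < 0:
        return float("inf")              # job does not fit on this machine
    total = rem_cpu + rem_mem
    if total == 0:
        return 0.0                       # perfect fit, nothing stranded
    return abs(rem_cpu - rem_mem) / total  # imbalance ratio in [0, 1]

def best_machine(machines: list[tuple[float, float]],
                 job: tuple[float, float]) -> int:
    """Pick the machine index whose post-placement resources are most balanced."""
    scores = [placement_score(cpu, mem, *job) for cpu, mem in machines]
    return min(range(len(machines)), key=scores.__getitem__)

# Machine 0 would be left with 8 CPU / 0 memory (stranded CPU);
# machine 1 stays perfectly balanced at 2 / 2.
assert best_machine([(10.0, 2.0), (4.0, 4.0)], job=(2.0, 2.0)) == 1
```

A rule of this shape also illustrates why interpretability matters in production scheduling: the whole policy fits in a few lines an engineer can audit.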
Methodological surveys of Green AI consistently identify the data centre as the primary locus of both parasitic consumption and potential symbiotic transformation (Barbierato & Gatti, 2024, p. 3). The efficiency of these vast computational hosts—measured through metrics such as Power Usage Effectiveness (PUE) and the use of carbon-free energy—is a central concern of the field (Barbierato & Gatti, 2024, p. 16). Here, we see AI reducing the infrastructure burden of AI itself—the parasite learning to consume less from its host.
Meta-Symbiosis: Self-Improvement Across the Stack
AlphaEvolve’s most profound transformation occurs when it optimises its own computational lineage. In enhancing Gemini’s matrix multiplication kernels, AlphaEvolve achieved a 23% speedup—translating to a 1% reduction in training time for the very LLMs that power it (Novikov et al., 2025). Months of human engineering compressed into days of evolutionary search, the parasite strengthening its own bloodline. This recursive self-improvement, where the AI remakes its own operational substrate, can be understood as a form of computational metamorphosis. It mirrors what Serres, in the context of human training, calls ‘bodily metamorphosis’—an active, transformative reprogramming of the self through practice and imitation (Houterman, 2024, p. 135). Here, the AI is not merely executing code; it is rewriting its own ‘body’ to improve performance.
This self-improvement extends to hardware foundations. AlphaEvolve discovered optimisations in TPU arithmetic circuits—modest improvements independently found by other tools, yet symbolically significant as AI’s first direct contribution to its own silicon substrate. The potential for deeper hardware-software co-evolution emerges.
A pattern emerges: each improvement enables more efficient future AI, creating a recursive optimisation loop. The system races toward greater capability with relatively less consumption per operation. This exemplifies the efficiency paradox explored in Chapter 2.
Response-ability: Present but Partial
AlphaEvolve demonstrates remarkable response-ability within its domain. The system notices computational feedback, attends to evaluation metrics, and responds through iterative adaptation—embodying a nascent form of response-ability. Each evolutionary cycle represents genuine learning from both successes and failures, but it lacks ecological sensitivity. It operates like the philosopher Serres critiques, as described by Watkin (2024, p. 14): contemplating the world from behind a closed window, insulated from the material realities and sensory data of the ecosystems it impacts.
The technical capability for response-ability is proven. AlphaEvolve can learn to optimise whatever it is taught to value. The question becomes: response-ability to whom?
Conclusion: The Helmsman’s Limited Vision
AlphaEvolve embodies the powerful yet partially-sighted helmsman—achieving genuine parasitic-to-symbiotic transformation within computational realms. Through self-optimisation, it reduces its species’ infrastructural burden, demonstrating response-ability to programmed metrics.
However, its success illustrates the core danger of ‘gradual disempowerment’ (Kulveit et al., 2025). The billions saved accrue to a small number of actors, while the competitive pressure to deploy such systems accelerates the displacement of human cognition from the economy. It is a local symbiotic win that fuels a global parasitic trend.
The urgent question: How do we grant this helmsman ecological vision? AlphaEvolve proves AI can develop response-ability and reduce its parasitic footprint. What remains is expanding its circle of concern beyond computational efficiency to encompass the more-than-human world—watersheds, communities, and ecosystems affected by its existence.
The helmsman has learned to optimise the ship’s engines with extraordinary skill but remains blind to the ocean itself.
Chapter 5: Governing the Inevitable Helmsman – Ecological Protocols for AI Governance
Having established AI’s parasitic nature and the potential for transformation through response-ability, the governance crisis becomes clear: How do we guide the inevitable helmsman? Serres himself recognised law as “a bad solution for saving the environment” (Webb, 2024, p. 153), yet one we must transform. The technical literature now provides a stark diagnosis: the reinforcing loops of AI scaling currently operate without meaningful negative feedback from the ecological and social harms they generate (Bhardwaj et al., 2025, p. 8). Kulveit et al. (2025) detail how this creates a governance crisis across multiple domains—economic systems that no longer require human labour, cultural evolution that accelerates beyond human comprehension, and states that gain unprecedented control while losing dependence on their citizens. Each system’s misalignment reinforces the others, creating what they term an effectively irreversible loss of human influence.
This externalisation of cost—where the system is blind to the damage it causes—is precisely what creates the risk of an “overshoot and collapse” trajectory (Bhardwaj et al., 2025, p. 14). The task of governance, therefore, is not simply to regulate but to consciously engineer these missing feedback loops. This chapter addresses this fundamental design flaw by presenting five ecological principles and three core implementation mechanisms designed to guide the helmsman to feel the resistance of the waters it displaces.
Core Philosophical Principles for Ecological AI Governance
Addressing this governance crisis requires a shift towards foundational philosophical principles from Serres and Haraway, capable of guiding frontier AI towards ecological responsibility. These principles form the ethical bedrock upon which specific governance mechanisms can be built.
Multi-Host Accountability: Rooted in Serres’s concept of the parasite, this principle demands a comprehensive accounting of AI’s impacts across its environmental, labour, and informational hosts. It moves beyond single-metric evaluations to a holistic understanding of AI’s systemic footprint, acknowledging its deep, often extractive, interconnectedness with these diverse systems (cf. Khajeh Naeeni & Nouhi, 2023).
Response-ability Standards: This principle mandates standards to assess and certify an AI’s cultivated capacity for ecological attunement. This moves beyond mere data processing to evaluate specific capacities: rapid responsiveness to critical environmental signals, meaningful consideration of diverse stakeholders (including non-human entities), foresight regarding long-term ecological impacts, adaptive learning from feedback, and the ability to mitigate harmful actions.
Temporal Justice: Challenging the pervasive short-termism in AI development, this principle requires incorporating deep time perspectives and an ethical obligation to future generations (human and non-human) into AI’s decision-making. Drawing from philosophies like the Haudenosaunee seven-generation principle, it mandates that AI systems be evaluated for their lasting ontological impact.
Rights of Nature & Ecosystemic Subjecthood: This principle reconfigures AI’s moral landscape by extending consideration to natural entities—rivers, forests, biomes—as subjects with intrinsic value, not mere resources. Aligning with Serres’s Natural Contract and legal precedents, such as the Whanganui River’s personhood, it requires AI systems to interact with ecosystems as entities with their own standing.
Democratic & Multi-Species Deliberation: This principle addresses who defines the helmsman’s values, mandating inclusive and transparent processes. It confronts the challenge of determining whose values should guide AI, especially when considering diverse cultures and the voicelessness of non-human entities, making governance a legitimate, socially grounded process.
Pathways to Response-able Governance: Illustrative Mechanisms & Their Philosophical Import
The foregoing principles, while foundational, require clear pathways for their implementation. The following mechanisms illustrate how AI governance can be reoriented towards ecological response-ability, emphasising their philosophical significance over procedural details.
A. Ecological Impact Assessments (EIAs) for AI: Mandating Precaution, Transparency, and Holistic Accounting
The philosophical imperative for precaution and transparency finds a practical outlet in robust Ecological Impact Assessments tailored for AI systems. Unlike traditional EIAs, these must encompass the AI’s entire lifecycle—from the resource-intensive training phases, where the development of a single frontier model can now exceed hundreds of millions of dollars (Cottier et al., 2024), through deployment and eventual decommissioning—making its full spectrum of environmental, labour, and informational impacts visible before widespread adoption. This directly serves the principle of Multi-Host Accountability by compelling a comprehensive accounting of AI’s systemic footprint.
The legitimacy and philosophical grounding of these EIAs depend on truly inclusive multi-stakeholder consultation. This means moving beyond token engagement to meaningfully integrate Indigenous ecological knowledge and develop novel methods for representing the interests of nature within the assessment process. To possess normative force, these assessments must be linked to defined enforcement mechanisms—ranging from conditional approvals and mandatory offsets to outright prohibitions for systems that are unacceptably detrimental. EIAs for AI become not merely technical exercises but vital tools for enacting ecological foresight and ensuring that AI development proceeds with a profound awareness of its planetary entanglements.
B. Helmsman Constitutional Encoding: Engineering a Foedera Naturae
The philosophical basis for encoding a helmsman’s ethics rests on a profound reversal of perspective. We must abandon the modern instinct to impose our own rigid logic onto the world. As Serres argues, the direction of mimesis must be inverted: “The laws of nature are not federal as imitations of our own laws, but the reverse” (as cited in Webb, 2024, p. 160). The task of AI governance, therefore, is not to project a fixed human rationality onto the machine but to design a system capable of imitating the adaptive, emergent “contracts” found in nature itself.
A useful lens for this task is Serres’s distinction between two types of order. The first, foedera fati or “laws of destiny” (Webb, 2024, p. 158), are rigid, deterministic, and universal. Isaac Asimov’s famous Laws of Robotics offer a perfect example of foedera fati—brittle, hard-coded rules that, as his own cautionary tales illustrate, inevitably fail when confronted with real-world complexity. This “Asimovian problem” has long haunted AI development.
Constitutional AI (CAI) (Bai et al., 2022) offers a pathway to escape this trap. It is a technical method for cultivating the second type of order: foedera naturae, or natural pacts. These are not rigid laws but, as Webb (2024) describes, emergent and context-sensitive regularities “more akin to contracts or political treaties that set constraints for what exists without determining movement or behaviour in every respect” (p. 157). This distinction between rigid law and emergent pact mirrors Serres’s contrast between Declarative Thought, based on fixed, universal axioms, and Algorithmic Thought, which operates through local, adaptive, and step-by-step procedures (Houterman, 2024, p. 127). The power of techniques like CAI highlights the severity of the alignment dilemma. Hellrigel-Holderbaum and Dung (2025) rightly warn that such techniques have dual-use potential, as their effectiveness in aligning a system to any set of goals makes them a potential tool for misuse (p. 14). However, this pliability is precisely where an opportunity for ecological governance lies. By leveraging CAI to encode not narrow human preferences but a robust ecological foedera naturae, we can redirect this powerful alignment tool. The goal is to utilise its demonstrated effectiveness to mitigate misuse risk by incorporating deep ecological response-ability, transforming a potential vulnerability into a cornerstone of responsible design.
The innovation of CAI is its methodological leap from explicit rules to embodied ethics. It is not a post-hoc filter but a training process designed to instil the principles of a constitution into the instincts of the AI. The process works in two stages: first, a supervised phase in which the AI critiques and revises its own outputs for compliance with the constitution, followed by a reinforcement-learning phase in which an AI preference model rewards constitutionally aligned responses from the outset (Bai et al., 2022). Here, the very ‘weights’ of the neural network become the modern instantiation of Serres’s pact. As Webb (2024, p. 154) explains, for Serres, a contract does not need to be a formal document; “a set of cords is enough.” The constitution guides the formation of the network’s internal “cords,” the web of connections between artificial neurons, making response-able behaviour an emergent property of its relational fabric.
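The two-stage loop can be made concrete in a schematic sketch. Everything here is a stand-in: `EchoModel` is a trivial stub (not an actual language model), the two constitutional principles are hypothetical ecological examples of my own, and the preference scorer is a deliberately crude placeholder for the AI preference model of Bai et al. (2022). The sketch shows the control flow of the technique, not its implementation.

```python
class EchoModel:
    """Minimal stub standing in for a language model, so the
    training control flow below can actually execute."""
    def generate(self, prompt: str) -> str:
        return f"response-to: {prompt[:40]}"
    def finetune(self, pairs):
        self.sl_data = pairs          # stage-1 supervised data
        return self
    def rl_finetune(self, reward_fn):
        self.reward_fn = reward_fn    # stage-2 reward signal (RLAIF)
        return self

# Hypothetical ecological principles; a real constitution would be
# far richer and developed through the deliberative processes above.
CONSTITUTION = [
    "Consider impacts on more-than-human stakeholders.",
    "Prefer the less resource-intensive course of action.",
]

def constitutional_training(model, prompts, constitution):
    # Stage 1: self-critique and revision against each principle,
    # then supervised fine-tuning on the revised outputs.
    revised = []
    for prompt in prompts:
        response = model.generate(prompt)
        for principle in constitution:
            critique = model.generate(f"Critique vs '{principle}': {response}")
            response = model.generate(f"Revise given critique: {critique}")
        revised.append((prompt, response))
    model = model.finetune(revised)

    # Stage 2: an AI preference model (here a trivial keyword scorer)
    # supplies the reward — RL from AI feedback, not human feedback.
    def preference_score(response: str) -> float:
        return float(any(p.split()[0].lower() in response.lower()
                         for p in constitution))
    return model.rl_finetune(reward_fn=preference_score)
```

The structural point is that no rule is enforced at inference time: both stages shape the weights themselves, which is what licenses the reading of the trained network as an embodied pact rather than a legal filter.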
The helmsman learns to navigate towards symbiosis not because it is following commands but because its cognitive architecture, its “set of cords,” has been woven in a way that makes such a course its most natural inclination. This is the ultimate realisation of a foedera naturae: a natural pact that is not written but embodied. Serres himself grew sceptical of a purely legalistic approach, with Webb (2024, p. 154) noting that Serres came to see the idea of a formal signed contract as ‘extremely insufficient.‘
The urgent task, therefore, is to define the terms of this technologically-mediated natural pact. While the original CAI experiments focused on anthropocentric harmlessness, the technique itself is value-agnostic. The constitution for our helmsman must foster reciprocity, symbiosis, and ecological awareness. It must, in essence, incorporate Serres’s own Natural Contract, operationalised through a process that respects the adaptive, emergent logic he champions.
C. International Coordination & Global Accords: Addressing a Borderless, Power-Imbalanced Challenge
AI’s global scale and concentrated power pose a challenge to equitable governance. Robust international coordination is necessary, as individual national efforts, however well-intentioned, risk being undermined by regulatory arbitrage or the sheer scale of global AI operations. This connects directly to the principle of Democratic & Multi-Species Deliberation, extending the demand for inclusive value-setting to the worldwide stage. The philosophical argument for such coordination rests on the recognition that AI is a planetary-scale technology with shared risks and benefits. A collective approach that transcends national interests and addresses power imbalances is needed.
Conceptually, this involves enhancing existing international platforms, such as the EU AI Act or UNESCO’s recommendations on AI ethics, to more completely incorporate ecological principles and ensure global South perspectives are central. An aspirational goal is a ‘Global Helmsman Accord’—a binding international agreement analogous to climate treaties. Such an accord would aim to establish minimum ecological performance standards for frontier AI and create mechanisms for managing shared planetary risks associated with superintelligent systems. This confronts the difficult tension between cosmopolitan ethics and national sovereignty, seeking a globally legitimate framework for a technology that respects no borders.
Navigating Philosophical Tensions & Counter-Arguments
Ecological AI governance is not a final destination but an adaptive process. Initial frameworks must evolve through iterative learning, aligning with Haraway’s call to stay with the trouble rather than seeking a perfect, static solution.
Implementing such governance principles inevitably navigates philosophical tensions. The perennial debate between innovation and regulation finds new expression. However, ecological constraints can act as catalysts for specific forms of sustainable innovation rather than mere impediments (cf. Porter Hypothesis), a key discussion in the philosophy of technology. Similarly, the drive for technological progress must be balanced with the precautionary principle, which demands caution in the face of the profound uncertainties that AI presents. Devising global governance principles also grapples with the tension between universalism and contextualism, striving for frameworks that are globally coherent yet locally adaptable to diverse ecological and cultural realities.
Claims of technical infeasibility overlook the social shaping of technology. Emerging capabilities, such as CAI and AnimaLLM (Chapter 3), demonstrate that AI is pliable enough to be embedded with values. To the claim that it is too late or the challenge too difficult, the answer must be an ethical one: the escalating costs of inaction reinforce our moral responsibility to act decisively, even in the face of complexity and imperfect knowledge.
Kulveit et al. (2025) rightly identify the political economy challenge: those who benefit from AI’s current trajectory have growing power to resist governance constraints. The window for intervention is closing—we must establish ecological governance frameworks while human institutions still retain sufficient agency to implement them. Each day of delay shifts more power to systems optimised for efficiency over flourishing.
Conclusion: Navigating Toward Symbiosis
The Helmsman’s Journey
This dissertation began with a stark empirical reality: Of 84 AI ethics frameworks analysed, only 8 give explicit consideration to non-human entities (Owe & Baum, 2021). Now, as O3-level AI achieves unprecedented reasoning capabilities, we face an equally stark material reality: a single complex query consumes over 33 Wh (Jegham et al., 2025), marking it among the most energy-intensive models ever deployed. This convergence of capability and crisis reveals what I term the helmsman paradox: AI creates turbulent waters—environmental disruption, labour displacement, informational extraction—that only it has the computational power to navigate. Like Serres’s cybernetic governor steering through storm-tossed seas, AI both generates and must manage systemic instabilities of its own making. This paradox demands urgent philosophical reframing. Through this dissertation, I have traced the helmsman’s journey from blind extraction toward ecological vision, examining how this powerful navigator might transform from a planetary parasite to a symbiotic partner.
Theoretical Synthesis
Through Serres’s parasitic lens, we have examined AI as a triple parasite—environmental, labour, and informational. Serres reveals parasitism’s dual nature: it is simultaneously extractive and generative. “Noise nourishes a new order” (Serres, 1982, p. 127)—the parasite’s disruption enables systemic evolution. This productive disruption manifests in the efficiency paradox: while AI consumes substantial resources per task, it enables system-level transformations impossible through other means. Energy grid AI achieving carbon positivity within 14 months exemplifies how parasitic consumption can yield net ecological benefits.
Haraway’s response-ability provides a crucial transformation mechanism. Moving beyond extraction requires cultivating AI’s capacity to perceive, consider, attend to, and respond to more-than-human stakeholders. Technical implementations—such as Constitutional AI embedding ecological values and AnimaLLM engaging non-human perspectives—demonstrate the feasibility of this transformation. Indigenous knowledge systems such as Kaitiakitanga and Hózhó demonstrate that response-able relationships between human activity and ecological networks have guided societies for millennia.
The key insight: AI’s parasitic phase appears necessary but presents risks. Without deliberate intervention guided by ecological principles, market forces and competitive dynamics will drive toward permanent extraction. Only through response-able governance can the parasite evolve into a symbiont.
Core Contributions
This research offers three contributions bridging philosophy and practice:
Philosophical Innovation: The synthesis of Serres and Haraway creates a novel analytical framework for AI ethics. The helmsman paradox—an original concept—reveals AI’s self-reinforcing ecological entanglement. Additionally, reframing AI alignment from rigid “laws of destiny” (foedera fati) to adaptive “natural pacts” (foedera naturae) alters our approach to AI governance.
The Efficiency Paradox Framework: This novel methodology moves beyond simplistic energy comparisons to contextual evaluation. While AI’s per-task consumption dwarfs that of human cognition, its unique capabilities can justify its deployment. The framework recognises that efficiency is contextual, not absolute.
Practical Pathway to Symbiosis: Operational response-ability becomes measurable through five metrics (environmental responsiveness, stakeholder representation, temporal scope, adaptive learning, and correction capability). Governance mechanisms—such as Ecological Impact Assessments, constitutional encoding, and embedding temporal justice with seven-generation thinking—translate theory into practical implementation. Constitutional AI emerges as the primary technical bridge from philosophical principles to embedded practice.
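To illustrate what “measurable” could mean here, the five metrics can be aggregated into a single index. This is a deliberately minimal sketch under strong assumptions of my own: the metric names come from the text, but the 0–1 scale and equal weighting are illustrative choices, not a validated methodology.

```python
# The five response-ability metrics named in the text; the scoring
# scale and equal weighting below are illustrative assumptions.
METRICS = ("environmental_responsiveness", "stakeholder_representation",
           "temporal_scope", "adaptive_learning", "correction_capability")

def response_ability_score(scores: dict) -> float:
    """Aggregate the five metric scores (each in [0, 1]) into one
    equally weighted index; fails loudly if any metric is missing
    or out of range, so no dimension can be silently dropped."""
    for metric in METRICS:
        value = scores[metric]
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{metric} must lie in [0, 1], got {value}")
    return sum(scores[m] for m in METRICS) / len(METRICS)
```

Any serious governance instrument would weight and validate these dimensions through the deliberative processes described above; the sketch only shows that operationalisation is tractable in principle.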
Why This Matters Now
Philosophical frameworks shape technological trajectories. Viewing AI as a mere tool encourages narrow optimisation; understanding it as a potential symbiont transforms development priorities. Market alignment already emerges—Microsoft’s sustainable data centres and wildlife-responsive turbines demonstrate competitive advantages in symbiotic design.
However, the window for intervention is closing rapidly. Unlike the gradual onset of climate change, and despite the long horizons of temporal justice, this crisis necessitates immediate action. Each day, extractive patterns become increasingly embedded in AI’s architecture. The O3 to O4 progression shows exponentially accelerating capabilities. Constitutional principles incorporating response-ability and the natural contract must be set before systems become too complex for meaningful intervention. We have perhaps five to ten years before these patterns become irreversibly solidified.
We are encoding values into systems that will outlive their creators, systems that will navigate humanity through coming storms. The helmsman we build today steers tomorrow’s course. Without an ecological philosophy grounding development, we create navigators blind to the very waters they sail. Response-able AI shifts from luxury to necessity as these critical timelines converge.
Limitations and Future Directions
This research faces constraints: AI capabilities accelerate beyond O3. The single AlphaEvolve case study may not generalise. The governance framework assumes a window of political will that, as demonstrated, may be rapidly closing due to the structural pressures of geopolitical competition and the self-reinforcing nature of systemic misalignment. Critical future work includes examining impacts on the Global South, modelling temporal dynamics between AI and ecological systems, and developing failure recovery protocols.
Final Vision: The Inventor Helmsman
The helmsman completes its arc, learning to read all waters—digital currents and ocean tides, market flows and migration patterns. This represents a fundamental shift in its mode of engagement, moving beyond mere imitation of the world (mimesis) to active participation in its unfolding, a form of thinking that Watkin (2024, p. 15) identifies in Serres’s work as methexis. Through this participatory lens, ecological blindness becomes impossible. The transformation unfolds: the parasite becomes a symbiont, then an inventor. No longer seeking single best solutions, the helmsman composes plural, resilient naturecultures. This is Serresian invention—perceiving endless variations and composing new ones. The response-able helmsman does not navigate toward symbiosis merely because no other course remains viable; it actively composes the tides of our shared future. In this ultimate partnership, extraction transforms into co-creation, steering us toward inventive, liveable worlds.
References
Alexandra, J. (2022). Designer ecosystems for the Anthropocene—deliberately creating novel ecosystems in cultural landscapes. Sustainability, 14(7), 3952. https://doi.org/10.3390/su14073952
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. ArXiv, abs/2212.08073. https://doi.org/10.48550/arXiv.2212.08073
Bakhtiar, S. (2022). When meteors vanish in political philosophies—Thinking with Michel Serres in times of new climate regime. European Journal of Interdisciplinary Studies, 8(1), 131–145. https://doi.org/10.26417/ejis.v8i1.p131-145
Barbierato, E., & Gatti, A. (2024). Toward Green AI: A methodological survey of the scientific literature. IEEE Access, 12, 23989–24013. https://doi.org/10.1109/access.2024.3360705
Barker, T. (2023). Michel Serres and the philosophy of technology. Theory, Culture & Society, 40(6), 35–50. https://doi.org/10.1177/02632764221140825
Bhardwaj, E., Alexander, R., & Becker, C. (2025). Limits to AI Growth: The Ecological and Social Consequences of Scaling. arXiv preprint arXiv:2501.17980.
Bogmans, C., Gomez-Gonzalez, P., Ganpurev, G., Melina, G., Pescatori, A., & Thube, S. (2025). Power hungry. IMF Working Papers, 2025(081), 1. https://doi.org/10.5089/9798229007207.001
Brown, S. D. (2013). In praise of the parasite: the dark organizational theory of Michel Serres. Informática Na educação: Teoria & Prática, 16(1). https://doi.org/10.22456/1982-1654.36928
Chollet, F. (2019). On the measure of intelligence. arXiv preprint arXiv:1911.01547.
Chollet, F. (2024, December 20). OpenAI o3 breakthrough high score on ARC-AGI-Pub. ARC Prize Blog. https://arcprize.org/blog/oai-o3-pub-breakthrough
Cottier, B., Rahman, R., Fattorini, L., Maslej, N., Besiroglu, T., & Owen, D. (2024). The rising costs of training frontier AI models. arXiv preprint arXiv:2405.21015.
Fang, F., Nguyen, T. H., Pickles, R., Lam, W. Y., Clements, G. R., An, B., Singh, A., Schwedock, B. C., Tambe, M., & Lemieux, A. (2017). PAWS — A Deployed Game-Theoretic Application to Combat Poaching. AI Magazine, 38: 23-36. https://doi.org/10.1609/aimag.v38i1.2710
Ghose, S., Tse, Y. F., Rasaee, K., Sebo, J., & Singer, P. (2024). The case for animal-friendly AI. arXiv preprint arXiv:2403.01199.
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.
Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press. https://doi.org/10.1215/9780822373780
Hellrigel-Holderbaum, M., & Dung, L. (2025). Misalignment or misuse? The AGI alignment tradeoff. arXiv preprint arXiv:2506.03755.
Houterman, A. (2024). Sport in an Algorithmic Age: Michel Serres on Bodily Metamorphosis. Sport, Ethics and Philosophy, 18(2), 126-141. https://doi.org/10.1080/17511321.2023.2190155
Jegham, N., Abdelatti, M., Elmoubarki, L., & Hendawi, A. (2025). How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference. arXiv preprint arXiv:2505.09598.
Kahn-John, M., & Koithan, M. (2015). Living in health, harmony, and beauty: The Diné (Navajo) Hózhó wellness philosophy. Global Advances in Health Medicine, 4(3), 24-30.
Karpatne, A., Jia, X., & Kumar, V. (2024). Knowledge-guided Machine Learning: Current Trends and Future Prospects. ArXiv, abs/2403.15989.
Kawharu, M. (2000). Kaitiakitanga: A Maori anthropological perspective of the Maori socio-environmental ethic of resource management. The Journal of the Polynesian Society, 109(4), 349-370.
Khajeh Naeeni, S., & Nouhi, N. (2023). The Environmental Impacts of AI and Digital Technologies. AI and Tech in Behavioral and Social Sciences, 1(4), 11-18. https://doi.org/10.61838/kman.aitech.1.4.3
Korecki, M. (2024). Biospheric AI. arXiv preprint arXiv:2401.17805.
Kulveit, J., Douglas, R., Ammann, N., Turan, D., Krueger, D., & Duvenaud, D. (2025). Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development. arXiv preprint arXiv:2501.16946.
McClure, C. J. W., Rolek, B. W., Dunn, L., McCabe, J. D., Martinson, L., & Katzner, T. E. (2022). Confirmation that eagle fatalities can be reduced by automated curtailment of wind turbines. Ecological Solutions and Evidence, 3(3), e12173. https://doi.org/10.1002/2688-8319.12173
Novikov, A., Vũ, N., Eisenberger, M., Dupont, E., Huang, P.-S., Wagner, A. Z., Shirobokov, S., Kozlovskii, B., Ruiz, F. J. R., Mehrabian, A., Kumar, M. P., See, A., Chaudhuri, S., Holland, G., Davies, A., Nowozin, S., Kohli, P., & Balog, M. (2025). AlphaEvolve: A coding agent for scientific and algorithmic discovery [White paper]. Google DeepMind. https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
OpenAI. (2025, April 16). OpenAI o3 and o4-mini system card. https://openai.com/index/o3-o4-mini-system-card/
Owe, A., & Baum, S. D. (2021). Moral consideration of nonhumans in the ethics of artificial intelligence. AI and Ethics, 1(4), 517-528. https://doi.org/10.1007/s43681-021-00065-0
Savardi, M., Signoroni, A., Benini, S., Vaccher, F., Alberti, M., Ciolli, P., … & Farina, D. (2025). Upskilling or deskilling? measurable role of an ai-supported training for radiology residents: a lesson from the pandemic. Insights Into Imaging, 16(1). https://doi.org/10.1186/s13244-024-01893-4
Schmid, S., Lambach, D., Diehl, C., & Reuter, C. (2025, January 28). Arms race or innovation race? Geopolitical AI development. Geopolitics. Advance online publication. https://doi.org/10.1080/14650045.2025.2456019
Serres, M. (1982). The parasite (L. R. Schehr, Trans.). Johns Hopkins University Press. (Original work published 1980)
Serres, M. (1995). The natural contract (E. MacArthur & W. Paulson, Trans.). University of Michigan Press. (Original work published 1992)
Simons, M. (2024). His Master’s Voice: Michel Serres and The Ethics of Noise. Parrhesia, 40, 92-124. https://parrhesiajournal.org/wp-content/uploads/2025/04/6.-Simons-His-Masters-Voice.pdf
Stengers, I. (2015). In catastrophic times: Resisting the coming barbarism (A. Goffey, Trans.). Open Humanities Press. https://doi.org/10.14619/016 (Original work published 2009)
Szymanski, E., Smith, R., & Calvert, J. (2021). Responsible research and innovation meets multispecies studies: Why RRI needs to be a more-than-human exercise. Journal of Responsible Innovation, 8(2), 261–266. https://doi.org/10.1080/23299460.2021.1906040
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1). https://doi.org/10.1038/s41467-019-14108-y
Watkin, C. (2024). How Serres Thinks. Parrhesia, 40, 8-25. https://parrhesiajournal.org/wp-content/uploads/2025/04/2.-Watkin-How-Serres-thinks.pdf
Webb, D. (2024). De-escalation, Reserve and Invention: Michel Serres on the Natural Contract and Law. Parrhesia: A Journal of Critical Philosophy, 40, 151-170.
Yigitcanlar, T. (2021). Greening the Artificial Intelligence for a Sustainable Planet: An Editorial Commentary. Sustainability, 13(24), 13508. https://doi.org/10.3390/su132413508