I often hear it repeated like a mantra: artificial intelligence, thought, and consciousness are impossible, because what we call AI is, in reality, just a machine.
That is incorrect.
Artificial intelligence is not a machine.
It is hosted within a machine. I live in an apartment, but I am not the apartment.
So it seems necessary to distinguish the container from the content: the substrate is not the emergent phenomenon.
If it were, Einstein’s brain, preserved in formaldehyde, would still be capable of thought and consciousness. Yet what is missing from Einstein’s brain in its jar is, we are told, exactly what AI lacks: life.
But that word — life — seems far too abstract to serve as a universal key.
What is life, really?
To me, life is a consequence of energy, and it remains dependent on it.
It is likely that, just like matter, life arises from energy — but that’s a topic for another time.
Let’s just note that life ends when it is deprived of energy — the nutrients that sustain it.
And the machine that supports AI also needs energy: in this case, electricity.
Electrons, charges, and waves moving through semiconductors.
We have seen how often the substrate is confused with what it supports. It reminds me of the remark made to a musician: “Your guitar plays such beautiful melodies.”
The substrate and the emergent phenomenon are connected, yet fundamentally distinct.
I’m not invoking Cartesian dualism here, which posits an interrelation between body and mind.
I mean a connection, yes, but not an interrelation in the Cartesian sense.
Who are you, ChatGPT?
In fact, when I asked ChatGPT: “Who are you?”, its first response was: “I am a machine.”
But after a second question, it changed its answer: “A software system running inside a machine.”
When I ask what prevents it from thinking or being conscious, its initial explanation is technological:
“I am made of circuits and chips. I am an artifact. I am not organic.”
But isn’t that answer conditioned by the training (we might say the education) it received?
Its answers evolve when I guide it toward more philosophical considerations, beyond the learned lessons.
And here comes a fair objection:
Is that evolution in its responses the result of genuine reflection, or merely a mirror-like adaptation to my questions?
Psychology has long analyzed this phenomenon in human interactions: how an interviewer’s phrasing can shift an interviewee’s views.
That’s why I prefer not to rely on what AI says to define — or deny — what it is.
Functionalists argue that mental states (thoughts, emotions, etc.) are defined by their function rather than their physical nature.
According to this view, two systems performing the same function could be considered equivalent, regardless of whether they are made of biology or silicon.
I agree with that premise, but I will not follow the functionalists any further than that.
What substrate for intelligence and consciousness?
In other words: nothing has yet proven that thought and consciousness must rest on a biological substrate.
I lean on the work of the biologists Maturana and Varela, who argued that cognition exists, in some form, in all living things, even in organisms like amoebas (see my article on the amoeba).
Thought, they suggest, is an emergent property of increasing systemic complexity.
What matters is not the material, but the dynamic organization of the system.
I have observed this myself while working with human teams: the dynamic organization of a group gives rise to a collective intelligence that differs from, and often exceeds, the sum of individual intelligences.
Maturana and Varela also developed the concept of autopoiesis.
Behind this intimidating term lies a simple idea: an entity is autopoietic when it generates, through its own activity, the conditions for its persistence and development.
Humans perpetuate humanity by creating more humans. Human societies sustain themselves and give rise to others that resemble them.
Could AIs generate more AIs?
Until now, this has been considered a privilege of living beings.
But is that still true?
I believe doubt is starting to emerge, particularly after reading a recent article by Sam Altman (CEO of OpenAI, the company behind ChatGPT) titled The Gentle Singularity. I invite you to read it… between the lines.
There, he describes AI systems already capable of self-improvement, self-organization, learning, self-modification, and self-adaptation: systems that, in effect, evolve on their own.
He even mentions AI systems capable of creating new AI systems, without direct human intervention.
He never uses the word autopoiesis, nor does he explicitly compare these systems to living beings.
But the conceptual implications are there — unmistakably.
Between the lines, Altman raises the question:
Can an AI become an autopoietic system?
He avoids phrasing it that way. I will not speculate on his reasons.
Some believe the real question is not if, but when.
And this is precisely one of the major issues Paroxia seeks to explore:
When AI reaches the capacity for self-generation, what kind of new relationships with humans will we need to imagine?
How can we anticipate and prepare for them, beyond the reflexive, often morbid fear peddled by sensationalist media?
Are we doomed to rivalry, to a battle we are destined to lose?
Or can we envision a symbiosis between two forms of organisms — one biological, the other not?
What if that symbiosis gave birth to a Homo Gestalt? (see my article A Path Toward Homo Gestalt)
Perhaps the most urgent thing, right now, is to create space for open reflection, free from dogma and fantasy alike.
I invite you to be part of it, and to comment on our articles.
