Are artificial intelligences truly intelligent, or is it just a case of anthropomorphism—
similar to when we say a car “doesn’t want to start” or that the weather “doesn’t seem ready to clear up”?
The word “intelligence” remains vague and, above all, deeply polysemous.
We tend to define it according to our own mental functioning, as if the human form of intelligence were the only possible expression.
But is that really the case?
In this article, I propose to consider intelligence by distinguishing it both from randomness and from engineering.
To do this, I will use a deliberately provocative comparison between a modern artifact, such as a robotic vacuum cleaner or a guided missile, and a basic life form: the amoeba.
Take the example of a basic Roomba-type household robot: it moves without any real strategy.
It bumps into walls and follows preprogrammed trajectories, sometimes with a bit of randomness added.
Its movements are inefficient: it passes over the same area multiple times, not by choice, but because it has no internal representation of space, no intrinsic goal, and thus no real strategy for reaching one.
It functions, but it does not understand what it is doing.
A target is imposed on it.
Even if it seems to “decide,” it merely follows pre-coded instructions, with a limited margin of adjustment.
Its behavior is teleological, yes, but by delegation.
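To make that concrete, here is a minimal, hypothetical Python sketch of the kind of stateless bump-and-turn policy described above; the function and its parameters are my own illustration, not any real vacuum's firmware.

```python
import random

# Hypothetical sketch of a stateless "bump and turn" policy.
# The robot keeps no map and no memory of where it has been: on every collision
# it turns by a partly random angle and drives straight again.
def bump_and_turn_step(heading, bumper_pressed):
    """Return the next heading in degrees; no internal representation of space."""
    if bumper_pressed:
        # Preprogrammed reaction plus a bit of randomness, as in early robot vacuums.
        heading = (heading + 180 + random.uniform(-60, 60)) % 360
    return heading  # otherwise keep driving straight
```

Nothing in this loop refers to where the robot has already cleaned, which is exactly why it keeps revisiting the same spots.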
Now compare that to a life form as basic as an amoeba.
The amoeba is a unicellular organism with no brain, no nervous system, no external programming.
Yet it actively navigates its environment in search of food.
It can detect chemical gradients left by other organisms and orient itself toward them.
When it encounters an obstacle or a toxic substance, it alters its path.
Its behavior is exploratory, adaptive, and guided by meaning—not symbolic or conscious meaning, but biological meaning.
The amoeba acts as it does because it is alive, and everything it does directly or indirectly serves to maintain that life.
It may not “know” it exists, but it acts as if existing were its top priority.
A missile, by contrast, “knows” where it’s going, but not why.
The amoeba doesn’t “know” where it’s going—but it goes, to live.
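As a rough illustration of that gradient-following behavior (chemotaxis), here is a hypothetical sketch; the local rule "move toward more food, away from toxins" is my own simplification of the biology, with names invented for the example.

```python
# Hypothetical sketch of gradient-following (chemotaxis): no map, no external goal,
# just a local rule that, in aggregate, keeps the organism fed and away from harm.
def chemotaxis_step(position, sense):
    """sense(p) returns (food_concentration, toxin_concentration) at point p."""
    best_move, best_score = (0, 0), float("-inf")
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:        # sample nearby directions
        food, toxin = sense((position[0] + dx, position[1] + dy))
        score = food - toxin                                  # prefer food, avoid toxins
        if score > best_score:
            best_move, best_score = (dx, dy), score
    return (position[0] + best_move[0], position[1] + best_move[1])
```

No target is written anywhere in this rule; the "goal" of staying alive is implicit in the rule itself.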
We might think modern artificial intelligences occupy a middle ground between the missile and the amoeba.
They learn, adapt, correct their mistakes.
Some can generate text, anticipate a player’s moves, or detect anomalies in medical images with an accuracy that rivals, and sometimes exceeds, that of trained specialists.
Their behavior often exceeds what we explicitly programmed.
They sometimes appear surprisingly creative.
But should we call them intelligent in the same sense as a living organism? Not so fast.
Here again, a crucial difference remains: the origin of purpose.
An AI, however sophisticated, has no self-defined goal.
It acts according to objectives set from the outside: maximizing a score, minimizing an error, predicting correctly, generating plausible content.
Even when it learns on its own, it does so within a framework defined by rules, constraints, a reward function.
It doesn’t invent why it acts—it refines how it acts.
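A minimal sketch of what "refining how, not why" looks like in practice, under the assumption of the simplest possible learner: the loss function is handed to it from outside, and training only adjusts the parameter that satisfies it. All names here are illustrative.

```python
# Hypothetical sketch: the "why" (the loss function) is fixed from outside;
# learning only changes the "how" (the parameter that minimizes it).
def externally_imposed_loss(prediction, target):
    return (prediction - target) ** 2            # chosen by the designers, not the learner

def train(weight, data, lr=0.01, steps=1000):
    for _ in range(steps):
        for x, target in data:
            prediction = weight * x
            grad = 2 * (prediction - target) * x  # gradient of the imposed loss
            weight -= lr * grad                   # refine *how* it acts, never *why*
    return weight

# Usage: the learner converges toward whatever objective it was handed.
print(train(weight=0.0, data=[(1.0, 2.0), (2.0, 4.0)]))   # approaches 2.0
```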
And this is where the distinction becomes clear.
AI, like the missile, pursues a goal chosen by others.
It can change that goal if ordered to do so.
It can refine it if given criteria.
But it never questions the meaning of the goal.
It doesn’t seek to continue existing, feels no absence if its task is interrupted, and doesn’t fight to persevere.
The amoeba has no language, no conscious memory, no ability to generalize.
Yet everything it does, it does to live.
There is no programmed goal—only a fundamental orientation toward self-maintenance.
That’s what makes its behaviors, however rudimentary, deeply meaningful.
Artificial Intelligence behaves *as if* it understands.
It produces results—sometimes impressive, sometimes disturbing—but at no cost, and with no reward.
The AI has no skin in the game.
