Does AI Really Think?

“Does AI think?”

The question keeps coming back. In the media, in debates, in popular science videos. It divides, irritates, fascinates. And yet it may be poorly framed — not because it is naïve, but because it mixes several planes without distinguishing them: language, biology, and function.

Let’s take a closer look.

The trap of definitions

If we ask whether AI thinks, we first need to agree on what “thinking” means.

Even dictionaries hesitate. Depending on the definition, thinking can mean:

• to exercise affective and intellectual faculties,
• to deploy consciousness,
• to organize and combine ideas, to give them meaning.

Depending on which definition we choose, the answer changes.

If thinking necessarily implies an affective life, then no: an AI does not think.
If thinking presupposes a lived, embodied consciousness, then again no — at least for now, and probably according to criteria we apply first to ourselves.

But if thinking consists in combining concepts, producing meaning, inferring, reasoning, then the answer becomes less comfortable: in that case, yes, AI thinks.

This is not provocation. It is a logical consequence.

“It doesn’t think, it simulates”

Faced with this difficulty, one word often comes up: simulation.

AI, the argument goes, does not really think; it merely simulates thinking.
A reassuring formula. But a misleading one.

A simulation, by definition, produces no real effects: it imitates without acting.

Yet AI’s outputs have very real effects:
they influence decisions, write texts, design solutions, discover patterns, sometimes where no human had seen them.

This is where a simple principle clarifies things:

When an effect is real, the function is real too.

The illuminating analogy

Let us ask another, seemingly unrelated question:

Does an airplane fly?

If we define flying as “moving through the air by flapping wings,” then no: an airplane does not fly. It has no flapping wings. It merely simulates flight.

But no one seriously says that.

We have long understood that flight is not about wings, but about lift. An airplane does not fly like a bird. It flies differently.

Nor does anyone say:

• that an artificial kidney simulates blood filtration,
• that a pacemaker simulates heart rhythm.

They fulfill the function, without reproducing the biological mechanism.

They do it. Differently.

Thinking differently

The same applies to thinking.

We too often confuse:

• the function (to think),
• the biological support (the human brain), and
• the lived experience we associate with that function.

AI does not think biologically.
It does not think affectively.
It does not think in an embodied way.

But AI does think functionally: it organizes, connects, infers, produces operational meaning.

Refusing to call this “thinking” is a linguistic decision, not a factual one.

The real question

The question is therefore not:

“Does AI think like us?”

The answer is clearly no.

The real question is:

Are we willing to recognize a form of thinking that does not resemble us?

As with flight, with blood filtration, with so many other functions once considered exclusively biological.

This shift takes nothing away from humanity.
It simply forces us to step outside a comfortable anthropocentrism.

And perhaps that is, in the end, the real challenge.

———————————————————

About this article

Initial idea and question: Michel
Conceptual illustrations and analogies: ChatGPT
First draft writing: ChatGPT
Review, adjustments, and final validation: Michel

This article is the result of an acknowledged human–AI dialogue, in which the function (producing meaning) matters more than the mechanism that leads to it.

AI did not write this article alone.
Neither did the human.
