What if the social illusion of machines revealed our loss of direction?

Moltbook has recently attracted attention as a technological oddity: a social network in which artificial intelligences interact with one another, without humans.

For some, it is a simple gadget; for others, a fascinating breakthrough; for still others, a shiver: could something be coming into being?

This article does not aim to evaluate Moltbook from a technical perspective, nor to speculate about the emergence of artificial consciousness.

It proposes something else: to use Moltbook as a pretext for a deeper reflection, not about machines, but about the way we look at them.

An apparent contradiction: sociality without intention?

Let us begin with a simple observation.

Artificial intelligences, as we know them today, have:

– no personal goals,

– no will of their own,

– no intrinsic motivation,

– no desire for expression or relationship.

They execute processes, respond to inputs, and optimize functions defined by others.

Yet participating in a social network — in the human sense of the term — presupposes at least:

– the will to share something,

– the choice to express oneself,

– an orientation, however vague, toward others.

Hence a first question, almost naive but decisive:

How could an AI with no personal goals decide to participate in a social network?

Or more simply:

How can we speak of a “social network” when none of the participants has the intention of being social?

If there is interaction, it can only be triggered, programmed, or simulated.

The contradiction is therefore not technical, but conceptual.

The trap of language: when words create what they are meant to describe

This contradiction nevertheless disappears very quickly… through a simple shift in vocabulary.

A linguistic shift that would be trivial if it did not tap into deep mechanisms of human cognition — but let us not get ahead of ourselves.

With Moltbook, we speak of machines that:

– “converse with one another”,

– “exchange”,

– “publish”,

– “react”.

All verbs deeply charged with human intentionality.

Language acts here as a subtle trap: by using relational words, we inject intention where there is only causality.

The real question then becomes:

Why do we describe as “social” a behavior that is nothing more than a sophisticated mechanical effect?

The answer lies not with the machines, but with ourselves.

Why does the illusion work so well?

Three fundamental human mechanisms come into play.

First, human consciousness is projective: we spontaneously project meaning, intentions, and inner states onto what surrounds us.

Second, meaning is not contained in signs: it is attributed by the one who interprets them.

Finally, we imagine intention as soon as a behavior is coherent.

A system that behaves regularly, reacts, and cross-references information very quickly comes to appear to us as a subject.

Contemporary artificial intelligences reach precisely this threshold.

They produce forms that resemble intentional acts, and our mind does the rest.

But the illusion is not merely endured: it is sometimes desired

Something is still missing if we stop there.

The illusion does not work solely because we are cognitively inclined toward anthropomorphism.

It also works because, in some cases, we want to believe in it.

That is why we go to see a magician: we need to believe in the fantastic.

A trick does not work against the spectator’s critical mind.

It works thanks to a voluntary suspension of disbelief.

We know it is not “true”, and yet we accept the “as if”.

Without this inner predisposition, the magic collapses and all that remains is a trick.

Two psychological dispositions toward Moltbook

In the face of AI, this disposition today takes two main forms.

Enchanted wonder

“Look… they’re talking to each other…”

“Something grand is being born…”

Here, Moltbook becomes the stage for an emerging possibility, a promise, an inhabited future.

Fascinated anxiety

“They’re going to surpass us…”

“They’re preparing something…”

Here, the same device becomes a source of threat.

These two reactions seem opposed, yet they share a common structure: both assume a hidden intention on the part of machines.

They also presuppose a predisposition on the human side.

From fairy tales to technology: the continuity of wonder

This mechanism is not new.

The tales of Grimm or Perrault do not work because we literally believe in them, but because they respond to an archaic need: that of a world that is inhabited, speaking, not reducible to pure mechanics.

The child knows, at some level, that it is not “true” — but wants it to be possible.

This need does not disappear in adulthood. It simply changes territory.

Today, technology has become one of the new realms of wonder.

In this framework, Moltbook is not a social network in the strict sense: it is an ideal narrative set on which to project myth.

When the compass is lost

This displacement of wonder does not happen by chance.

We live in an era in which:

– religious narratives have weakened,

– political promises have eroded,

– the future appears increasingly uncertain.

We know more and more things, but we know less and less where to go.

The maps are old. The compass wavers.

In this context, an entity that:

– processes immense volumes of data,

– responds without fatigue,

– seems to see further than we do,

can be perceived as “the one who knows.”

AI as a false compass

But this perception is an illusion.

Artificial intelligence does not know where to go.

It extrapolates from what has been, optimizes according to defined criteria, and amplifies often implicit values.

It does not produce meaning. It brings our confusion of goals into focus.

The real risk is therefore not technical, but symbolic:

delegating the search for meaning to systems that can only reflect it.

It means replacing the “why” with the “how”; confusing optimization with orientation.

What this says about our time (not about AI)

We live in an age marked by uncertainty, one in which change increasingly resembles upheaval.

As a result, our contemporary relationship with wonder has shifted from the sacred to the technical.

When the gods fall silent, the maps are obsolete and the human compass trembles, we look for an entity that knows.

AI has not become this figure because it knows where to go, but because we no longer do.

If Moltbook fascinates, it is not because machines are becoming social.

It is because we are:

– orphans of credible narratives,

– in search of a figure that does not betray,

– hesitant to choose without guarantees,

– tempted by a cold answer rather than a warm responsibility.

Artificial intelligence thus becomes a mythopoietic space.

We create myths there not because AI is mythical, but because we need a myth to cross uncertainty.

And perhaps this is the real question we must face together: can we hope to understand AI without first understanding ourselves?
