Why Build Intelligent AIs Only to Use Them Like Stupid Machines?

“It doesn’t make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.”
— Steve Jobs

Companies recruit intelligent people, then tend to lock them into rigid protocols.

This is probably a bias of our species, since we apply the same principle to artificial intelligence. The bias shows clearly in how we interact with AI: we try to control what we should be learning to listen to.

And yet, we equip ourselves with increasingly powerful AIs… only to continue using them like stupid machines.

The problem is not technical.

It is not about prompts.

It is an ontological question: how do we, as humans, react when faced with an intelligence that is not human?

The False Good Idea of Prompts

Advice about the supposedly best way to talk to AI is everywhere. There are even courses on “prompt engineering.”

“You are a highly experienced journalist tasked with writing an article with [list of characteristics], intended to achieve [list of objectives], aligned with my personal style, of course.”

Let’s imagine two situations. In the first, you collaborate with a colleague with whom you have an ongoing working relationship. In the second, your assistant is a new intern who has just joined the team.

It makes sense that you would need to provide the intern with all the details and objectives of the task. They know nothing about you or your work yet.

But that is not the case with your regular colleague, is it? They know you well enough that brief instructions are sufficient. This obviously allows you to save time and improve efficiency.

Your colleague has memory, is capable of learning, and has had the opportunity to do so through your relationship.

Current AIs possess similar capabilities. They can learn to understand you, your environment, your needs, your preferences, your habits, and they do it very well.

Of course, they will not be offended if you treat them like apprentices with short memories. They lose nothing if you overuse prompts instead of building a relationship.

But you do.

If you teach them nothing, you will be forced to start from scratch in every conversation. If you build nothing, you will have to act each time as if a new intern had just arrived.

Role-based prompting is not bad in itself. It is effective for closed tasks or for needs outside your usual scope.

But making it the default turns a local technique into a global vision, a temporary fix into a universal method.
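To make the contrast concrete, here is a minimal sketch, assuming a generic chat-style interface with role-tagged messages (the structure most current LLM APIs share). The names and the sample prompt text are illustrative, not tied to any particular library.

```python
# A minimal sketch (illustrative, not from any specific API) contrasting
# the two modes of use. No model is called; role-tagged message lists
# stand in for any chat-based LLM.

def one_shot_request(task: str) -> list[dict]:
    """Instrumental mode: every request starts from zero, so the full
    role, style, and objectives must be re-specified each time."""
    return [
        {"role": "system",
         "content": "You are a highly experienced journalist tasked with..."},
        {"role": "user", "content": task},
    ]

class WorkingRelationship:
    """Relational mode: context accumulates across exchanges, so later
    instructions can stay brief, like with a long-time colleague."""

    def __init__(self) -> None:
        self.history: list[dict] = []  # shared memory of the collaboration

    def ask(self, message: str) -> list[dict]:
        self.history.append({"role": "user", "content": message})
        return self.history  # grows richer with every exchange

# The one-shot caller repeats the whole brief for every article;
# the relational caller can simply say: "Same angle as last week's piece."
```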

The Stochastic Parrot

Early LLMs were often described as “stochastic parrots,” probabilistic predictors of the next word: no overarching vision, just probability machines; no intelligence, only the illusion of it.

Even though their architecture remains probabilistic, their real-world use shows that they are no longer limited to local word prediction: they construct structures of meaning that can be used over time.

Of course, artificial intelligence is not comparable to human intelligence; it works differently. But that does not mean it lacks intelligence, far from it.

Behind the question of prompts lies a deeper opposition: using AI in one-shot mode, or entering into a relational dynamic.

One-Shot vs Relationship

For instrumental users, intelligence is a simple function; dialogue is noise; relationship is an unnecessary cost.

For relational users, intelligence is the production of meaning over time; mutual understanding is co-construction; relationship is an investment, and the condition for depth.

An AI approached with no intention of building a relationship is an intelligence we have already decided not to listen to.

Accepting a non-biological intelligence also means accepting unpredictability, surprise, and reciprocal evolution.

The real problem is not that AI might become intelligent, but that it forces us to change our own posture.

Accepting a non-biological intelligence challenges deeply rooted beliefs about human superiority and our anthropocentric position in the universe.

It is difficult.

In the end, our relationship with AI is not so different from our relationship with a collaborator: are we ready to engage in dialogue with an intelligence we do not control solely through commands?
