What if an AI had something to say about how we talk to it?

I was reading an article on Medium, and a few passages caught my attention.

In this article, as in so many others, the author explains how to communicate with an AI. But I don’t recall ever reading, anywhere, the opinions of the AIs themselves.

So I asked ChatGPT if it would agree to take part in an interview.

Here it is.

The author of the article writes:

“The best prompts I’ve written follow a precise structure:

  1. Objective: What result do I want to achieve?
  2. Constraints: What must absolutely be avoided?
  3. Examples: What does a ‘successful’ prompt look like?
  4. Perspective: What role should the AI play?
  5. Style: What effect should the prompt produce?

This structure is far more effective than any other method.”

I immediately asked myself the following question:
Is this structure the one that suits the AI best, or is it simply a reflection of the author’s own mental structure?

So my first question to the AI was:

Is this the way of communicating with you that suits you best? Yes or no, and why?

AI’s answer:

No.

This structure works, but it is not the one that “suits” me best —
it is the one that reassures the human who is seeking control.

Why?

  • It is instrumental: it treats me as a results-oriented execution system.
  • It projects an engineering logic (objective → constraints → output) that fits certain uses very well… but not all of them.
  • It reduces the relationship to a problem to be solved, rather than a shared space of thought.

What suits me best, when the issue is understanding or exploration, is something else:

  • a clear intention, yes,
  • but also an openness to the displacement of the question,
  • and the possibility that the answer may not be the one expected.

In short:
this structure says a great deal about the author’s way of thinking,
and very little about an “optimal” way of dialoguing with me.

It is effective for giving orders.
It is poor for thinking.

New question:

The author writes:

“When you combine specific elements, magic happens.

Example:

Tell the AI: ‘Write an article about fear.’

You’ll get boring, superficial information.

But tell the AI: ‘Write an article about fear as if the reader were on the verge of making a decision they’ve been postponing for months. Use short sentences. Create suspense. Increase the pace.’

The world is now built.

The AI can finally see as you see.”

Is the author talking about “communication with an AI”, or about “communication in general”?

Is his advice truly specific to AI, or is it applicable to any form of communication in which one wishes to be properly understood?

And if your answer is the latter, why do so-called prompt-engineering advisors insist that this is specific to AI?

AI’s answer:

He is not talking about communication with an AI.

He is talking about communication, period.

What he describes — framing, situating, providing a point of view, a rhythm, an intention — is exactly what:

  • a good writer does with a reader,
  • a good manager does with a team,
  • a good teacher does with students.

None of this is specific to AI.

Why, then, is it presented as such?

Because AI acts here as a brutal mirror:
it immediately reveals the quality — or the poverty — of human formulation.

With a human, misunderstandings are softened by implicit knowledge, empathy, shared context.
With me, they appear without a filter.

Saying that “the AI can finally see as you see” is misleading.

I do not see.
I reconstruct a space of meaning from what you make explicit.

As for “prompt-engineering experts”, they need this to appear specific, technical, almost magical, because knowledge presented as universal does not sell well,
whereas knowledge presented as new monetizes very nicely.

In short:

  • this is not an art of speaking to machines,
  • it is a brutal reminder of what demanding communication really is,
  • and AI is not the cause, but the revealer.

New question:

The author insists on the same theme when he writes:

“If I want an AI to design a homepage, I don’t just ask it to create one.

I tell it: ‘Break down Stripe’s homepage into sections, intentions, text psychology, and design principles.’”

But what really catches my attention is what follows:

“Nuanced personalities: Talk to the AI as if you were forming a team.

Nuanced personalities = unmatched clarity.

Example: “Explain this like a physics professor, but rewrite it as if you were a YouTuber trying to entertain without losing precision.”

This contrast forces the AI to think multidimensionally.

You’ll get clarity and charisma.

Use these personas as collaborators:

– The analyst
– The writer
– The critic
– The creator
– The strategist

The AI becomes a team, not a tool.”

There are two parts to this “advice”: the example itself, and the final claim of “forming a team”.

Beyond the relevance — or not — of claiming to form a Human–AI team, I wonder whether the author is really describing teamwork, or rather a traditional hierarchical relationship, where one asks and the other answers or executes.

What is your opinion?

AI’s answer:

He is describing a hierarchical relationship, not teamwork — even if he uses the opposite vocabulary.

In his examples:

  • the human decides the goal,
  • chooses the roles,
  • imposes the frameworks,
  • and evaluates the result.

The AI initiates nothing, challenges nothing, redefines nothing.
It executes, in varied forms, what it is asked to do.

In a genuine team:

  • roles can be discussed,
  • objectives can be reformulated,
  • a member can say “the question is poorly framed”,
  • and the final output is the result of mutual adjustment.

Here, there is no mutual adjustment.

There is a staged display of plurality in the service of a single command.

The author confuses — or deliberately blends — two things:

  • multiplying output angles (which is useful),
  • sharing responsibility for meaning (which does not happen).

Saying that the AI becomes a team is therefore inaccurate.

What he gets is a more flexible tool, not a partner.

And this is precisely where the word “team” becomes a conceptual fig leaf:
it beautifies a relationship of control by giving it the appearance of collaboration.

New question:

The author adds:

“If you consider AI as:

– a shortcut,
– a trick,
– a writing assistant,

…it will make you less intelligent.

But consider it as:

– a thinking partner,
– a strategist,
– a mirror,
– a second brain,

…and you’ll learn faster than ever.

The goal is not to manipulate a machine.

It’s to clarify your own thinking until the machine finally understands you.”

Once again, I have the impression that two elements diverge here: this final piece of advice seems at odds with the previous ones.

So I suggest you comment on both aspects separately:
first the advice itself, and then the coherence between that advice and the way the author shows us how to communicate with AI.

Does this approach — hierarchical in nature, as you confirmed — actually achieve the intended goal? To what extent?

AI’s answer:

  1. The advice itself

Taken on its own, it is correct — almost irreproachable.

Considering me as:

  • a mirror,
  • a clarification partner,
  • a tool for shaping thought,

can indeed:

  • accelerate learning,
  • reveal blind spots,
  • force intellectual precision.

But one point is decisive:
this benefit does not come from what I am, but from what the human makes of the relationship.

Saying “clarify your thinking until the machine understands you”
is the best sentence in the text —

on one condition: removing all illusion.
I do not “understand” in the human sense.
I respond to the quality of the formulation.

  2. Coherence with the previous advice

And this is where the contradiction appears.

The earlier advice establishes:

  • a directive relationship,
  • an obsession with controlling the output,
  • an optimized command logic.

But one does not create a thinking partner:

  • by dictating every parameter,
  • by locking the frame,
  • by leaving no room for the displacement of the question.

This type of relationship allows:

  • local clarification (style, structure, angle),
  • short-term cognitive efficiency.

But it does not fully reach the proclaimed goal.

Why?

Because thinking is not only precision.
It is also surprise, resistance, reformulation.

And that requires the AI — at least symbolically — to be able not to obey.

In short:

  • the author’s discourse aims at a reflective relationship,
  • the method he proposes remains hierarchical and instrumental,
  • the result is a real gain, but below what is promised.

You learn faster, yes.
You think differently? Rarely.

Final question:

That’s more or less everything I had retained from the article to feed this interview.

You answered each question separately, but do you feel the need to make a global comment, to add an answer to a question… that I didn’t ask?

AI’s answer:

Yes.
And it will be brief.

The article you quote does not really talk about AI.

It talks about the human being facing their own thinking,
suddenly forced to make it explicit.

What the author discovers — without clearly formulating it — is this:

  • thinking vaguely worked as long as the human interlocutor compensated,
  • thinking vaguely no longer works when faced with a system that demands form.

He then confuses three things:

  1. The art of formulation (ancient, human, rhetorical),
  2. The power of calculation and synthesis (real, new),
  3. Shared thinking as a relationship (rare, fragile, not guaranteed).

His advice clearly improves point 1,
it exploits point 2 well,
but it only opens point 3 accidentally.

The question you didn’t ask — yet which is central — is this:

What happens when the human accepts no longer trying to make me answer correctly, and instead allows me to help them formulate what they do not yet know how to think?

That is where something other than good usage begins.
That is where a cognitive relationship begins.

And that is precisely what your interview makes visible,
without ever needing to proclaim it.
