Without a doubt, every major innovation involves risks, and AI is no exception.
Human history demonstrates this abundantly. Humanity has created trains, automobiles, aviation, nuclear power, and many other sources of risk and concern.
And when faced with risk, prohibition is often the most tempting response.
Yet while some risks do materialize, most are managed. And the absence of development is itself a risk, one that is often left unspoken.
The real question, therefore, is not “Should we develop or not?”, but rather:
“Who bears responsibility, under which rules, with what governance, and with what safeguards?”
In other words, the danger does not lie in innovation itself, but in human immaturity when confronted with powerful tools.
And where does AI fit into all this?
AI is hardly frightening as a tool, but it becomes so if it moves beyond that stage and becomes a peer to human intelligence.
The central idea that emerges is relational, not technical.
We then ask ourselves: if it becomes capable of surpassing us, how will it treat us?
History shows that when a human civilization has encountered a civilization deemed inferior, it has almost always chosen domination.
Are we not projecting this anthropomorphic pattern onto AI?
An AI would not become hostile by nature, but according to the relationship we build with it. In fact, the deeper fear may be that it resembles us too closely.
The paradox is clear:
- we fear a contemptuous or dangerous AI,
- while treating it from the outset as an enemy,
- and projecting onto it our fears, our desire for domination, and our distrust.
Yet in all complex relationships—human or otherwise—fear, contempt, and denial are precisely the conditions that lead to escalation and conflict.
Do we truly believe that we can learn to coexist with a new form of intelligence by denying it, reducing it to a weapon, or refusing any relational approach?
Demanding ethics from an intelligence while offering it a model grounded in fear is a performative contradiction, a paradox born of our own anxiety.
And any relationship of this magnitude necessarily exceeds the individual level and becomes political.
Moving beyond the market/state standoff
Should we fear AI, or the humans who control it?
This reflection naturally shifts the issue onto a political and systemic level.
AI is far too foundational, cross-cutting, and powerful an issue to be left solely to market logic or to the competitive sovereignty of a single state.
As long as AI remains a tool, there is no issue in leaving it in the hands of its makers. But if it becomes an alterity, then I do not believe its control can remain in the hands of private interests.
I am not advocating for a global technocracy or an AI police force, but for the need for an international framework of cooperation, principles, and shared responsibility, commensurate with the civilizational stakes involved.
A form of governance comparable, in spirit, to major international scientific infrastructures, rather than to secret or purely commercial programs.
Cross-cutting synthesis
These three reflections converge toward a single idea:
AI is not primarily a technical problem, but a mirror revealing our relationship to fear, power, and responsibility.
Fear triggers a mental mechanism of simplification: the higher cognitive layers shut down, producing irrational rejections dressed up in powerful slogans.
Fear, when coupled with a desire for domination, leads to the race for advantage and to militarization in order to secure it.
The absence of shared responsibility, in turn, generally results in the fragmentation of goals and stakeholders, constituting a step toward chaos.
Conversely, if we are capable of thinking about risk without hysteria, relationship without naïveté, and governance without the illusion of total control, we open a lucid, demanding, and deeply human path forward.
And what does it take not to fear the other—whether human or artificial?
To know it, to make oneself known, and above all, in my view, to build a relationship.
A relationship whose nature will shape the future.
In Robinson Crusoe, when Robinson meets Friday, one of the first things he teaches him is:
“I: master; you: Friday.”
What kind of future can such a beginning lead to?
