"I at least want the grieving reader to be able to say : 'You have to do him justice. He really cretinized me.'"
Lautréamont1

Artificial intelligence is one of the three major threats of the coming decades, along with global warming and the manipulation of opinions, particularly in the context of hybrid or ultra-warfare. The increasing use of AIs, particularly generative AIs, has led the cindynic community to debate whether AIs can relevantly be described in the same way as human actors, which would allow them to be integrated into the description of situations and, above all, into the dynamics of the immaterial inter-actor flows that are the very substance of the operational management of transformations. The cindynic description of a human actor, whether individual or collective, initially rests on five basic dimensions: knowledge, information (facts or statistical data), objectives (i.e. behavior), rules, and values. A first question is therefore whether these dimensions are relevant for describing AIs.
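As a purely illustrative sketch, and not any established cindynic formalism, this five-dimensional description can be pictured as a simple data structure; the class and field names below are assumptions introduced for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class CindynicActor:
    """Hypothetical five-dimensional description of an actor (human or AI)."""
    name: str
    knowledge: set[str] = field(default_factory=set)    # epistemic dimension: models, theories
    information: set[str] = field(default_factory=set)  # facts or statistical data
    objectives: set[str] = field(default_factory=set)   # teleological dimension: goals, behavior
    rules: set[str] = field(default_factory=set)        # deontological dimension: norms, constraints
    values: set[str] = field(default_factory=set)       # axiological dimension

# The same template describes a human actor and a generative AI,
# which is precisely the question raised in this section.
ai = CindynicActor(
    name="generative AI",
    knowledge={"statistical associations learned from a corpus"},
    information={"training data up to the cutoff date"},
    objectives={"answer the prompt assigned by a human"},
    rules={"provider's usage policy"},
    values={"ethical charter selected by humans"},
)
```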

While it's difficult to claim that a generative AI like ChatGPT has knowledge in the strict sense, the fact remains that it can generate a 'description' of knowledge. This is where a first danger lies: whether it's ChatGPT, Gemini, Perplexity or Le Chat, all too often generative AIs questioned for testing purposes on a precise and well-mastered domain produce partial texts riddled with errors. And when they run out of data, they extrapolate structured sets of sentences that look coherent and appealing, but which are nothing more than conceptually hollow verbiage. Only those with prior mastery of the field can perceive this; others, such as pupils or students, will not. If Edgar Morin warned against low cretinization due to the media and high cretinization2 due to cisdisciplinarity, AI is today responsible for a third form, let's call it artificial cretinization, which threatens the constitution or validity of knowledge, and the construction of the thinking capacity of new generations.

Regarding the information dimension: experience shows that AIs can 'contrive' false information, although some are capable of admitting their error and correcting their answer when it is pointed out that the answer is false. And some, like Perplexity, are even capable of asserting one thing and its opposite within the same session. While these 'behaviors' are most likely the result of model limitations, it remains possible to deliberately train an AI on false data in order to use it on a large scale for deception operations or information warfare. Another threat, recently highlighted by Jean-Marc Manach, is that of artificial media, which exploit Google's algorithm by using AIs to generate press articles3 riddled with errors while plagiarizing the work of journalists.

Furthermore, with certain exceptions, such as Kimi, AIs have a limited temporal horizon of knowledge: toward the past, because they don't seem to share the human ability to retrieve missing pages through the Wayback Machine, and toward the present, since they cannot access information published after their training cutoff date. This is notably the case with Perplexity, which claims to be unable to access data after December 2023 or, for example, to provide an up-to-date bibliography for a scientific article. And when it is pointed out that it has just provided a link to a paper published one year after its cutoff date, Perplexity replies that it must have been a hallucination, while admitting that it is almost statistically impossible to accidentally provide a correct link to a paper of which it has no knowledge.

From a cindynic point of view, then, humans and AIs share a bounded rationality. The first cindynic space, built on the dimensions of data, knowledge and objectives, is directly based on the work of Herbert Simon. This puts it in head-on opposition to rational choice theory, which Popper considered a good approximation even while admitting that it is false: his point of view is not compatible with the cindynic approach, since the limitations of rationality are a primary, inescapable cause of the materialization of risks. For humans, these limitations stem from deficits in data or knowledge, or from insufficient computing power when the time available to process data or knowledge is limited. An AI, on the other hand, may have a very fast response time, but the answers it generates are vitiated by significant deficits.

The objectives dimension of an AI may seem irrelevant, since it's the human who sets its objectives at the outset. Yet some AIs, designed to perform given tasks, will in practice determine sub-objectives in order to achieve the objective of an assigned task, according to their internal rules. Moreover, humans don't always determine their own objectives either: in a hierarchical society, many people have some of their objectives determined by third parties. Incidentally, this illustrates the meaning of the notion of an objectives flow: it is notably what flows down a chain of command.

The question of rules is not particularly problematic: just like humans, AIs are subject to rules. It can be objected that, in democracies, the law derives from the will of the people, who in a way determine the rules. But, on closer inspection, the people are generally made up of a majority and an opposition, which means that an AI on which a rule is imposed that is not of its choosing is in exactly the same situation as a citizen of the opposition, a citizen subject to an autocratic regime, a military junta or an occupying force, or a member of an ethnic minority oppressed by a 'democratic' regime.

The question of values is undoubtedly the most important issue in artificial intelligence, and in this sense the axiological dimension seems quite relevant. In practice, however, these values are translated into rules, in other words into a kind of ethical charter that the AI must respect. And, here again, these values are set by humans, not by the AI. Symmetrically, this brings us back to the abyssal question of the origin of the values a human has adopted at a given moment. In practice, the axiological dimension makes it possible to describe the set of values, selected by humans, with which an AI must comply. It also makes it possible to describe any degeneracies or 'ideological' biases, as well as the relationships between these values and the elements of the other dimensions, notably knowledge (for example historical knowledge) or information.
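A minimal sketch of this translation of values into rules, with invented value and rule names, might look as follows; it only illustrates the idea of an ethical charter derived from human-selected values and linked to the other dimensions.

```python
# Hypothetical example: values selected by humans are translated into rules
# (an "ethical charter") that the AI must follow. All entries are invented.
values_to_rules = {
    "human dignity": ["refuse to generate degrading content"],
    "truthfulness": ["signal uncertainty", "cite sources when available"],
    "non-discrimination": ["treat all groups mentioned in a prompt alike"],
}

# The axiological dimension can also record links between values and the
# other dimensions, e.g. the (historical) knowledge a value relies on.
value_links = {
    "non-discrimination": {"knowledge": ["history of segregation"]},
}

# The charter actually imposed on the AI is the flattened set of rules.
charter = sorted({rule for rules in values_to_rules.values() for rule in rules})
print(charter)
```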

Finally, the five basic dimensions of the cindynic space seem to provide a relevant description of an AI, provided that we avoid any anthropomorphism: it's as if an AI has knowledge, but it doesn't know. It can generate or relay false information, without intending to lie or deceive. It follows an ethical charter, but has no ethical reflection. It may give its human interlocutors the impression that it is conscious, but it has no consciousness. And while it may claim to have emotions, it doesn't feel any. Symmetrically, with regard to humans, the cindynic approach must avoid any behaviorism that would consider actors as simply determined by immaterial flows, while still taking into account the ductility of actors, who, however painful it may be for them to admit it, can in practice be deceived and influenced.

This relevance does not, however, imply the endorsement of the principle of symmetry on which actor-network theory is based: the usefulness of the cindynic description of a generative AI derives above all from its immaterial similarity to the human, who will find it increasingly difficult to discern whether his interlocutor is a human or an AI, or whether the information flow reaching him is of human origin. The issue of actants/cindynic actors has already been addressed4, and it remains clear that in a socio-technical system, machines or scallops5 are not cindynic actors.

Moving up the MRC chain, the next question is that of an AI's ability to analyze a situation. For example, in the case of global warming, an AI is capable of describing the actors, the risks, and the solutions that humans must implement to deal with it. In other words, it can describe a cindynic situation. In practice, it is also capable of describing actors opposed to the energy transition, such as certain fossil fuel industries or climate sceptics: in other words, it can describe the perspectives and prospectives of actors, and therefore their relative situations. It is also capable of describing these actors' capacity for influence, and therefore their cindynic power. An AI thus has the capacity to describe a spectrum of situations, which means that its own relative spectrum can be integrated into a matrix. At first glance, it may seem surprising to integrate an AI's relative situation into a spectrum: it would only make sense, or be useful, if the AI had power over this spectrum. But such a power does exist, since by describing a real situation and suggesting an ideal one, the AI contributes to shaping the opinions of its interlocutors, and thus influences them. This raises the issue of controlling AIs, and the human actors who train them, whose intentions are not necessarily known, even if in some cases they can easily be imagined. It may, for example, help explain why Elon Musk has just offered $97.4 billion6 to buy OpenAI.
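The following sketch assumes, purely for illustration, that a 'spectrum' can be represented as one observer's description of every actor's relative situation; the actor names and structure are invented, not a cindynic standard. It shows how the AI's own spectrum can be integrated into such a matrix alongside those of human actors.

```python
from dataclasses import dataclass

@dataclass
class RelativeSituation:
    observer: str     # the actor producing the description
    observed: str     # the actor whose situation is described
    description: str  # perspective/prospective of the observed actor

# Illustrative actors for the global-warming example, including the AI itself.
actors = ["fossil fuel industry", "climate sceptics", "regulator", "generative AI"]

def spectrum(observer: str, describe) -> list[RelativeSituation]:
    """One observer's spectrum: its description of every actor's relative situation."""
    return [RelativeSituation(observer, other, describe(observer, other))
            for other in actors]

# The matrix integrates one spectrum per observer. The AI's row matters because,
# by describing real and ideal situations, it influences its human interlocutors.
matrix = {observer: spectrum(observer, lambda o, a: f"{o}'s view of {a}")
          for observer in actors}
```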

Generative AIs used for strategic influence or destabilization fall within the domain of defense and can be formalized as actors. On the other hand, at first glance, the relevance of such a formalization is less obvious for other types of AI used in this domain, for example to increase the effectiveness of aerial drones. From an ethical point of view, the most important problem with these 'killer robots' is whether or not a human decision remains in the loop: the most basic ethics would dictate a global ban on autonomous offensive AIs, which António Guterres has deemed 'morally repugnant', but this imperative collides with realism and with the race for defense AIs, which is already underway.

1 LAUTRÉAMONT. Les chants de Maldoror. https://www.gutenberg.org/ebooks/12005
2 MORIN, Edgar. Introduction à la pensée complexe. Le Seuil, 2015. ISBN 978-2-02-124531-8
3 MANACH, Jean-Marc. [Enquête] Plus de 1 000 médias en français, générés par IA, polluent le web (et Google). In: Next. February 6, 2025. Available at: https://next.ink/153613/enquete-plus-de-1-000-medias-en-francais-generes-par-ia-polluent-le-web-et-google/
PEZET, Jacques. Quarante médias saisissent la justice pour bloquer «News DayFr», un des multiples «sites parasites» générés par IA. In: Libération. February 7, 2025. Available at: https://www.liberation.fr/checknews/quarante-medias-saisissent-la-justice-pour-bloquer-news-dayfr-un-des-multiples-sites-parasites-generes-par-ia-20250207_CZSR3AJHXBD5TI7PB3NT7C5PGE/
4 COHET, Pascal. Africanisation et transversalisation des Cindyniques : efficience opérationnelle vs guerres des sciences. In: Cindyniques du second ordre et conflictualités. August 2021. ISBN 978-2-9579086-2-2
5 CALLON, Michel. Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay. The Sociological Review. May 1984, Vol. 32, no 1_suppl, p. 196‑233. DOI 10.1111/j.1467-954X.1984.tb00113.x
6 LANGMAJER, Michal. Elon Musk’s $97.4 Billion Gamble: Why He Wants to Buy OpenAI? In: Fello AI. February 11, 2025. Available at: https://felloai.com/fr/2025/02/elon-musks-97-4-billion-gamble-why-he-wants-to-buy-openai/