In its fight against cyber-interference and information operations, such as those carried out by Yevgeny Prigozhin's conglomerate during the 2016 US elections, or more recently in Africa, especially in the Central African Republic, Facebook relies on the concept of "coordinated inauthentic behavior". This problematic concept reflects Facebook's business goals:

Behaviors: by selecting behaviors as the detection criterion, Facebook avoids having to judge the content of messages. Mark Zuckerberg has indeed often stated, and rightly so, that he does not want to be the arbiter of users' speech. Financially, arbitrating the posts of more than two billion monthly users would also be a costly undertaking. The root of the problem lies there: how can cyber-interference operations be characterised without judging content? Which leads to this ethical question: may private companies judge what citizens say?

Inauthentic: the wording used by Facebook is ambiguous and could encompass both identity theft and pseudonymity. Inauthenticity as a criterion originates from the real-name policy imposed by Facebook, despite the dangers such a policy can pose to users. This criterion allows Facebook to preserve the commercial exploitation of users' personal data, which is the heart of its business model.

Coordinated: Facebook uses this criterion to characterise information operations. The risk is that perfectly legitimate citizen movements may fall into this category whenever the citizens involved must preserve their anonymity.

Foreign: Facebook uses this additional criterion to refine the detection of information operations. We can assume that Facebook relies on users' locations, for example via IP addresses or smartphone geolocation: in that case, there is a risk that members of diasporas will be classified as "foreigners". Moreover, assigning a nationality to a discussion is problematic: what is the nationality of a conversation between a French citizen and a Central African citizen discussing relations between France and the Central African Republic? Where is the "foreign" behavior in that case?

In France, where the Mila case, in which a high-school student received death threats for expressing her views on social networks, has revived the urgency of a right to pseudonymity, Marlène Schiappa, the French Minister in charge of Citizenship, declared: "I believe it is important that everyone can speak under a pseudonym on social networks, to say difficult things, or because we are bound by a duty of discretion in our work." As a preventive measure, the real-name policies currently enforced by social networks should be prohibited by law, and the most vulnerable populations, in particular minors, should be encouraged to use pseudonyms.

Facebook's real-name policy raises an ethical question on a global scale: is it acceptable for a private US company to check the identities of more than two billion people around the world every month, including those of police officers, and to dictate the terms under which they may exercise their freedom of speech?

Pseudonymity is also necessary because of the proven risk of unwanted spillover between the civic sphere and the professional sphere. The actions of an organised citizen movement online then fall squarely within Facebook's definition of coordinated inauthentic behavior, a definition which ipso facto threatens democratic debate. For example, the 84 inauthentic accounts described as "related" to the French army and taken down in December could have been part of an action organised by citizens defending their ideas and fighting fake news. Quantitatively, 84 accounts acting together under pseudonyms is at least an order of magnitude below what French cyber-activists routinely organised fifteen years ago. While these accounts were taken down for inauthenticity, Facebook itself acknowledges that some of them were fighting fake news: it turns out, then, that Facebook does not fight fake news so much as it defends its business model by fighting pseudonymity, even at the cost of taking down anti-fake-news accounts.

In addition, although Facebook, Stanford and Graphika quietly acknowledged their inability to attribute these 84 accounts to the French military, the joint report bluntly describes the French actors as "French Assets", and Facebook's communication as a whole strongly suggests that the French military is responsible for them. Through this ambiguous communication, Facebook contributes to public disinformation, particularly in the Central African Republic, and therefore to the destabilisation of a country struggling to emerge from civil war.