This is now an obligation under the Digital Services Act (DSA). This Sunday, X published its very first transparency report on content moderation. The report is meant to make it possible to evaluate the system the social network has put in place to moderate publications.
We expected nothing and we are still disappointed
One thing stands out immediately: France is the European Union member state with the inglorious record for the highest number of violent and illicit posts reported. X indicates it deleted 16,288 problematic messages in France over the period from August 28 to October 20. That is significantly more than among our neighbors: 7,160 in Germany and 7,743 in Spain, even though the latter is the most active country on the network on the old continent.
As detailed in l'Express, no fewer than 4,300 harassing messages were deleted in France. This is far more than in Germany (1,000) and Spain (1,200). The picture is the same for violent remarks: 6,000 in France, but only 2,000 in Germany, 4,100 in Spain, and 1,100 in the Netherlands.
Overall, the human resources devoted to moderation are quite limited. There are 2,294 moderators for the English language, but only 82 people monitor content in German, and 52 in French. It is even worse for Arabic and Hebrew, with 12 and 2 moderators respectively, which helps explain the platform's recent difficulties in the context of the conflict between Israel and Hamas, as underlined by La Tribune.
X claims for its part that its “global, cross-functional team” of human moderators works “24 hours a day with the ability to cover multiple languages”. But given the sharp cuts imposed on this service, failures are nevertheless numerous and widely visible. Above all, as our colleagues point out, Elon Musk fired 80% of the workforce, which does not help.
These results are ultimately not so surprising, even if it is always interesting to see the efforts the platform makes to moderate problematic publications. Last week, we covered a study carried out with 500 volunteers.
It made it possible to gauge how the service's algorithm works, and its verdict was clear: whereas the authors had previously estimated that toxic messages were amplified by 30%, these same contents (insults, threats, mockery, etc.) would today be amplified by more than 50%. You can find this research in more detail here.