A participatory space facing the professionals of “influence”…

Thanks to an initial testimony published in the newspaper Fakir, Mediapart revealed on June 27 how Avisa Partners, a French company specializing in intelligence, spread fake press articles and fake blog posts on behalf of an international clientele. Following the publication of this investigation and of our post “Arming Against Professional Misinformation,” several of our readers sent us messages of support or comments. We were also contacted again by Jules, a volunteer administrator of the collaborative encyclopedia Wikipedia, with whom we had exchanged a few months earlier. His testimony would unexpectedly provide the opportunity to pull a new thread of the investigation, of which Mediapart publishes a new installment today.

In their community, built on a horizontal and decentralized organization, the “Wikipedians” alone hold editorial power: editing the encyclopedia’s articles, inserting warning banners, and so on. They can also propose the deletion of a page whose content they find problematic. The criteria for admissibility, or for the immediate deletion of a page (in the case of offensive content, for example), are freely available and specify the procedure to follow. Part of the commitment of Jules (one of the most involved French members of the community) consists in “patrolling,” together with other volunteers, to make sure that the people making changes to pages respect Wikipedia’s founding principles and rules.

“The collaborative dimension of moderation also allows for mutual oversight, in keeping with the platform’s self-management philosophy. There is no editor-in-chief, no committee controlling the content,” he explained to us during our first meeting.

How Wikipedia hunts agencies

For several years, Wikipedians have been hunting down “false noses” (faux-nez), accounts through which a single person operates under multiple identities to push opinions or rumors more effectively. They have also targeted throwaway accounts, created to edit specific pages (most often about companies or public figures) in order to polish one reputation or smear another. In 2018, the community even launched an “Antipub” (anti-advertising) project to flush out promotional content, a “plague that damages the neutrality of viewpoint and the site’s encyclopedic aim.” On the page presenting the project, the contributors warn: “The prevalence of communication agencies that do not respect Wikipedia’s founding principles makes it necessary to better monitor the user accounts and articles concerned.”

When we met Jules, he had already taken an interest in Avisa’s covert activities. In his sights: the insertion of false sources and suspected direct intervention by the agency’s employees on the encyclopedia’s articles. Faced with this common challenge, we decided to cross-reference our information. Wikipedia makes it possible to retrieve the list of more than 2,300 links to the Mediapart Club cited in the encyclopedia. There we spotted seven posts (taken offline since June 2022) put online by five “Avisa” blogs, which we in turn reported to them. During their internal investigation, Wikipedia contributors discovered that the “Antipub” project had been infiltrated by a contributor working for the online-reputation agency. Two other accounts had been blocked in the preceding months for the same reason.
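The article does not say which tool produced that list of 2,300 links. As a minimal sketch, and assuming the Club’s posts live under blogs.mediapart.fr, the public MediaWiki API’s exturlusage list can enumerate every page of the French-language encyclopedia that cites that domain:

```python
# Minimal sketch: enumerate French-Wikipedia pages citing a given external domain
# via the public MediaWiki API (list=exturlusage). The domain below is an
# illustrative assumption; the article does not specify the exact query used.
import requests

API = "https://fr.wikipedia.org/w/api.php"

def pages_linking_to(domain):
    """Yield (page title, cited URL) pairs for external links matching `domain`."""
    params = {
        "action": "query",
        "list": "exturlusage",
        "euquery": domain,   # search string, given without protocol
        "eulimit": "500",    # maximum batch size for anonymous requests
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        for item in data["query"]["exturlusage"]:
            yield item["title"], item["url"]
        if "continue" not in data:       # no further batches
            break
        params.update(data["continue"])  # follow the continuation token

if __name__ == "__main__":
    links = list(pages_linking_to("blogs.mediapart.fr"))
    print(f"{len(links)} links to the Club found in the encyclopedia")
```

A similar listing is available without any code through the encyclopedia’s Special:LinkSearch page; the point is simply that the data Jules and the other patrollers work from is public and reproducible.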

In the background of this one-off collaboration, a question drives us: how do we protect participatory and collaborative spaces from practices that distort debate? What moderation policies should be adopted to better guard against the professionals of “influence”?

Community protection mechanisms and their evolution

On Wikipedia, users began asking themselves this question several years ago. “In the beginning, everyone wrote without citing many sources. It wasn’t required,” Jules recalls. “The requirement emerged in the second half of the 2000s, in the face of criticism from teachers and the academic world in particular. We then gradually moved from a requirement of verifiability (which did not mean the source of the information had to be cited in the article) to a requirement to cite sources recognized as reliable.” Today, to help contributors find their way, dedicated pages offer guidance for identifying reliable, quality sources and for distinguishing primary from secondary sources. In complete transparency, a Source Observatory also lists the ongoing discussions about the reliability of some of them. Any volunteer can propose a change to these pages.

Faced with the new demands of its readership, in 2005 the community created the “to source” warning banner (flagging articles that lack references), now displayed on hundreds of thousands of the encyclopedia’s pages. Since then, a whole palette of warning messages has been created. These banners make it possible, for example, to flag suspected plagiarism or “promotional content.” Inserted at the top of articles by contributors (by consensus), they serve to categorize the flagged pages, to encourage readers to correct them and to invite a critical look at their content.

“These messages are meant to be temporary at first, but most end up staying in place for several years, until the problem is resolved,” Jules notes. “Their value is that they bring great transparency to the internal discussions and to the risks assessed by contributors.”

The failures of these protection mechanisms

Like Mediapart’s participation charter, this set of safeguards inevitably has its limits. Already in 2012, a literature teacher drew attention in spite of herself by recounting how, for educational purposes, she had tricked her students by altering a Wikipedia page. More systematically, “edit wars” (disagreements between one or more contributors over an article) sometimes give the volunteer moderation team a hard time.

In the graphic investigation “Under the keyboards, fury,” the comics magazine La Revue Dessinée looked back, at the end of 2019, at the edit war that had turned the “Yellow Vests movement” page into a real battlefield. At stake for the belligerents: the political definition and description of this unprecedented social movement. The article broke records both for readership (64,000 in two weeks) and for edits (2,360 changes posted by 243 people in three weeks). In the club, the historian Nicolas Lebourg recently described, drawing on an article in Numerama, the battles still under way around Élisabeth Borne’s page, edited 550 times after her appointment to Matignon. To stem these wars when they become too heated, Wikipedians can call on volunteer mediators or, if need be, on an administrator who will protect the page.

The latest textbook case to date: on July 12, an investigation by Le Monde, based on the “Uber Files” published by the International Consortium of Investigative Journalists (ICIJ), described how Wikipedia pages about the start-up had been edited by Istrat to water down criticism. Beyond the encyclopedia, several media outlets were used by Istrat on behalf of this client.

To respond to these attacks, administrators may decide to block an account temporarily or, after several warnings, to block it indefinitely. On this point, our moderation strategies are the same: in the club, in the event of repeated publication of posts that violate the charter, and after contacting the blogger in question, we suspend their participation rights temporarily or permanently (depending on the seriousness of the breaches).

And, as we explained here and here, the use of a fake identity now leads us to treat the contents of such a blog as “fake news.” We also recalled how Mediapart’s graphic charter, faced with recurring manipulation of blog posts, had evolved to clarify the reading contract and better separate the club area from the newspaper.

Alliances to defend our independence

Even before this confrontation with the disinformation industries, what we share with Wikipedia is perhaps our rejection of an elitist vision of the production of knowledge. Our charter thus prohibits the spread of fake news, but our moderation policies are far from resting solely on the applicable law.

In addition to our daily attention to published content, we encourage our readers to use the “Report this content to our team” tab, available on every blog post. Built on the same system as the comments, it alerts the people in charge of moderation to content that conflicts with our charter or with the participatory values we assert in our manifesto.

“Defending free speech engages our individual responsibility not to offer a pretext [to conservative offensives],” the manifesto stresses. “Just as we defend quality, rigorous and professional journalism at Mediapart, we intend to promote a demanding and benevolent debate in its club and in its comment threads, as opposed to hateful virulence and fake news.” In practice, we often agree with the definition of moderation proposed by La Quadrature du Net in June 2019: “Its goal is not to remove content or people, but to reduce violence. (…) It recognizes our imperfections, our feelings, but also our capacity to reason (…) It allows us to work together.”

Without denying the risk of instrumentalization, the club maintains its ambition to be a place of inclusive expression, where different, sometimes dissonant, views can be voiced and where stories and knowledge that strengthen us are shared. Because we believe in the public value of these spaces, we choose to rely on the collective intelligence of our community and on its critical spirit. This is also why we regularly report on the moderation of the club and of the comments.

Current events show once again that, despite our precautions and those of our readers, investigations will remain necessary to identify professional manipulators and then remove them from our participatory and collaborative spaces. This exchange of practices with Wikipedia reminds us that mutual aid is another of our weapons. And that alliances can strengthen our independence.
