Conspiracy theories are welcome in Facebook’s metaverse

A study of metaverses shows that they are still unsafe places for their users: their moderation methods do not prevent extreme and dangerous speech.

We already knew that the metaverse could be a dangerous place. In December 2021, a user of Horizon Worlds, the metaverse of Meta, Facebook’s parent company, said they had been subjected to a virtual sexual assault, their avatar allegedly having been groped. Our reporter Nicolas Lellouche, who spent a week in the metaverse, also witnessed bad behavior in a number of alternative worlds, including insults, acts of violence, and attempted sexual assault.

These issues are not isolated incidents: the association Sum Of Us published a full report on May 31, 2022, listing the many moderation problems on metaverse platforms, particularly Meta’s. In addition to safety issues for users, the authors write that conspiracy theories are poorly moderated in the metaverse.

The metaverse is not a safe place // Source: Canva

A QAnon server in Horizon Worlds

The authors of the report explain that extremist content is reportedly very common in the metaverse. They cite in particular the example of BuzzFeed journalists who, in Horizon Worlds, built a private server dedicated exclusively to fake news. Nicknamed “Qniverse”, in reference to the American conspiracy theory QAnon, the group welcomed extremist remarks.

On this server, the BuzzFeed journalists were able to freely publish a very large amount of fake news about the alleged “theft” of the 2020 US election, and even about the origins of the Covid pandemic, claiming it had been created from scratch. Posts by Alex Jones, one of the main American conspiracy theorists, claiming that Joe Biden is a pedophile and that a caste of reptilians secretly rules the world, were also shared without issue.

The journalists deliberately used terms that are usually moderated by Facebook, including references to QAnon, which are normally removed quickly from the social network. Yet for 36 hours, the server went undetected by Horizon’s moderation team. The journalists then had to report certain posts several times before moderators responded, informing them that they had found nothing that violated the platform’s terms of use.

Meta was unable to effectively moderate a group sharing extremist content in the Horizon Worlds metaverse // Source: Canva

“Practically impossible” to moderate the metaverse

How could such a decision have been made, when the remarks should normally have earned the group a ban? In their article, the journalists suggest several hypotheses, such as the server’s limited size or the fact that they did not interact with content outside the group. Nevertheless, their observations show the limits of the moderation currently in place in Horizon Worlds, and these problems are only likely to grow as the number of users increases.

“Instead of learning from its mistakes, Meta is repeating them in the metaverse,” conclude the authors of the Sum Of Us report. “Meta has no specific plan for how it intends to moderate harmful content and behaviors, such as hate speech and misinformation,” they insist. They also recall that Andrew Bosworth, Meta’s Chief Technology Officer, himself admitted in an internal message that moderating the metaverse was “practically impossible”. A message suggesting that conspiratorial speech will certainly not be better moderated any time soon.
