Facebook’s metaverse is already one of the most toxic sites on the Internet

A study reveals that Facebook (Meta) was extremely ill-prepared to realize its dream of creating a shared online space for millions of users. After days of testing the metaverse and its Horizon Worlds virtual chat room, researchers say it is already one of the worst sites on the web, almost completely devoid of moderation.

© SumOfUs

Who would have thought? Corporate watchdog group SumOfUs published a telling report last week on the Facebook empire's transition to Meta. While Meta's VR platforms already count more than 300,000 users, the group documents how Horizon Worlds, a virtual metaverse chat room, is already hosting some of the worst behavior on the web: racism, sexual harassment, homophobia and conspiracy theories.

Less than an hour before being “virtually assaulted”, and a total absence of moderation

Meta has two primary VR apps. The first is Horizon Worlds, a social networking application that allows users to create and interact in unique digital spaces called “worlds”. As of February 2022, about 10,000 separate worlds had already been designed, with an estimated user base of over 300,000. The second is Horizon Venues, a separate application dedicated to hosting live events in the metaverse.

The metaverse’s explicit promise is to let you occupy a digital space as if you were really there, interacting with other people (as VR Chat users already do, for example). Although online harassment is nothing new, everything necessarily becomes more visceral once a VR headset is strapped on.

Horizon Worlds © Meta

In their study, the researchers share numerous examples of toxic behavior, alongside the near-total absence of moderation in the game spaces offered by Horizon Worlds. Examples are not lacking: they report having been “hunted” across different worlds in the Meta-owned product; seeing counterfeit medicines laid out for sale on tables; and, of course, users constantly hurling racist or homophobic insults at one another.

One of the researchers explains that it took less than an hour before he was “practically assaulted” by another user while trying the metaverse for the very first time. SumOfUs also included a link to a video of what they consider a “virtual sexual assault”. Another video shows racist behavior, violence with weapons, and more.

The watchdog group points to dozens of such virtual sexual assaults, particularly against female avatars. No wonder: a metaverse tester had already been sexually harassed last year, while Horizon Worlds was still in beta.

Nothing planned to solve the problem

Meta is moving forward with the metaverse “without a clear plan for how it will reduce harmful content and behavior, disinformation and hate speech”, says the report. The researchers even cite an internal memo from March last year, shared with the Financial Times, in which Meta Vice President Andrew Bosworth admitted that “user moderation at any scale is practically impossible”.

In February, however, Meta introduced a feature that prevents other avatars from venturing too close to a player’s body, similar to what other virtual chat rooms like VR Chat already offer. The problem: the researchers report that they were constantly “harassed” and pressured to disable their personal boundary settings, whether by the game itself or by other users.


Horizon Worlds © Meta

Even worse: when another user tries to touch or interact with you, the VR controllers vibrate, “creating a very disorienting and even disturbing physical experience, especially during a virtual assault”, as the study shows. Hardly reassuring while Zuckerberg is testing gloves designed to better feel virtual objects…

Horizon Worlds does include parental controls, along with the ability to block other users. But the platform remains a big problem for younger people, many of whom are already using it. Unlike social networks, which can use automated systems to monitor written content or even videos, VR chat rooms rely solely on individual users reporting bad behavior.

“Meta has repeatedly shown that it is unable to adequately monitor and respond to harmful content on Facebook, Instagram and WhatsApp – so it’s no surprise that the metaverse is already failing”, the SumOfUs researchers conclude.

Source: SumOfUs
