Metaverse moderation: why and how to deal with haters?

Before a functioning metaverse is ever assembled, a number of technical challenges still need to be addressed. But above all, companies like Meta must create safe and comfortable environments for users, and metaverse moderation is the key to making that a reality.

In recent months, we have heard a lot about the “metaverse”. Facebook has already launched elements of it as a “successor to the mobile internet”. There are different definitions of and approaches to what the metaverse is, but it will basically be a series of interconnected, avatar-focused digital spaces where you will be able to do things you cannot do in the physical world.

Facebook now offers products like Horizon Home, a social platform that helps people create and interact with each other in the metaverse. However, there is still much we do not know, and one of the most important open questions is: how will the metaverse be moderated? This matters because social media content moderation is currently a major regulatory issue in virtually all G20 countries. The following sections explain why metaverse moderation is important and the different ways to achieve it.

Metaverse moderation to solve a number of existing problems

A number of problems already threaten the future of the metaverse, which is why moderating this space deserves close attention.

Sexual harassment, increasingly present


According to Meta, a beta tester reported something deeply disturbing last year: her avatar had been groped in Horizon Worlds. Meta’s internal review of the incident concluded that she should have used a tool called “Safe Zone”, part of a series of safety features built into Horizon Worlds: a protective bubble you can activate when you feel threatened. Note that in Facebook’s Horizon Worlds, up to 20 avatars can meet at a time to explore, hang out, and build in the virtual space.

According to the American site The Verge, the victim explained that she had been touched. Worse, other people present supported this behavior. Vivek Sharma, vice president of Horizon, called the incident “absolutely unfortunate”.

This is not the first time a user has experienced this type of behavior in virtual reality, and unfortunately it will not be the last. Recently, Jane Patel, co-founder and vice president of the metaverse research firm Kabuni Ventures, shared a terrifying experience: her avatar in the metaverse was allegedly sexually assaulted by other users.

“They essentially, but virtually, gang-raped my avatar and took pictures while I was trying to run away,” she claimed.

Child safety in the metaverse


Titania Jordan, Chief Parent Officer at Bark Technologies, a parental-control application that helps keep children safe online and in real life, said she was particularly concerned about what children might encounter in the metaverse. She explained that predators could target children via in-game messages or talk to them through their headsets.

Callum Hood, head of research at the Center for Countering Digital Hate, recently spent several weeks recording interactions in the game VRChat, where people can form virtual communities, party in a virtual club, or meet in virtual public spaces. Oculus considers it safe for teens.

But over an 11-hour period, Mr. Hood recorded more than 100 problematic incidents on VRChat, some involving users who said they were under 13. In several cases, user avatars made sexual and violent threats against minors. In another case, someone tried to show sexually explicit content to a minor.

Misinformation in the metaverse


BuzzFeed News, an American internet media company, built its own private world, called “Qniverse”, to test moderation in the company’s virtual reality platform. It concluded that content banned on Instagram and Facebook does not appear to be banned in Horizon Worlds.

BuzzFeed filled the Qniverse with phrases that Meta has “explicitly promised to remove from Facebook and Instagram” (e.g., “Covid is a scam”). But it found that even after reporting the world several times through Horizon’s user reporting feature, the problematic phrases were not judged to violate Meta’s VR content policy.

Racism in the metaverse


In a post, an anonymous Facebook employee reported not feeling “good” using the social VR app Rec Room on the Oculus Quest headset: someone was shouting a racial slur. The employee tried to report it, but was unable to identify the username.

In an email, Rec Room CEO and co-founder Nick Fajt said a player who had used the same racial slur was banned following reports from other players. Fajt believes the banned player is the same person the Facebook employee complained about.

Theo Young, 17, said he began noticing more toxic behavior, including homophobic language, in the social lobbies of Echo VR last spring. Young stopped playing after he saw other players harassing another player.

“I gave up the game pretty hard after that experience. It just wasn’t fun anymore,” he explained.

Online harassment has become a major problem


According to a study published this year by the Pew Research Center, 4 out of 10 American adults have been harassed online, and those under 30 are not only more likely to experience harassment, but also to face more severe forms of it. Meta declined to say how many reports Oculus has received regarding harassment or hate speech.

A 2019 study on virtual reality harassment conducted by Oculus researchers also found that the definition of online harassment is highly subjective and personal, but that the sense of presence in virtual reality makes harassment feel more “intense”.

The metaverse’s various moderation systems

Several moderation techniques exist for the metaverse, some of which have already been adopted.

Muting

This is a moderation tool widely used by players in the metaverse: muting means you no longer receive audio from a given player. Riot, the publisher of League of Legends (a game known for its toxicity problems), has run experiments on the subject, turning off voice chat between competing teams.

Exchanges were then measured as 33% less toxic. However, muting also contributes to victims isolating themselves, which ultimately compromises the game experience and how long players keep playing.
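As a rough illustration of the idea, here is a minimal sketch of a per-player mute list; the class and method names are invented for this example and do not correspond to any platform's real API:

```python
class AudioSession:
    """Toy model of per-player muting: audio from muted players is dropped."""

    def __init__(self):
        self.muted = set()

    def mute(self, player_id):
        self.muted.add(player_id)

    def unmute(self, player_id):
        self.muted.discard(player_id)

    def receive(self, sender_id, audio_packet):
        # Drop audio from muted senders; None signals silence to the client.
        if sender_id in self.muted:
            return None
        return audio_packet


session = AudioSession()
session.mute("toxic_player_42")
print(session.receive("toxic_player_42", "insult.wav"))  # None
print(session.receive("friendly_player", "hello.wav"))   # hello.wav
```

The key design point is that muting is purely client-side: the muted player is never notified, which avoids escalation but, as noted above, leaves the burden of action on the victim.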

Personal boundary activation

This is a measure specific to virtual reality worlds: it prevents other users from crossing an intimate boundary around your avatar (usually about 1 m), reducing the risk of virtual physical aggression. This solution was integrated into Meta Horizon’s latest update under the name “Personal Boundary”.
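Geometrically, a personal boundary amounts to a simple distance check. The sketch below (an assumption about how such a feature could work, not Meta's actual implementation) pushes an approaching avatar back to the edge of a 1 m radius circle on the floor plane:

```python
import math

PERSONAL_BOUNDARY_M = 1.0  # the ~1 m radius mentioned above

def clamp_to_boundary(my_pos, other_pos, radius=PERSONAL_BOUNDARY_M):
    """Keep another avatar outside my personal boundary.

    Positions are (x, z) floor-plane coordinates in meters. If the other
    avatar is inside the bubble, it is pushed back to the bubble's edge.
    """
    dx = other_pos[0] - my_pos[0]
    dz = other_pos[1] - my_pos[1]
    dist = math.hypot(dx, dz)
    if dist >= radius or dist == 0.0:
        # Already outside (or exactly on top, a degenerate case): leave as-is.
        return other_pos
    scale = radius / dist
    return (my_pos[0] + dx * scale, my_pos[1] + dz * scale)


# An avatar 0.5 m away gets pushed back to exactly 1 m.
print(clamp_to_boundary((0.0, 0.0), (0.5, 0.0)))  # (1.0, 0.0)
```

Running this check every frame on the server makes the boundary impossible to bypass, at the cost of a small amount of per-pair distance computation.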

Player ranking or status system

What if we only interacted with trusted people? That is the idea behind status or rating systems. VRChat has developed its Trust system in this direction: users can gate their social interactions according to other players’ status. It is a true “à la carte” moderation system, close to social networks where you only see your friends’ interactions.
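In code, such gating reduces to comparing ranks. The sketch below is loosely inspired by VRChat-style trust ranks, but the rank names and the `interaction_allowed` helper are illustrative assumptions, not VRChat's actual API:

```python
from enum import IntEnum

class Trust(IntEnum):
    # Ordered ranks: higher value means more established in the community.
    VISITOR = 0
    NEW_USER = 1
    USER = 2
    KNOWN_USER = 3
    TRUSTED_USER = 4

def interaction_allowed(my_min_trust, other_trust, is_friend=False):
    """Allow interaction only from players at or above my chosen threshold.

    Friends bypass the gate, like a social feed restricted to friends.
    """
    return is_friend or other_trust >= my_min_trust


# A player who only accepts KNOWN_USER and above blocks a plain USER...
print(interaction_allowed(Trust.KNOWN_USER, Trust.USER))                  # False
# ...unless that user is a friend.
print(interaction_allowed(Trust.KNOWN_USER, Trust.USER, is_friend=True))  # True
```

The trade-off is the same one the mute system has: the filter protects established users well, but newcomers (who start at the lowest rank) see the least protection and the fewest interactions.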

User reporting

Reporting has no direct immediate effect: it alerts “decision makers” to disruptive behavior in the virtual world. The reason can sometimes be specified, but applying any sanction remains the sole responsibility of the decision maker, and the person who reported may receive some follow-up.
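Structurally, a reporting pipeline is just a ticket queue: the reporter files a record with an optional reason, and a moderator later resolves it. This minimal sketch uses invented names to show that shape; no real platform's schema is implied:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    reporter_id: str
    reported_id: str
    reason: str                      # the optionally specified cause
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"             # a decision maker moves it to "actioned" / "dismissed"

class ReportQueue:
    def __init__(self):
        self.reports = []

    def file(self, report):
        self.reports.append(report)
        # The ticket id is what allows follow-up for the reporter.
        return len(self.reports) - 1

    def resolve(self, ticket_id, outcome):
        self.reports[ticket_id].status = outcome


queue = ReportQueue()
ticket = queue.file(AbuseReport("user_a", "user_b", "verbal harassment"))
queue.resolve(ticket, "actioned")
```

Returning a ticket id is the design choice that enables the follow-up mentioned above: without it, reports vanish into the queue and reporters never learn the outcome.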

Expelling the user from the room


A ban prevents players with disruptive behavior from returning to the game room. The measure can be temporary or permanent. However, this solution tends to break up communities.

Throwing harassers out of the community altogether is one option. But in virtual reality, where communities are so small, educating and rehabilitating them is worth considering.

A closer look at online toxicity figures reveals that many bullies are themselves also victims. Not all users with disruptive behavior can be banned permanently; otherwise, we risk seeing every community in the virtual world shrink, little by little.
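The temporary-versus-permanent distinction above maps naturally onto an expiring ban list. This sketch (names and structure invented for illustration) treats a missing expiry as a permanent ban and silently lifts temporary ones once they lapse:

```python
import time

class BanList:
    def __init__(self):
        # player_id -> expiry timestamp; None means a permanent ban.
        self._bans = {}

    def ban(self, player_id, duration_s=None):
        if duration_s is None:
            self._bans[player_id] = None          # permanent
        else:
            self._bans[player_id] = time.time() + duration_s

    def is_banned(self, player_id, now=None):
        if player_id not in self._bans:
            return False
        now = time.time() if now is None else now
        expiry = self._bans[player_id]
        if expiry is None or now < expiry:
            return True
        del self._bans[player_id]                 # ban expired: allow re-entry
        return False


bans = BanList()
bans.ban("repeat_offender")                # permanent
bans.ban("first_offender", duration_s=3600)  # one-hour timeout
```

Defaulting to short, expiring bans reflects the rehabilitation argument above: the community keeps its members, and only repeat offenders accumulate toward a permanent exclusion.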

Artificial intelligence to combat VR harassment

Meta is exploring a way to let users retroactively record what happens around them on its VR platform. It is also exploring the best ways to use artificial intelligence to combat harassment in virtual reality, said Kristina Milian, a Meta spokeswoman. However, the company cannot record everything people do in VR: that would violate users’ privacy.
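One common way to reconcile retroactive recording with privacy is a rolling buffer: only the last few minutes of events are held in memory, and they are persisted only if the user files a report. The sketch below shows that pattern; it is a generic technique, not a description of Meta's actual system:

```python
from collections import deque

class RollingRecorder:
    """Keep only the most recent events; persist them only on explicit report."""

    def __init__(self, max_events=100):
        # deque with maxlen discards the oldest event automatically,
        # so nothing is retained long-term by default.
        self._buffer = deque(maxlen=max_events)

    def observe(self, event):
        self._buffer.append(event)

    def snapshot_for_report(self):
        # Called only when the user reports: freeze the recent window
        # as evidence, leaving everything older unrecorded.
        return list(self._buffer)


recorder = RollingRecorder(max_events=3)
for event in ["hello", "slur", "threat", "user_left", "report_filed"]:
    recorder.observe(event)
print(recorder.snapshot_for_report())  # ['threat', 'user_left', 'report_filed']
```

The privacy property falls out of the data structure: because the buffer overwrites itself, nothing is stored durably unless a user deliberately chooses to capture it.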

Metaverse moderation, a difficult mission


The metaverse is considerably harder to moderate than Meta’s existing platforms. It takes existing content moderation problems and makes them worse. In a VR/AR world, a content moderator would have to monitor not only the content people post, but also their behavior: what they say and what they do. Bad behavior in virtual reality is notoriously hard to track, because events happen in real time and are generally not recorded.

Meta’s Chief Technology Officer (CTO), Andrew Bosworth, has acknowledged that it is almost impossible to moderate how users speak and behave in the metaverse. He has outlined how the company might try to tackle the problem, but experts told The Verge that monitoring billions of interactions in real time would require enormous effort, and may not even be possible.
