All the moderation challenges in the metaverse

The metaverse has been a hot topic for the past few months, and we wanted to know more about the problems and challenges associated with moderating it. According to Meta, ineffective moderation of its virtual world could kill the project. Moderation is therefore a major challenge for the development of these new spaces.

We interviewed Hervé Rigault, CEO of Netino by Webhelp, to share his vision on this topic. He is notably responsible for Netino’s new offering, which provides brands with moderation and community management strategies on web3. A fascinating exchange at the border between technological, political and societal debate.

It’s hard to grasp what moderation in the metaverse means. Can you explain to us what it consists of?

In general, it is the fight against reprehensible behavior in the spaces that metaverse platforms create. Unlike traditional social networks, the metaverse aims to create a hyper-immersive experience that engages as many senses as possible (although it remains a digital experience).

Overall, we need to be able to anticipate and prevent all toxic and “deviant” behavior between users of these spaces. I use the term “deviant” with caution, because it also means defining a norm and which behaviors deviate from it.

What are the moderation difficulties in these virtual spaces?

Interactions happen live, which is a major constraint to manage. To handle real time, platforms need to give users self-protection features, since it is obviously impossible to monitor every single person. Several moderation features already exist for platform users: muting to silence another user, safety bubbles to prevent other users from entering one’s personal space, the ability to alert the community, etc.
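As a rough illustration, here is a minimal sketch of how such per-user self-protection features could be modeled client-side. All names and defaults are hypothetical, not an actual platform API:

```python
from dataclasses import dataclass, field

@dataclass
class SafetySettings:
    """Hypothetical per-user self-protection settings."""
    muted_users: set[str] = field(default_factory=set)  # users whose chat/voice is silenced
    bubble_radius_m: float = 1.2                        # safety bubble: minimum approach distance

def can_approach(settings: SafetySettings, distance_m: float) -> bool:
    """Other avatars may not enter the user's safety bubble."""
    return distance_m >= settings.bubble_radius_m

def should_render_message(settings: SafetySettings, sender_id: str) -> bool:
    """Messages from muted users are dropped before rendering."""
    return sender_id not in settings.muted_users
```

The point of such a design is that the checks run on the user’s own client, so the protection applies in real time regardless of whether any moderator is watching.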

Some platforms lean towards this self-regulation strategy, where the community itself takes sanctions against users: blocking for a few days, banning… On our side, for example, on The Sandbox we have chosen to create communities of ambassadors who welcome newcomers and explain the rules and how the space works. We also apply this approach for the brands we partner with that have spaces in the metaverse.

We therefore help users both to discover this new space and to maximize the quality of their experience, but our ambassadors are also there to deal with objectionable behavior.

What kinds of harassment are users exposed to on these platforms?

In these spaces, the forms of harassment are many; they go far beyond “written” harassment and can resemble “physical” harassment. Everyone has felt the violence of receiving an aggressive, harassing message: it is violent, and yet it is only characters on a page. But when a user wearing a virtual reality headset feels someone approaching, entering their intimate or personal space, the aggression becomes almost physical. Virtual reality headsets can create trauma very close to what we can experience in “real life”.

The metaverse must not be a lawless space that serves as an outlet for people who feel less and less free in ordinary life, a place where they could free themselves from all constraint and all morality.

That would be deadly and dangerous. One of the most basic human needs is safety. So when you create a world like the metaverse, you have to take care of its inhabitants and treat the avatar as an extension of the “real” person.

This topic is deeply societal because it is linked to consent. Users must not be subjected to unwanted experiences. The metaverse is fabulous when lived as an experience, but users must be in control of that experience and not be forced into behaviors they deem toxic.

Moderation in the metaverse is therefore a “technical” subject, a human one, but above all a political one?

It must be remembered that these spaces are created by private companies; they are therefore governed by their own rules and their own visions of freedom of expression and its limits. I think we need to take a political approach to this topic and think of the metaverse as the organization of a city, of a shared space. This is a powerful challenge: when platforms create worlds, they must invent the rules for those worlds.

This is an issue that must also be considered with states and with national and transnational institutions, because the actors of the metaverse are global actors. Leaving to private corporations, of which we are one, the responsibility of dictating the law in spaces that are less and less virtual poses a real political problem.

In my opinion, the public authorities must act very quickly, at the European level or at least at the national level. It took lawmakers nearly 20 years to regulate web2 and settle on its moderation, so we must be careful not to take another 10 or 15 years to deal with web3 as we did with web2.
If each platform, each world, has different laws, those who want to behave deviantly will go to the most permissive platform, and some platforms will deliberately pay less attention in order to attract a large audience.

I have always considered that Netino, through its moderation activities, has a real political mission in the original sense of the word. But we must be careful not to become mere subcontractors doing whatever a transnational private entity, disconnected from local laws, might ask of us. A big topic, then, and much deeper than simple “moderation”!

As on social networks, is this moderation increasingly handled by AI?

The French Avia law against hateful content on the Internet requires platforms to moderate illegal or hateful content within 24 hours. Managing millions of pieces of content that quickly has required automating moderation: today, 90 to 95% of social network moderation at Netino is done automatically.
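To make that split concrete, here is a minimal sketch of a confidence-threshold routing pipeline of the kind such figures imply: high-confidence content is resolved automatically, and only the ambiguous remainder goes to human moderators. The classifier, thresholds, and labels are assumptions for illustration, not Netino’s actual system:

```python
from typing import Callable

def route(content: str, classify: Callable[[str], float]) -> str:
    """Route content using a toxicity score in [0, 1] from some classifier.

    Confident scores are resolved automatically (the ~90-95% case);
    ambiguous ones are escalated to a human moderator.
    """
    score = classify(content)   # hypothetical model: probability the content is toxic
    if score >= 0.95:
        return "remove"         # confidently toxic: removed automatically
    if score <= 0.05:
        return "publish"        # confidently benign: published automatically
    return "human_review"       # uncertain: queued for a human decision

# Usage with trivial stand-in classifiers:
print(route("hello everyone!", lambda text: 0.01))    # publish
print(route("borderline insult", lambda text: 0.50))  # human_review
```

The thresholds determine the trade-off: the wider the “uncertain” band, the more content reaches humans, and the smaller the automated share becomes.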

On the metaverse side, there is a lot of work and development on automatic moderation. Platforms work in particular with film studios to reproduce aggressive behavior with actors, so that AI can be trained to recognize it. It is therefore an ongoing topic.

In the past, we adapted to the rules of the platform or the brand we were working with, but with the emergence of the metaverse, I think it is the end user who must decide what they accept and what they want to be exposed to. The user must have access to a list of behaviors that they accept or refuse. I strongly believe in this approach, which seems to me the only effective one for live interactions.
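One way to picture this user-controlled model is a per-user consent list of behavior categories that the client checks before rendering an interaction. The category names and the API below are hypothetical, a sketch of the idea rather than any real platform feature:

```python
# Hypothetical behavior categories a platform might let users opt in or out of.
BEHAVIOR_CATEGORIES = {"profanity", "flirting", "close_proximity", "voice_chat"}

class ExposurePreferences:
    """Per-user list of behaviors the user consents to being exposed to."""

    def __init__(self, accepted: set[str]):
        # Ignore unknown categories so stale settings fail safe.
        self.accepted = accepted & BEHAVIOR_CATEGORIES

    def allows(self, behavior: str) -> bool:
        """Anything not explicitly accepted is filtered out by default."""
        return behavior in self.accepted

# A user who accepts voice chat but refuses close physical proximity:
prefs = ExposurePreferences({"voice_chat"})
assert prefs.allows("voice_chat")
assert not prefs.allows("close_proximity")
```

The deny-by-default design matches the consent framing above: the user opts in to what they accept, rather than opting out of what they refuse.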

You work a lot with The Sandbox. Is this expertise and way of working something that can be replicated on other platforms?

What we offer is much more comprehensive than moderation: we provide community management and engagement. We go back a bit to the beginnings of community management, with a desire to humanize these spaces. We therefore have about a hundred “real” collaborators who put on virtual reality headsets to explore the different spaces and engage with users.

We currently have about a hundred people working on The Sandbox space, and of course this can be replicated on other platforms. Beyond the platforms, it is also a real need for brands: they want to create spaces, but don’t necessarily know how to bring their “classic” social media communities into web3. We therefore help them in this transition between the two worlds.
