Policing The Brand New Online World

  • VR and metaverse content moderation is a new field aimed at ensuring safety and improving user experience in the metaverse.

  • Human content moderators are proving to be essential in ensuring safety in the digital world because traditional moderation tools don’t translate well to real-time immersive environments.

  • The immersive nature of the metaverse means that rule-breaking behavior is quite literally multi-dimensional and generally has to be caught in real time, which is why moderators like Ravi Yekkanti play such a central role in keeping these spaces safe.

Moderating the metaverse

In recent years, virtual reality (VR) and the metaverse have become popular online spaces for people to explore, socialize, and even conduct business. However, with the growth of these new digital spaces comes a new field of work: VR and metaverse content moderation. These professionals, like Ravi Yekkanti, work tirelessly to ensure the safety and security of everyone using these platforms.

Yekkanti is part of a team of content moderators who act as a kind of private security force for the metaverse, interacting with the avatars of real people to suss out virtual-reality misbehavior. The metaverse’s safety problem is complex and opaque, and because traditional moderation tools, such as AI-enabled filters on certain words, don’t work well in VR, moderators like Yekkanti are the primary way to ensure safety in the digital world.

Recently, Meta, the company behind Facebook and Oculus, announced that it is lowering the age minimum for its Horizon Worlds platform from 18 to 13, which makes the issue of digital safety in the metaverse even more urgent. However, it’s not clear how many content moderators Meta employs or contracts for Horizon Worlds.

Journalists have reported instances of abusive comments, scamming, sexual assaults, and even a kidnapping orchestrated through Meta’s Oculus. Traditional moderation tools are not effective in dealing with these issues in real-time immersive environments, and human content moderators are proving to be among the most essential solutions.

Moderators have to contend with nuanced safety challenges, and it can take a lot of judgment and emotional intelligence to determine whether something is appropriate.

To deal with the safety issues, tech companies have turned to volunteers and employees like Meta’s community guides, undercover moderators like Yekkanti, and platform features that allow users to manage their safety, like a personal boundary line that keeps other users from getting too close.
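
To make the boundary idea concrete, here is a minimal sketch of how such a feature might be enforced on the server side. The world model, the one-meter radius, and every name in it are assumptions for illustration, not how Horizon Worlds or any other platform actually implements it.

```python
import math
from dataclasses import dataclass

# Assumed boundary radius; real platforms choose (and let users adjust) their own.
PERSONAL_BOUNDARY_METERS = 1.0

@dataclass
class Avatar:
    user_id: str
    x: float
    y: float
    z: float
    boundary_enabled: bool = True

def distance(a: Avatar, b: Avatar) -> float:
    """Straight-line distance between two avatars in the 3D world."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def clamp_approach(mover: Avatar, target: Avatar) -> Avatar:
    """If the mover steps inside the target's personal boundary,
    push them back to the edge of it instead of letting them through."""
    if not target.boundary_enabled:
        return mover
    d = distance(mover, target)
    # If the avatars overlap exactly (d == 0), a real system would pick a
    # fallback push direction; this sketch simply leaves the mover in place.
    if d >= PERSONAL_BOUNDARY_METERS or d == 0:
        return mover
    scale = PERSONAL_BOUNDARY_METERS / d
    return Avatar(
        user_id=mover.user_id,
        x=target.x + (mover.x - target.x) * scale,
        y=target.y + (mover.y - target.y) * scale,
        z=target.z + (mover.z - target.z) * scale,
        boundary_enabled=mover.boundary_enabled,
    )
```

A check along these lines would presumably run on every movement update, with the radius exposed as a setting users can turn on, off, or resize.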

Despite these efforts, many tools built to deal with the billions of potentially harmful words and images on the two-dimensional web don’t work well in VR.

Companies like WebPurify, which provide content moderation services, are on the front line of this new field, working to ensure the safety of users and enforce the rules of the platform. WebPurify has been offering services for metaverse companies since early last year and recently hired Twitter’s former head of trust and safety operations, Alex Popken, to lead the effort.

Popken says the company is “figuring out how to police VR and AR, which is sort of a new territory because you’re really looking at human behavior”. WebPurify’s employees see the safety challenges of these new spaces firsthand: racial and sexual comments are common. Yekkanti says one female moderator on his team interacted with a user who offered to marry her in exchange for a cow when he learned she was Indian.

Moderators learn detailed company safety policies that outline how to catch and report transgressions. Yekkanti works on a game that has a policy that specifies protected categories of people, as defined by characteristics like race, ethnicity, gender, political affiliation, religion, sexual orientation, and refugee status.

Moderators are trained to respond proportionally, using their own judgment. They have the power to mute users who violate policies, remove them from a game, or report them to the company. Content moderation is not a one-size-fits-all approach, and moderators must be trained to navigate nuanced safety challenges.

Expectations about interpersonal space and physical greetings, for example, vary across cultures and users, and different spaces in the metaverse have different community guidelines. Moderators need to be aware of these cultural differences and apply each space’s guidelines appropriately while ensuring that everyone feels safe and respected in the virtual world.

Moderation also means defying expectations about user privacy. Moderators record everything that happens in the game from the time they join to the time they leave, including conversations between players. In practice, that means listening in on conversations even when players are not aware they are being monitored.

WebPurify says it does not listen in on fully private one-on-one conversations, but the routine recording still raises privacy concerns. Tech companies have policies and tools in place to deal with safety issues in the metaverse, yet more transparency and stringent oversight may be needed. Delara Derakhshani, a privacy lawyer who worked at Meta’s Reality Labs until October 2022, says that companies need to be transparent about how they are tackling safety in the metaverse.

As the metaverse continues to grow and evolve, content moderation will only become more important. Moderators like Yekkanti contend with a range of harms, including sexual harassment, racism, and grooming, and they are tasked with catching rule-breaking behavior in real time and judging whether it crosses a line.

While companies like Meta and Roblox have implemented tools and policies to improve safety in the metaverse, there is still a need for human moderators to ensure that users are protected. As the metaverse continues to grow, it will be important for companies to invest in content moderation and provide their moderators with the resources they need to do their jobs effectively.

Yekkanti also acknowledges that there are limitations to his work. He says that the use of artificial intelligence in combination with human moderators could help to solve some of the challenges of moderation in virtual reality.

AI can quickly analyze large volumes of data, including chat logs and user behavior, allowing human moderators to focus on the more nuanced issues that require a human touch. AI can also identify patterns of behavior that indicate harassment or abuse and alert moderators in real-time, potentially preventing harmful interactions before they occur.
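
As a rough illustration of that triage role, the sketch below scores incoming chat messages with a simple keyword-and-repetition heuristic and raises an alert for a human moderator once a sender crosses a threshold. The word list, the thresholds, and the alert_moderator hook are placeholders, not any platform’s real pipeline, and a production system would rely on trained classifiers rather than a handful of phrases.

```python
from collections import defaultdict, deque
from time import time

# Placeholder terms for illustration only; a real system would use trained,
# multilingual, context-aware models instead of a fixed phrase list.
FLAGGED_TERMS = {"kill yourself", "send me pics", "meet me alone"}
ALERT_THRESHOLD = 3      # flagged messages before a human is pinged
WINDOW_SECONDS = 300     # only count flags from the last five minutes

recent_flags = defaultdict(deque)  # user_id -> timestamps of flagged messages

def alert_moderator(user_id: str, message: str) -> None:
    """Stand-in for handing the case to a human moderator's review queue."""
    print(f"[ALERT] review user {user_id}: {message!r}")

def score_message(user_id: str, message: str) -> None:
    """Flag a message if it matches a risky phrase; escalate repeat offenders."""
    now = time()
    text = message.lower()
    if any(term in text for term in FLAGGED_TERMS):
        flags = recent_flags[user_id]
        flags.append(now)
        # Drop flags that have aged out of the time window.
        while flags and now - flags[0] > WINDOW_SECONDS:
            flags.popleft()
        if len(flags) >= ALERT_THRESHOLD:
            alert_moderator(user_id, message)
```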

However, implementing AI in virtual reality is not without its challenges. The immersive nature of VR means that AI algorithms need to be trained on much larger and more complex data sets than those used for traditional social media moderation. They also need to be able to analyze user behavior in three-dimensional spaces, rather than just in text-based chats.
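
The following sketch hints at what analyzing behavior in three dimensions might involve: instead of reading text, it turns a log of avatar positions into a simple signal, how often one user entered another’s personal space, which could then feed a classifier or a moderator alert. The data layout, the sampling assumption, and the half-meter threshold are all invented for the example.

```python
import math
from typing import List, Tuple

# A position sample: (timestamp_seconds, x, y, z) for one avatar.
Sample = Tuple[float, float, float, float]

INTRUSION_METERS = 0.5  # assumed "too close" distance for this example

def intrusion_count(track_a: List[Sample], track_b: List[Sample]) -> int:
    """Count how many aligned samples put avatar A inside avatar B's space.

    Assumes both tracks were sampled at the same timestamps, which keeps
    the example short; a real pipeline would interpolate positions.
    """
    count = 0
    for (ta, ax, ay, az), (tb, bx, by, bz) in zip(track_a, track_b):
        if ta != tb:
            continue
        if math.dist((ax, ay, az), (bx, by, bz)) < INTRUSION_METERS:
            count += 1
    return count
```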

Despite the challenges, Yekkanti believes that content moderation will continue to play a crucial role in ensuring the safety of users in virtual reality. As the metaverse expands and more people engage with it, the need for effective moderation will only increase. It is a complex and challenging task, but one that Yekkanti and his colleagues are committed to. Through their work, they are helping to create a safer and more welcoming virtual world for everyone.

The rise of virtual reality and the metaverse has created an entirely new field of content moderation, and as these spaces evolve, companies will need to keep investing in the people and technologies required to keep users safe. The challenges are complex and multifaceted, but with human moderators like Yekkanti working alongside AI, that safer, more welcoming virtual world is within reach.

Thanks for reading Solanews, and remember to follow our social media channels for more.
