The future of the metaverse looks shakier than ever. Tech companies that have bought fully into the concept, like Facebook-turned-Meta and Disney, are facing the realities of building out something that ostensibly already exists but has failed to achieve any real popularity. Even members of the video game industry, which has been exploring the field through virtual worlds like Second Life for years, have doubts that it will ever live up to its promise. But in this nascent stage, there is also potential: If the metaverse does take off, the people building it now could avoid repeating the mistakes of the past.
As it stands, the metaverse is “not yet set,” says Micaela Mantegna, an affiliate at the Berkman Klein Center at Harvard. Because of this, it might still be possible to limit the rampant toxicity that has infiltrated the web and social media. The metaverse is still connected to its more organic roots, and if those populating it—be they people or corporations—can remember the lessons learned about online safety and moderation, the metaverse could be a less horrible place. Put another way, “we already ruined one internet,” Mantegna said during a recent panel at the Game Developer Conference, but there’s hope for the one to come.
Early metaverse experiences, like Linden Lab’s Second Life, allow users to explore identities and build new worlds. These ideas became the backbone for platforms like Roblox and VRChat, which turn devices into hubs for social interaction and community creation. More recently, as companies like Meta have moved to transform virtual spaces like Horizon Worlds into mega-platforms, those smaller communities have felt pushed aside. There is less onus on users to craft their own worlds; instead, they navigate the clunky, no-legged future put before them by corporations.
Harassment and other issues have inevitably crept into these spaces. Technology will be misused, Mantegna says, and it’s crucial to start thinking early on about ways it might be abused. Right now, there’s a huge lack of transparency around how the metaverse will work. Any system using algorithms, for example, is vulnerable to bias, whether it impacts economically disadvantaged users, people of color, marginalized communities, or others. It’s also still unclear what the metaverse’s true ecological impact will be. And then there are the sticky questions about surveillance and data privacy. “How are we going to ensure we are not being manipulated in these spaces?” Mantegna says.
Some of these issues could be addressed with robust—and enforceable—laws and ethical guidelines. Regulation probably shouldn’t be left up to the corporations behind metaverse endeavors. But as other platforms have demonstrated, laws cannot match the speed of the internet. You don’t have to look far for examples; earlier this year, streamers who’d been deepfaked found their options for justice to be severely limited.
Most legislation seeking to address these issues attempts to apply “meatspace laws” to web problems, says Ryan Black, a lawyer focused on the video game industry who appeared on the GDC panel alongside Mantegna. Such laws are also too “territorial,” Black tells WIRED, to meaningfully affect any given platform. “To the extent that there aren’t regulations and laws, we’ve essentially ceded control and authority to the operator via their terms and conditions,” he says. The relationship people have with the modern internet, he adds, is “very much a provider-to-user” one.