The metaverse is a complex space. So, any issues surrounding its implementation and use are bound to be equally tricky.
But what happens when all the data mined from the ’verse is up for grabs? What privacy-based issues will marketers need to overcome in the brave new world of virtual reality?
Unfortunately, without a good deal of data privacy structure and implementation, everyone and their mum will know you like to hop around the Zuckerberg Metaverse as a little cat wearing a little hat. Not ideal.
Securing the metaverse poses different challenges to other digital spaces, with monitoring and detecting attacks being far more complex than on current, popular platforms.
According to Vasu Jakkal, corporate vice president of security, compliance, and identity at Microsoft:
“With the metaverse, you’re going to have an explosion of devices. You’re going to have an explosion of infrastructure. You’re going to have an explosion of apps and data.
And so it’s just increased your attack surface by an order of magnitude.”
In fact, a recent survey has found it’s not just marketers concerned with metaverse data privacy. Consumers are equally worried. Among the respondents, 50% were worried about user identity issues, 47% were concerned about forced surveillance users might have to go through, and 45% were considering the potential abuse of personal information.
Beyond this, there are plenty of ways the metaverse might impact user privacy, including:
But let’s have a deeper look at the issues that will impact marketers directly.
Currently, there are not that many safety regulations for the metaverse. This is because the space is so new and so different to platforms that have come before it.
But as the platform has the potential to reach billions of people, it's time to start considering how data and personal privacy will look in the metaverse.
Christopher Wylie, author of Mindfuck: Cambridge Analytica and The Plot to Break America, suggests privacy should start at the root.
“A big part of the issue is that we are not framing the conversation around those who are responsible. The engineers and architects. The things that are causing harm are the products of architecture and engineering,” he says.
“When you look at how we relate to other products of technology and other products of engineering, whether that’s in aerospace, civil engineering, pharmaceuticals, etc; there are safety standards. There are inspections. We need to start scrutinising the technological constructions on the internet to ensure that there are regulatory frameworks in place to create a safer environment.”
So, there will be a need for current privacy regulations to either be updated or new frameworks put in place. This is vital to make sure industry standards are clearly defined, and to ensure applicability and consistency in this new metaverse context. This could be based on the amount of data collected, how it is shared with third parties, and definitely how to ensure adequate consent is obtained.
Second Life is a good example of a pre-metaverse space implementing its own version of privacy regulations on a smaller scale. This is an important step for companies to take while wider-ranging legislation is lagging behind. Self-regulation is likely to improve the relationship between user and company and will create a feeling of trust in an unfamiliar space.
Lawrence Lessig, in his book “Code and Other Laws of Cyberspace”, detailed four modalities of regulation: social norms, law, market, and architecture. Within Second Life, all four have been used.
The way the site approaches ‘law’ is particularly in-depth. Participants in Second Life sign a contract with the company, Linden Lab, when they register for the game, agreeing to be bound by the provisions in those documents. This gives Linden Lab an instrument to regulate the behaviour of players.
Then, on top of the law, there is also regulation at the code level. When these rules are embedded in the code, such as a ban on bugs, the enforcement is automatic. This means the software will simply make the banned behaviour impossible.
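The idea of “regulation at the code level” can be sketched in a few lines. In this hypothetical example (the names and the list of banned actions are illustrative, not Second Life’s actual implementation), a rule lives in the platform’s software itself, so enforcement is automatic: the banned behaviour simply never executes.

```python
# A minimal sketch of code-level regulation: banned actions are rejected
# before they run, so no human enforcement is needed.
# All names (World, BANNED_ACTIONS) are hypothetical illustrations.

BANNED_ACTIONS = {"duplicate_currency", "enter_private_parcel"}

class World:
    def __init__(self):
        self.log = []  # record of actions that actually happened

    def perform(self, player: str, action: str) -> bool:
        # The rule is embedded in the platform's code, not in a contract:
        # disallowed behaviour is made impossible, automatically.
        if action in BANNED_ACTIONS:
            return False
        self.log.append((player, action))
        return True

world = World()
print(world.perform("avatar_1", "wave"))                # True: allowed
print(world.perform("avatar_1", "duplicate_currency"))  # False: blocked by code
```

The contrast with contract-based regulation is that nothing here depends on detecting a breach after the fact; the code is the enforcement.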
So, within the metaverse, companies may consider the implementation of legislation, or code, or both. But either way, it’ll be vital for top-down laws and regulation to also be created to ensure consumer privacy.
If the metaverse starts off on a bad foot, by ignoring security and privacy, there might be issues with widespread adoption. But this also means that there's an opportunity for platforms which comply with security and privacy to get ahead of the game, thanks to a consumer desire for clarity and trust.
“It has a lot to do with brand and with trust,” said Caroline Wong, chief strategy officer at cyber firm Cobalt.
“If a consumer has a choice of Platform A — which they believe to be secure and private and doing all the right things — and Platform B, which they think will probably lead to getting hacked if they join, then the choice is clear.”
Given the immersive nature of VR and AR headsets and technologies, companies could look into deeply personal, and invasive, parts of their customers' lives. This could be their blood pressure, eye tracking, breathing rates, or other health aspects. Think about everything your smartwatch can do, but even more in-depth. And creepy.
A company could examine your heart rate, for example, which might increase when you see something interesting.
Then, as on social media, companies could take that data and sell it to advertisers, who could serve up a pop-up featuring a relevant product. So, your own personal health information ends up being used to tailor ads toward you. Plus, this data could also be fed into the companies’ algorithms to keep you on their platforms for longer.
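The pipeline described above can be sketched very simply. This is a hypothetical illustration, not any real platform’s API: a biometric signal (heart rate) is compared against the user’s own baseline, and a spike is flagged as “interest” that could then be handed to an ad system.

```python
# Hypothetical sketch: flag moments where a user's heart rate spikes
# above their baseline, as a crude proxy for "interest".
from statistics import mean

def interest_spikes(heart_rates: list[int], threshold: float = 1.2) -> list[int]:
    """Return indices where heart rate exceeds `threshold` x the user's baseline."""
    baseline = mean(heart_rates)
    return [i for i, hr in enumerate(heart_rates) if hr > baseline * threshold]

# Samples taken while the user looks at a sequence of virtual objects.
samples = [70, 72, 71, 95, 73, 70]
print(interest_spikes(samples))  # [3] -- the spike flags the object seen at index 3
```

Even this toy version shows why the data is so sensitive: a few integers per second are enough to infer what caught your attention.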
If companies refuse to treat privacy as integral to the metaverse, we’ll see history repeat itself. When millions of Americans learned in 2018 that political consulting firm Cambridge Analytica had used personal data from Facebook to profile them, it forced a huge change in how Facebook approached its customer relationships. It also helped to secure the passage of a comprehensive consumer privacy law in California, the CCPA.
However, companies and governmental bodies are still falling behind in this regard. Except for Virginia and Colorado, most US states still have no such legislation, and it can even be argued that Facebook and other companies have only ramped up and fine-tuned their data collection since. So, creating this legislation and transparency will be vital for the new space of the metaverse, in order to retain customer trust in the long run.
It's not just about how these companies will source the data, but how they will store the data once they get it.
As we've mentioned, many mistakes were made in Web 2.0, with the explosion of portable tech, mobile, and social media. Basically, data privacy was thrown out the window, eventually leading to scandals such as Cambridge Analytica. All this meant any trust in Big Tech was lost, and the shine of the technological revolution wore off sharpish.
The Data Protection Act of 1998 was only followed by the GDPR in 2018, meaning it took 20 years to upgrade to a more comprehensive set of regulations. If this neglectful approach continues, Web 3.0 is likely to repeat the same mistakes as Web 2.0.
Like we’ve mentioned before, the wearable tech needed to create the metaverse presents a bunch of super creepy privacy concerns. A vast amount of personal data can be collected on participants with ease.
Compared to traditional social media, metaverse platforms can track individuals in a much more intimate manner, monitoring physiological responses and biometric data.
The depth of this information would be unparalleled. Companies will be granted a deep understanding of users’ behaviour, which can be used to tailor highly specific campaigns in an insanely targeted way.
Louis Rosenberg, a 30-year veteran of AR development and the CEO of Unanimous AI, says companies should be required to inform users when they track exactly where those users are looking in the metaverse.
He claims that software providers "should not be allowed to store this data for more than the short periods of time required to mediate whatever experience is being simulated. That will reduce the degree to which they can characterise our behaviours over time."
So, it's imperative that data is collected ethically and cannot be used for malpractice. This could be enforced by a wealth of automated systems introduced to protect data integrity in virtual worlds, and even to guard against scammers.
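Rosenberg’s suggestion of short-lived storage is one safeguard that’s easy to picture in code. The sketch below is a hypothetical illustration (the retention window and record shape are assumptions): gaze or biometric records are kept only long enough to mediate the current experience, then purged automatically, so long-term behavioural profiles cannot accumulate.

```python
# A minimal sketch of automated short-lived retention for sensitive
# tracking data. RETENTION_SECONDS and the record shape are assumptions.
import time

RETENTION_SECONDS = 5.0

class ShortLivedStore:
    def __init__(self):
        self._records = []  # list of (timestamp, payload)

    def add(self, payload, now=None):
        now = time.time() if now is None else now
        self._purge(now)
        self._records.append((now, payload))

    def snapshot(self, now=None):
        now = time.time() if now is None else now
        self._purge(now)
        return [p for _, p in self._records]

    def _purge(self, now):
        # Anything older than the retention window is dropped automatically.
        self._records = [(t, p) for t, p in self._records
                         if now - t <= RETENTION_SECONDS]

store = ShortLivedStore()
store.add({"gaze": "billboard_a"}, now=0.0)
print(store.snapshot(now=1.0))   # record still inside the window
print(store.snapshot(now=10.0))  # record purged automatically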
One guarantee a blockchain-based platform can offer, however, is that the data recorded is accurate and untampered with. Unlike centralised data centres, blockchains are digital ledgers that create a new block for every transaction on the network. So, compared with networks where data can be quietly manipulated, data written to the ledger is effectively unchangeable. This can be useful for users and companies alike, as the information is as accurate as it can be.
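The “unchangeable” property comes from chaining: each block stores the hash of the one before it, so rewriting any earlier record breaks every hash that follows. The toy example below illustrates the idea; it is a deliberate simplification, not any metaverse platform’s actual ledger.

```python
# A toy hash chain showing why ledger data is tamper-evident.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    # Every block must correctly reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "avatar_1 buys virtual hat")
append_block(chain, "avatar_2 sells parcel")
print(is_valid(chain))  # True: the chain is intact

chain[0]["data"] = "avatar_1 buys nothing"  # attempt to rewrite history
print(is_valid(chain))  # False: the tampering is immediately detectable
```

Real blockchains add consensus and distribution on top of this, but the tamper-evidence the article describes is ultimately this hash-linking.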
The metaverse is set to function through a few base technologies. These are: virtual reality, augmented reality, machine learning, and AI. Since these are behavioural learning technologies, they can collect massive amounts of personal data with speed and ease.
In fact, in recent years, companies utilising AI technologies have attracted the scrutiny of Canada’s privacy commissioners for the unlawful mass surveillance and collection of biometric data.
“AI and machine learning give companies like Meta/Facebook the ability to aggregate vast quantities of data that influence every aspect of our lives,” says Phillip Dutton, co-CEO and co-founder at Solidatus, a technology firm that is currently focused on data privacy.
That is why, he says, “regulators and the public are demanding transparency so they can trust their data is being used and protected appropriately.”
Currently, Meta is attempting to lead the way in the ’verse. Part of its metaverse vision is Horizon Workrooms, virtual meeting software for immersive, mixed-reality business meetings. So, this opens up the possibility of metaverse-based workspaces.
This could be a privacy issue, with workers being forced to participate in a new work ethos or lose their jobs. “If your employer decides they’re now a metaverse company, you have to give out way more personal data to a company that’s demonstrated that it lies whenever it is in its best interests,” says Facebook whistleblower Frances Haugen.
As Kavya Pearlman, founder of the nonprofit Extended Reality Safety Initiative told the Washington Post, VR tech could enable employers to monitor eye-tracking and facial movements to analyse whether workers are “paying enough attention” to virtual presentations, and hiring managers could conceivably mine VR data in an attempt to assess a job applicant’s “cognitive load” during an interview.
Digital rights advocacy group Electronic Frontier Foundation and the Extended Reality Safety Initiative, a nonprofit developing standards and advising lawmakers on safety in VR, have raised the alarm on the privacy threats Big Tech is heading towards in their versions of the metaverse.
“In some respects, a 3-D headset is not really any different than a 3-D monitor,” said Jon Callas, director of technology projects at EFF. “But then there are other things being done that could be extraordinarily intrusive.”
As we’ve mentioned, there are very few limits on what companies can collect about their users. In fact, investigations by The Washington Post have found companies share personal data such as names, emails, and locations with third parties without disclosing who those third parties are.
So, as interactions move from screens to headsets, the potential for invasive data collection grows. As a concept, VR isn’t a privacy concern in itself, but in the hands of big tech companies it becomes a serious opportunity for exactly that kind of invasive collection.
In fact, Facebook currently doesn’t have complete access to customer data on the Apple App Store and Google Play Store. This has caused problems for Facebook’s business, even leading the company to run campaigns against Apple’s privacy decisions.
It’s unlikely, then, they’ll make the same “mistake” in the metaverse space. It might be why they’re insisting on building their own hardware and operating system.
The Extended Reality Safety Initiative, mentioned above, was formed to tackle exactly these issues. It has created its own oversight panel, and is planning to issue guidance to lawmakers on the privacy risks of VR, as well as guidance to companies on how to handle various privacy and cybersecurity concerns.
But what are some actions companies can take to improve their experience with privacy in the space? Well, you might consider: