Taking Privacy and VR Seriously

They’re Watching You. Really.

One of the most vital, yet elusive, legal issues surrounding VR is privacy.

The concept has its roots in the oldest, most basic elements of American law. It is grounded in constitutional protections – most notably the Fourth Amendment’s guarantee against unreasonable searches – and courts have extended it over time to cover issues such as birth control and other intimate personal decisions. However – and this is a huge “however” – almost all of the judicial interpretation of the meaning of privacy took place long before the development of virtual reality. The technology raises an entirely new set of privacy issues, which may require an entirely new understanding of some of the most basic building blocks of the Internet itself.

Let’s back up a little. When you go online, you surrender an enormous amount of privacy, often in ways you don’t even suspect. A complete list would run a hundred pages, but for a somewhat startling introduction, take a look at your Google Dashboard (which you can reach by Googling the term) and see exactly how much information Google has about you – information it uses to target ads, among other things. For example, Google scans and analyzes all of your Gmail correspondence in order to sharpen its profile of you.

As another example, consider the information Facebook – which, with 1.79 billion users and growing, reaches roughly a quarter of the entire human race – gathers about you. It knows, and is analyzing, things like whether you purchase a lot of alcohol, or whom you are inclined to vote for. And again, all of this is used both to target ads to you and to adjust what you’re shown on Facebook.

But notice one thing about all of this information. It’s all data, in one way or another. Or, to put it another way, it all arrives in Facebook’s databases through your keyboard, phone, mouse, or touch screen. There’s a device involved, and Facebook is vacuuming up enough data to fill four immense data centers (in Oregon, Sweden, Iowa, and Texas), with more under construction. That makes the entire issue of what’s private and what’s not relatively easy both to define and to control.

With VR, however, all bets seem to be off, because VR privacy is more closely connected to surveillance than to anything else. And nobody is prepared to think through the implications of that – least of all lawmakers.

The first area where this is happening is head movement. A VR headset is fitted with gyroscopic and other motion sensors that track what you’re doing with your head – in effect, where you’re looking. Since this is what creates the entire virtual environment, it’s obviously important. It also gives VR technologists an ideal way to determine where in the environment you’re looking – what’s getting your attention. Basically, they’re watching you to see what you’re interested in. And recording it. And tracking it.
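To make the mechanics concrete, here is a minimal sketch of what such attention logging could look like. Everything here – the region names, the coordinate scheme, the function names – is invented for illustration; it is not drawn from any real headset SDK. The idea is simply that head orientation samples, which the headset must produce anyway to render the scene, can be mapped to labeled regions and accumulated into a per-region “dwell time” record.

```python
from collections import defaultdict

# Hypothetical sketch: map the head tracker's reported yaw/pitch
# (degrees) to a labeled region of the scene, and accumulate
# "dwell time" per region.
SCENE_REGIONS = {
    # name: (yaw_min, yaw_max, pitch_min, pitch_max)
    "billboard_ad": (-30, 30, 0, 40),
    "storefront": (40, 90, -10, 30),
}

def region_of_gaze(yaw, pitch):
    """Return the scene region the head is pointed at, or None."""
    for name, (y0, y1, p0, p1) in SCENE_REGIONS.items():
        if y0 <= yaw <= y1 and p0 <= pitch <= p1:
            return name
    return None

def accumulate_attention(samples):
    """samples: chronological (timestamp_sec, yaw, pitch) tuples from
    the head tracker. Returns seconds of attention per region."""
    dwell = defaultdict(float)
    for (t0, yaw, pitch), (t1, _, _) in zip(samples, samples[1:]):
        region = region_of_gaze(yaw, pitch)
        if region is not None:
            dwell[region] += t1 - t0
    return dict(dwell)
```

Feed in a few seconds of orientation samples and out comes a per-region attention log – exactly the kind of record an advertiser would prize, produced as a byproduct of data the headset already collects.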

The next step goes well beyond tracking where you look: it’s tracking your emotions. An Oculus Rift headset is engineered to accept sensors that closely track eye movements and facial expressions, ostensibly so that users can have avatars whose facial expressions match their own. Combined with bodily motion, like gestures, however, the same data makes it entirely possible to build system tools that track emotions. Hand gestures, posture, body position, and all sorts of other aspects of the way users inhabit a virtual world are well on the way to creating the capability to know not simply what users think or do, but how they feel and react, and what interests them. The VR environment, in other words, can monitor your gestures and facial expressions and begin to log your emotions, as well as the circumstances that influence them. If the mention of Hillary Clinton generates a certain set of movements, they can know you’re a fan.
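Even a crude sketch shows how little machinery this requires. The signal names and weights below are invented for illustration – no real VR product is quoted here, and a real system would use trained models rather than hand-set weights – but the privacy point is the same either way: the inputs are your body, not your keyboard.

```python
# Hypothetical sketch: a rule-based "affect" guess from tracked body
# signals. The signal names and weights are invented for illustration.
def infer_reaction(signals):
    """signals: dict of measurements normalized to 0..1, e.g. smile
    intensity from facial sensors, lean-in from posture tracking,
    gesture rate from hand controllers."""
    score = (0.5 * signals.get("smile", 0.0)
             + 0.3 * signals.get("lean_in", 0.0)
             + 0.2 * signals.get("gesture_rate", 0.0))
    if score > 0.6:
        return "positive"
    if score < 0.3:
        return "negative"
    return "neutral"
```

Pair a log of these guesses with a log of what was on screen at each moment, and the system has a record not just of what you saw, but of how you felt about it.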

This is already well beyond the scope of any regulatory framework devised to define and control privacy. And it keeps going. First of all, this data is not restricted to law enforcement or government agencies, or anything like that. It’s gathered by businesses, and is increasingly available to marketers and advertisers as well.

Second – and here’s where it gets really spooky – this information can also be used to influence users’ behavior and beliefs. In short, the virtual environment can be altered to reinforce certain beliefs and behaviors, and to inhibit others. We’re getting into rat/maze/cheese territory here. Avatars and environments encountered in a virtual experience can be welcoming and supportive to visitors who hold one set of beliefs, or behave in a certain way, and dismissive, negative, or rejecting toward a different category of visitors. The virtual environment, ultimately, can become a psychological control mechanism.

The efficacy of this sort of stimulus/response cycle in the real world is already well known: human behavior is profoundly shaped by environment. The most famous example is Philip Zimbardo’s Stanford Prison Experiment, in which college students assigned to play the roles of either prisoner or guard in a fictitious (i.e., virtual, by 1970s standards) prison rapidly began to exhibit disturbing levels of sadism and cruelty. The experiment, planned to run two weeks, had to be called off after six days. Decades of follow-up research have reinforced the underlying point: people’s behavior is strongly affected by their environment.

Which leaves us where? Remember, VR technology is moving forward at warp speed. The current big, clunky headsets are the Sony Betamax of VR – the devices are getting smaller, lighter, more powerful, and less obtrusive very, very quickly. And it looks as if we’re heading toward a place where information about our most emotional, subjective – and therefore most human – behavioral traits will be used to build virtual environments specifically designed to influence our behavior and beliefs.

To some extent, this isn’t new. Propaganda, editorials, and political satire, among many other avenues, have been used toward precisely the same goal for many, many years. The ability to freely express yourself in an attempt to influence others is considered foundational to civil rights, and is broadly protected by the First Amendment. Yet VR is different and unique because it is an actual environment. It’s far easier to put a book down and think through what’s in it than to remember, while immersed, that an avatar in a virtual environment is not real – and that you’re being both closely tracked and precisely manipulated.

Which brings us back to the legal issue at hand: privacy. As it has always been defined – and as countless court cases, articles, and debates have hammered out – privacy means the right to be let alone, as well as the right to control who sees your personal information and what they do with it. Thanks to HIPAA, for instance, there are very strict regulations on who can see your medical information, and what they can use it for.

However, as we’ve seen with Facebook, consumers are increasingly trading privacy for technological access and features, and the boundaries are shifting. The issue is becoming not simply what people can do with your information, but what your information can do with – and to – you. Which is a whole different set of issues. In effect, privacy is morphing into a variation on the right not to listen. Or, to quote William O. Douglas: “The right to be let alone is the beginning of all freedom.” VR may be the laboratory for determining, once again, what that actually means.



Peter Darling
