Flying cars. Hotels in space. Robotic sexual partners. Some technologies seem destined to remain vaporware: great ideas in theory that never quite become practical, widespread products. But next year could be the year in which we tick at least one much-anticipated item off the list. Finally, we may be getting virtual reality.
VR is, of course, much anticipated by gamers seeking to immerse themselves in virtual worlds as fully as possible. But its usefulness extends beyond escapism. Because virtual reality entails tracking a person as closely as possible, so as to match their real movements to what is happening in computer-generated environments, this technology will collect a wealth of data for anthropologists, sociologists, psychologists, and others interested in people and society. Avatars are useful tools for both communication and storytelling, two of the most fundamentally human activities. They therefore have great potential not just for representing us in online games and social networks, but for how we learn, how we work, and much more besides.
In this essay we shall take a look at some of the ways VR has been used.
LEARNING ABOUT OURSELVES IN VR
One study intended to test whether real-world behaviours carry over into virtual worlds is known as the ‘Stigma Study’. From neurological studies, we know that when a person encounters a ‘stigmatized other’, their brain shows a pattern of activity indicating that they feel threatened. To test whether stigmatization is carried over into virtual reality, Jim Blascovich and Jeremy Bailenson arranged for participants in an experiment to meet ‘Sally’ both in real life and in virtual reality, under a variety of conditions:
1: Some participants met ‘Sally’ with a birthmark on her face and also met her avatar which likewise bore a birthmark.
2: Some participants met ‘Sally’ with a birthmark, whereas her avatar had none.
3: Some encountered ‘Sally’ with no birthmark and also met her avatar, which did have a birthmark.
4: Some encountered both ‘Sally’ and her avatar without a birthmark.
The results of these studies showed that people were initially threatened by ‘Sally’ only if she bore the birthmark in the physical world. But within four minutes, participants were threatened only if Sally’s avatar had a birthmark.
What this tells us is that people adjust to virtual reality and accept it as ‘real’, and carry their prejudices over into the computer-generated world. The slight delay also tells us that we take a short while to adjust to the virtual world but come to accept it as grounded reality. In that sense, virtual reality is somewhat like those prism glasses which make the world seem upside down. Not surprisingly, it makes for a disorientating experience to wear such glasses, at least at first. But then the brain adjusts and the subject comes to view their topsy-turvy world as normal. If they then remove the glasses, grounded reality (which, by definition, is right way up) seems upside down to them. Just as people’s brains are neurophysically wired to ‘right’ sensory data, they also adapt to VR and behave accordingly.
As well as using virtual reality to gain insights into human behaviour, we also find instances where the technology is used to alter behaviour. The technologies that make virtual reality possible can be used to create a virtual mirror which users can approach to observe themselves (or rather, their avatar). But, being a virtual mirror, the reflection can do things that would not be possible in the physical world. It can, for example, morph into another person. In a set of experiments, participants stood in front of a virtual mirror in which they viewed themselves either at their current age or as elderly persons. This was followed by a twenty-minute conversation with another person about their life in the virtual world. Upon exiting the virtual world, the participants who had viewed themselves in the elderly condition budgeted twice as much money towards retirement as those who saw their reflection at their current age.
One thing that virtual reality has long been known for is its ability to create an illusion of closeness. Gamers commonly work with and compete against other players, sharing a space in the virtual world while simultaneously being physically positioned far apart, perhaps on separate continents. As in the case of the virtual mirror, the technology lets us not only reproduce reality but also to do things that would be physically impossible. For example, in the real world it’s not possible for two people to literally share the same space, but in virtual reality there is nothing to prevent the rules governing the computer-generated environment being written so as to allow two bodies to overlap.
Professor Ruzena Bajcsy has developed tracking systems so precise that they are capable of capturing every single joint and movement. Studies have been done to see if people can learn physical movements more successfully by sharing the body of an expert, compared to watching an expert in a traditional video tutorial. Jim Blascovich and Jeremy Bailenson investigated whether the martial art tai chi could be better learned in a virtual world where body overlapping is possible, and their results showed that subjects who could share the body space of an expert did indeed perform substantially better than those who simply watched a video tutorial.
The ability to quickly and easily make changes to a virtual environment, and particularly the ability to reproduce dangerous situations while totally avoiding the possibility of any real harm, makes virtual reality a very useful tool for treating phobias or helping patients come to terms with traumatic events. A common method for helping sufferers of arachnophobia and other irrational fears is systematic desensitization, whereby the patient undergoes increasingly intimate encounters with their object of dread. In the case of spiders, this could initially consist of entering a room in which there is a spider in a glass tank covered by a cloth. Then, in a subsequent session, the patient is in a room in which the cloth is removed and the spider in the tank is in clear sight. Step by step, the patient becomes gradually desensitized, to the point where they are confident enough to allow a tarantula to crawl up their arm.
The problem with this technique is that it entails working with live creatures that need looking after, and that costs time and money. And, as you can imagine, treating something like a fear of flying is costlier still. But, of course, by using virtual reality the costs can be dramatically lowered and we are also able to exercise a degree of control that would not be possible using live actors or mechanical devices.
Working with Dr JoAnn Difede, assistant professor of psychiatry at NewYork-Presbyterian Hospital, Dr Hunter Hoffman (who had previously found that the escapism of virtual reality is powerful enough to reduce burns victims’ perception of pain by fifty to ninety percent while their dressings are being changed) developed a virtual recreation of the attack on the Twin Towers. Little by little, those traumatized by the events of 9/11 encounter increasingly intense recreations, beginning with looking up at the World Trade Center with no airplanes flying by, let alone crashing, and then gradually more and more elements are added in, such as explosions and the sounds of people panicking. Using this technique, patients who had not responded to any other psychological treatment showed dramatic improvement.
And, as noted, the cost of doing this virtually is negligible compared to real life. And as Moore’s law marches on, the costs keep going down. When Dr Hoffman started working with burns victims in 1996, a decent VR machine cost about $45,000. In 2016, when the likes of the Oculus Rift are expected to launch, a budget of under $1,000 will almost certainly get you a VR setup as good as, if not better than, that ’90s example.
IMMERSION VERSUS PRESENCE
To those who have never experienced a VR setup, it may be hard to believe that fictional recreations can help with real trauma. Everybody knows that something virtual is not real, so how can being in a pretend airplane cure somebody’s fear of flying?
But to think that way would be to ignore the power of ‘presence’. Presence is not the same thing as immersion. Videogames have long achieved a sense of immersion: the capability to draw players into the game, investing them in their avatar and the challenges they set out to beat. You do not need photorealism to achieve immersion, but you do need consistency of rules. So, for example, if the game prevents you from jumping over what looks like a totally clearable obstacle for some arbitrary reason, your attention is directed to the fact that you are just playing a game. In some ways, realism can work against immersion, because it is much harder to achieve consistency of rules that match reality than in a simple fantasy game that stays true to its own internal logic. You only have to watch the documentary ‘The King of Kong’ (about two rival Donkey Kong champions competing to achieve the highest possible score) to see how absorbed one can become with simple graphics and consistent rules.
But, no matter how engrossing a videogame can be, immersion is not the same thing as presence. The way videogames are traditionally played, the avatar is something you control and the environment it is in is observed from afar, viewed on a monitor. When videogame technology achieves presence, though, you perceive yourself as literally inhabiting a VR environment.
As with immersion, photorealism is not the most important thing for achieving presence. What matters is that the display offers a wide enough field of view to prevent one from seeing its edges, and that the tracking and rendering technology is capable of updating the point of view fast enough to avoid noticeable latency. According to John Carmack, “twenty milliseconds or less will provide the minimum level of latency deemed acceptable”, and if it can be reduced to 18 milliseconds or less, the experience will be perceived as immediate, meaning you can move your head and redirect your gaze in a way that feels entirely natural.
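To make the latency requirement concrete, one can think of the total ‘motion-to-photon’ delay as the sum of every stage between head movement and updated display. The following is a minimal sketch of such a budget check; the stage names and timings are invented for illustration, not measurements from any real headset.

```python
# Hypothetical sketch: checking a motion-to-photon latency budget
# against the ~20 ms acceptability threshold cited above.

BUDGET_MS = 20.0  # Carmack's acceptability threshold

def total_latency(stages_ms):
    """Sum per-stage latencies (sensing, fusion, rendering, scanout)."""
    return sum(stages_ms.values())

# Illustrative per-stage timings in milliseconds (assumptions, not data)
stages = {
    "imu_sampling": 1.0,
    "sensor_fusion": 2.0,
    "game_logic": 4.0,
    "render": 8.0,
    "display_scanout": 4.0,
}

latency = total_latency(stages)
print(f"motion-to-photon: {latency:.1f} ms "
      f"({'OK' if latency <= BUDGET_MS else 'too slow'})")
```

The point of the exercise is that no single stage dominates; presence is lost or kept by the sum of many small delays.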
DARE YOU CROSS ‘THE PIT’?
A common way of testing whether a VR setup has achieved presence is to have subjects undergo ‘The Pit’. As its name suggests, this is a VR experience in which you find yourself standing before a pit. Not a shallow pit, mind you, but a very long drop. There is also a plank spanning the pit (plus a real plank in the actual room in which the experiment takes place) and people are challenged to walk across the real plank while they perceive themselves as walking across the sheer drop.
Now, obviously, there is no pit in real life and the subjects know this. Nevertheless, according to Blascovich and Bailenson, one in three adults cannot summon up the courage to walk the plank, and those that do try struggle to maintain balance just as if they were really trying to cross a long drop. This happens because the brain’s perceptual systems (ones operating below the level of conscious awareness) are satisfied that the experience is real. And no matter how much you tell yourself the experience is only virtual, your brain insists you are doing something risky like standing close to the edge of a precipice.
Tracking and rendering technology has long been used to convince the mind to accept something artificial as natural. Undoubtedly the most commonly used example is the telephone. When you say something during a phone conversation, the inbuilt microphone ‘tracks’ your voice by digitizing it. And what the listener hears is not really your voice but a reproduction that approximates the sound of your voice, ‘rendered’ by his or her phone’s inbuilt speaker. Because the sounds being heard are so close to those of a human voice, the mind believes that is what it is listening to, and we have long since adopted the attitude that a phone conversation is a direct two-way conversation between people who are physically distant, not a conversation mediated by artificial sounds that repeat what is being said.
So, if tracking and rendering works well enough to achieve presence, the more primitive parts of the brain will be convinced the experience is real. For that reason, VR has proven extremely useful as a tool of therapy for people with phobias and other traumas that can be treated by repeat exposure to whatever threatens them. It also means we have an unprecedented ability to record people’s actions in minute detail and learn more about ourselves and how to make ourselves better people.
I WAS MADE A RACIST
A word of caution, though: sometimes the results are not what was expected. Consider the case of Victoria Groom, a graduate student who worked with Blascovich and Bailenson. She figured that if white participants could become black people in virtual reality, the experience would help reduce racial stereotyping. Instead, perceiving oneself as black in a virtual mirror actually increased people’s scores on standard measures of racism, and this was the case whether the participants themselves were black or white. What this study suggests is that simply assigning a racial identity to somebody makes the stereotype more salient. However, it has been demonstrated that face-to-face contact with members of out-groups, and taking a stigmatized other’s perspective, can reduce racism. In that case, VR systems like the ones the US Army has developed may prove more useful. The US Army has invested in virtual recreations of foreign cultures like those of the Middle East, which function as training grounds allowing soldiers to immerse themselves fully in different customs so as to better understand and interact with locals in respectful ways. With VR, we too can immerse ourselves in cultures different to our own with far less expense and inconvenience than flying to exotic locations, and use the experience to reduce negative stereotypes.
Schooling stands to benefit greatly from virtual reality. Given what was said about the effectiveness in reducing the costs of treating phobias in VR, one might conclude that the same thing would be true of doing education in VR. It is no doubt true that, once physical buildings are replaced with virtual classrooms and lecture halls, substantial reductions in the cost of providing venues for education would be possible. But the benefits go much further than simply saving money.
When education is tied to physical buildings, there is always a limit to how many people can attend. Only a certain number of students can be packed into a physical classroom or lecture hall, which is why access to the best schools and universities is a competition for a limited number of places. But a virtual classroom or lecture hall could comfortably accommodate student numbers of a size that would be totally unmanageable in a physical space. Hundreds, thousands, maybe millions of pupils could easily fit into a virtual classroom.
The idea of a classroom of a million students may bring to mind an image of a comically oversized room, with the poor students at the back looking across a vast sea of fellow students to the tiny speck that is the teacher, far, far away at the other end. But, of course, we can always negate this problem by using VR’s ability to bend reality. In a physical lecture hall there is a ‘sweet spot’. It is in the centre of the room, a few rows in front of the podium. Those that occupy this sweet spot learn what is being taught better than those positioned elsewhere. Obviously, in a physical hall or classroom only a few can occupy this ideal position, but as we saw in the example of sharing the same space as an expert instructor, in VR everybody’s POV can be rendered to ensure they occupy the best seat.
If there were a vast lecture hall of a million students, it would not only be those learning who would struggle. The teacher, too, would find it very hard to effectively deliver a lecture to so many people. Dozens of eye-contact experiments have shown that when a teacher looks at a student, that increases the chances of the student learning what is being taught. A well-trained lecturer will take care to spread his or her gaze around the audience rather than focus on some while ignoring others. But when you are dealing with an audience of a hundred or more, spreading your attention evenly means that, on average, each student has eye contact with the teacher for only one percent (or less) of the time.
A teacher’s avatar, though, can devote its full attention to every single person simultaneously. How? By tailoring the information sent to each student’s computer, which renders the avatar they are learning from. From the perspective of student A, it is as if he has the best seat in the room and plenty of eye contact with the teacher, while simultaneously students B, C and so on have precisely the same experience. Blascovich and Bailenson have tested whether lectures given by teacher avatars with ‘augmented gaze’ really do teach more effectively, and found that attendees whose lecturer had the ‘magic’ ability to devote his or her full attention to many individuals simultaneously retained more information than those learning from an avatar with no such ability. It may also be worth noting that not a single student has ever detected that the attention they were receiving was not genuine.
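The mechanics of augmented gaze can be sketched in a few lines: each student’s machine renders its own frame, and in each frame the teacher’s head is oriented towards that particular student. The function names and data structures below are assumptions for illustration, not drawn from any real VR framework.

```python
# Illustrative sketch of 'augmented gaze': every client renders the
# teacher's avatar as if it were looking at the local student.
import math

def gaze_toward(teacher_pos, student_pos):
    """Yaw angle (radians) that turns the teacher's head toward a student."""
    dx = student_pos[0] - teacher_pos[0]
    dz = student_pos[1] - teacher_pos[1]
    return math.atan2(dx, dz)

def render_frames(teacher_pos, students):
    """Per-client view: every student's frame grants full eye contact."""
    return {name: {"teacher_yaw": gaze_toward(teacher_pos, pos),
                   "eye_contact": True}
            for name, pos in students.items()}

frames = render_frames((0.0, 0.0), {"A": (1.0, 1.0), "B": (-1.0, 1.0)})
# Every student simultaneously perceives the teacher attending to them,
# because each frame is computed independently on that student's machine.
```

The key design point is that nothing is shared between clients: ‘eye contact with everyone at once’ is simply many locally rendered views, each consistent only with its own observer.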
VR can not only give teachers’ avatars superpowers of attention, but also help the person behind the avatar lecture more effectively. The hardware tracks where your gaze is directed and can therefore, in principle, alert you should you be ignoring sections of the audience for too long. Blascovich and Bailenson achieved this with an algorithm that caused students to literally start fading from view if they were being ignored. Using this visual aid, teachers ignored attendees at the far edges of the room for only ten percent of the time. This was a substantial improvement over teachers who did not have their behaviour brought to their attention by a visual aid: they ignored people on the periphery for approximately forty percent of the time. What is perhaps most encouraging is the follow-up study led by Peter Mundy, who studies autism at the University of California, Davis. When autistic children attended virtual classes which used ‘fading classmates’, they looked others in the eyes in a similar manner to non-autistic children.
Another academic, Albert “Skip” Rizzo (a psychologist and researcher at the Institute for Creative Technologies at the University of Southern California), uses the data gathered by VR’s tracking technology to identify children with ADHD. Rizzo created a virtual elementary-school classroom in which several distracting events occur during a lesson. Children with ADHD exhibit head and gaze movements quite different to those without the condition: their gaze wanders frequently, whereas people without ADHD mostly focus on the teacher. With VR tracking technology monitoring movements and highlighting behaviour, it takes only a few minutes to diagnose pupils with ADHD.
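At its simplest, the kind of signal such a system looks for can be expressed as the fraction of gaze samples that land on the teacher rather than on distractions. The sketch below is a toy illustration of that idea only; the threshold is invented and is in no sense a clinical criterion.

```python
# Hedged sketch: summarizing gaze-tracking logs as a single attention
# statistic. The 0.6 cut-off is an arbitrary assumption for this toy.

ON_TEACHER_THRESHOLD = 0.6

def attention_fraction(gaze_targets):
    """Share of gaze samples directed at the teacher."""
    if not gaze_targets:
        return 0.0
    return gaze_targets.count("teacher") / len(gaze_targets)

# A hypothetical log: mostly on the teacher, occasionally wandering
log = ["teacher"] * 7 + ["window", "clock", "door"]
frac = attention_fraction(log)            # 0.7 for this log
flagged = frac < ON_TEACHER_THRESHOLD     # not flagged in this case
```

A real system would of course use far richer features (head movement patterns, response to scripted distractors) rather than a single ratio, but the principle is the same: the tracking data that VR needs anyway doubles as diagnostic data.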
The ability of VR to track and record every action, utterance and gesture made by a student could potentially provide useful information about how well he or she is learning. No teacher could possibly subject a person’s facial expressions, tone of voice and micro-expressions to the level of scrutiny that VR technology can, and if the student were to find themselves being examined so closely by a human it would probably harm their concentration. But VR technology can collect orders of magnitude more informative data points than standard midterm and final exams, and do so in a completely discreet way. Using such data, it would be possible to determine with much greater precision which aspects of a lesson somebody is having difficulty with. People do not all share the same learning styles, and it would probably be very much easier to tailor virtual classrooms to complement individuals’ specific strengths and weaknesses than physical ones.
So far, we have been discussing classrooms in VR, but it would hardly be using VR to its full potential if all students did was attend virtual classrooms and lectures. With VR, history lessons need not just consist of listening to an avatar lecture about life in Tudor England; students could actually experience it for themselves by spending time in the court of King Henry VIII, or in a hamlet from that period, designed to be as authentic as our best historians and archaeologists can make it. Students learning about biology could be shrunk to the cellular or molecular level and take a ‘fantastic voyage’ through a body, witnessing events like cell division, again represented as accurately as our best scientific knowledge will allow.
The potential for VR to make use of learning methods beyond the traditional classroom was highlighted by Chris Dede at Harvard, who has created several VR learning scenarios. One of these, River City, required students to figure out why people were getting sick. Discovering the cause entailed talking with townsfolk, hospital staff, university scientists and other virtual inhabitants, and sharing the information gathered with fellow students. According to Dede, when students have first-hand experiences provided by being immersed in a virtual town with a disease outbreak, they reach a much fuller understanding of the relationships among causes and effects than they do in traditional classroom settings. One reason why is that students find the virtual experience much more immersive and engaging.
A branch of psychology known as ‘embodied cognition’ takes the perspective that knowledge is aided by peripheral bodily actions such as postures and gestures. To give one example of this phenomenon, Professor Michael Spivey at Cornell determined that a set pattern of eye movements focused learners’ attention more efficiently, aiding them in solving a particularly difficult brain teaser. If we assume that VR technology will one day interface directly with the brain and body, then during any learning task avatars’ peripheral movements could be purposefully controlled, with the human learners feeling those movements as if they were actually performing them. According to Blascovich and Bailenson, “if repetition of movements is crucial, then learning could be improved automatically and unconsciously. Learning could take place… even during naps, because the machine is controlling one’s motor movements”.
VR is not exactly a new thing in the world of work. Of all the VR systems out there, the one people are most familiar with is probably the flight simulator. Flight simulators consist of an enclosed unit whose interior matches the cockpit of an airplane. The unit itself is mounted on hydraulics that move it so as to accurately recreate the movements an aircraft can make. As the pilot operates the controls, force-feedback provides realistic tactile sensations, and the views outside the windows (which are actually monitors) show a virtual landscape rendered to display precisely what the pilot would see at that point in the flight. All of this combines to produce a simulation of flying so accurate that it can be used for ‘zero flight time’ training, meaning commercial pilots complete all of their training in simulators, with no need to fly the real thing until their first commercial flight.
The automobile industry has also been a user of VR. As you may well imagine, there are dramatic savings to be made when one moves from building physical prototypes to playing around with virtual vehicles. Ford’s Vice President of Engineering reckoned that VR saves six months in product development time. The Ford VR centre is an arm of the company’s product development division, and at IEEE VR 2009 its manager, Elizabeth Baron, described several ways in which VR has been used to help design and test automobiles. All vehicle manufacturers have to deal with ‘human scaling’, which basically means fitting cars to people, taking into consideration specifications such as interior size, seat ratio, and the placement of the steering wheel, pedals and other components. Obviously, using trial and error to get human scaling right is very much cheaper when done in VR, and it also allows one to do things that would not be possible in the physical world. For example, Ford’s VR centre was able to test whether a particular sun visor would block glare for drivers of different heights by repositioning the sun.
Perhaps the greatest potential for cost savings lies in the form of market research known as test stores. A test store is, as its name suggests, a mock-up of a supermarket or some other store which companies use for focus testing. One company that uses VR test stores is the Fortune 500 corporation Kimberly-Clark, which uses virtual stores to design and test product placement, layouts, displays, and other aspects of a store that can affect shopper preferences. When it comes to market data collection, VR proves very effective because it can capture so much useful information: it can record exactly where customers walk, where they are looking, what they pick up and what they purchase. Once VR headsets become commercially available, companies that use virtual test stores will have a preferable alternative to traditional methods of finding focus groups. The way this is usually done is that market researchers go to a mall or some other place where people congregate, single out those who appear to fit the demographic of interest, and put together a sample group of ten or so people. But once VR becomes widespread, there will potentially be a much larger pool of consumers, one that is more closely representative of the actual population of interest. Furthermore, these consumers could be sampled randomly, which would help ensure the statistics gathered are more accurate.
If it were a physical test store being built, it would of course be impractical to construct more than a few. But VR test stores impose no such constraints, making it possible in principle to design products and stores individually tailored to every consumer. Shopping malls of the future might be the result of thousands of variations sampled by millions of focus groups.
As we shop in the stores of the future, it is possible that we will be served by assistants that use VR to give them superpowers. We saw in the section on education how VR can augment a teacher’s ability to engage with an audience; it can also be used to make salespeople more effective. We may one day be served by avatars employing the ‘Chameleon Effect’, a term coined by social psychologist Tanya Chartrand following experiments demonstrating that people being interviewed preferred the actors who (unbeknown to them) mimicked their gestures.
Blascovich and Bailenson also conducted experiments on the effectiveness of mimicry, this time using agents rather than actors (an agent is an avatar controlled by an AI rather than a human). In one condition, the agent mirrored the head movements of participants about four seconds after they occurred. In the other condition, the agents’ head movements were not repeats of the participants’. Fewer than three percent of participants were aware that they were being mimicked, and the mimicking agents were rated as more persuasive, credible, and trustworthy. Blascovich and Bailenson commented that “a virtual car sales agent might be programmed to mimic potential car buyers’ movements in a digital showroom”. A human salesperson could do this as well, if they took the time to learn the acting techniques used in Chartrand’s experiments. But avatars and agents can do something no human can: mimic several people at once. This would be achieved in a similar way to how a virtual lecture hall can give many people the same seat: the user’s computer rendering the avatar or agent tailors the viewpoint, so each person interacts with a salesperson uniquely tailored to be persuasive to them.
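The delayed-mimicry agent described above can be sketched as a simple fixed-length buffer: the agent records the participant’s head pose every frame and plays back the pose from four seconds earlier. The frame rate, pose format and class names below are assumptions for illustration, not details from the original experiment.

```python
# Sketch of a delayed-mimicry agent: replay the participant's head pose
# after a fixed delay. A deque acts as the four-second buffer.
from collections import deque

DELAY_SECONDS = 4.0
FPS = 60
BUFFER_FRAMES = int(DELAY_SECONDS * FPS)  # 240 frames of delay

class MimicAgent:
    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_FRAMES)

    def step(self, participant_pose, neutral_pose=(0.0, 0.0, 0.0)):
        """Record the participant's pose; emit the pose from 4 s ago."""
        # Until the buffer fills, hold a neutral pose
        full = len(self.buffer) == BUFFER_FRAMES
        out = self.buffer[0] if full else neutral_pose
        self.buffer.append(participant_pose)
        return out

agent = MimicAgent()
poses = [(i * 0.01, 0.0, 0.0) for i in range(300)]  # simulated head yaw
outputs = [agent.step(p) for p in poses]
# For the first 240 frames the agent sits neutrally; from frame 240 on,
# it replays the participant's movements with a 4-second lag.
```

The four-second lag matters: immediate mirroring would be obvious, whereas a delayed echo registers below conscious awareness, which is presumably why so few participants noticed it.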
Before trying out for a job, perhaps people will hone their interview techniques in virtual settings. The company SiMmersion provides one such virtual interview room, along with ‘Molly’, an agent that can draw from a script of hundreds of standard interview questions. The replies she receives influence her mood, with good choices making her more encouraging and poor ones making her more blunt and abrupt. The interview can be set to several difficulty levels, allowing the challenge to rise as one’s interviewing skills increase. In a virtual setup like this, mistakes that could cost one a potential job become useful guides for improvement. Par for the course with VR, the system records everything so it can be replayed for review, and Molly can be personalized to suit individual educational backgrounds, work histories, and other personal details.
So, there we go. A few ways in which VR has been used outside of gaming. Makes you wonder what other potential uses await discovery once VR finally becomes mainstream, doesn’t it?
“Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution”, Blascovich, J. & Bailenson, J.
“Stigma study”: Blascovich, J., “(Virtual) Reality, Consciousness and Free Will”, in Baumeister, R. & Vohs, K. (eds), Free Will and Consciousness: How Might They Work? (New York: Oxford University Press, 2010), 172-190.
“Tai-chi training”: Bailenson, J. N., Patel, K., Nielsen, A., Bajcsy, R., Jung, S. & Kurillo, G., “The Effect of Interactivity on Learning Physical Actions in Virtual Reality”, Media Psychology, 11 (2008), 354-376.
“Phobia treatment”: Riva, G., Wiederhold, B. K. & Molinari, E., Virtual Environments in Clinical Psychology and Neuroscience: Methods and Techniques in Advanced Patient-Therapist Interaction (Amsterdam: IOS Press, 1998).
“Burns victims”: Hoffman, H. G., “Virtual-Reality Therapy”, Scientific American, 291 (2) (2004), 58-65.
“Made a racist”: Groom, V., Bailenson, J. N. & Nass, C., “The Influence of Racial Embodiment on Racial Bias in Immersive Virtual Environments”, Social Influence, 4 (1) (2009), 1-18.
“Augmented gaze”: Bailenson, J. N., Beall, A. C., Blascovich, J., Loomis, J. & Turk, M., “Transformational Social Interaction, Augmented Gaze, and Social Influence in Immersive Virtual Environments”, Human Communication Research, 31 (2005), 511-537.
“ADHD aid”: Parsons, T. D., Bowerly, T., Buckwalter, J. G., et al., “A Controlled Clinical Comparison of Attention Performance in Children with ADHD in a Virtual Reality Classroom Compared to Standard Neuropsychological Methods”, Child Neuropsychology, 13 (2007), 363-381.
“River City”: Dede, C., “Immersive Interfaces for Engagement and Learning”, Science, 323 (2009), 66.
“Eye movement and problem solving”: Grant, E. R. & Spivey, M. J., “Eye Movements and Problem Solving”, Psychological Science, 14 (2003), 462.
“Interview via avatars”: http://simmersion.com