Questions about interaction in virtual reality come up more and more often. The topic has grown in relevance because it is tied directly to how VR functions: interaction is as important to the user's experience as the headset technology itself, and both are critical to deepening immersion.
As soon as the first Oculus headset, the Rift DK1, appeared and a new wave of interest in VR swept the world, questions about input systems gained urgency. The keyboard and mouse turned out to be practically useless, while head tracking and gaze alone proved insufficient for interacting with interfaces and environmental elements.
In mobile VR, tools like the touchpad on Samsung's Gear VR headset and wireless gamepads have caught on. But PC and console VR HMDs, beyond supporting classical gamepads, offer far more options for input systems. The "big three" — Oculus, HTC/Valve and Sony — will all ship their HMDs with motion-sensitive controllers.
Oculus and Sony focused on developing a tracking system fit for a user in a sitting or standing position, while Vive’s room-tracking system works within an area up to 15 by 15 feet, allowing the user to move within this space.
Many people may wonder why the major VR players offer consumers motion-sensitive controllers rather than alternatives such as optical sensors that capture hand and finger motion, gloves and various other prototypes. Let's see.
Requirements for tracking and input systems
Firstly, what are the must-haves for a control system in VR? If we had to describe the most important requirement in a single word, it would be "naturalness". VR control systems should be as close as possible to the natural way we interact with our surroundings. Naturalness also implies a low barrier to entry, because the principles of how things work are understood intuitively.
Next comes synchronization. It is closely connected with naturalness and is extremely important. By synchronization we mean the matching of a user's actions in the real and virtual worlds. This aspect concerns not just the UI but the whole UX and the user's sense of presence in virtual reality.
It is tracking systems, combined with good-quality HMDs, that make this possible. The better the tracking, the stronger the immersion and the richer the opportunities for interacting with an environment. Synchronization, however, is not only a question of tracking range and accuracy; stability is equally important.
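One common way to improve the perceived stability of a tracker is to smooth the raw samples before using them. As a minimal sketch (the function name and smoothing factor are our own illustration, not any vendor's API), exponential smoothing trades a little responsiveness for much less jitter:

```python
# Hypothetical sketch: exponential smoothing of noisy 3D tracker samples.
# A higher alpha follows the raw signal more closely (more responsive);
# a lower alpha suppresses jitter more aggressively (more stable).

def smooth_positions(samples, alpha=0.3):
    """Return exponentially smoothed copies of raw (x, y, z) tracker samples."""
    if not samples:
        return []
    smoothed = [samples[0]]
    for x, y, z in samples[1:]:
        px, py, pz = smoothed[-1]
        smoothed.append((
            px + alpha * (x - px),
            py + alpha * (y - py),
            pz + alpha * (z - pz),
        ))
    return smoothed

# A jittery but stationary controller converges toward its true position.
raw = [(1.0, 0.0, 0.0), (1.1, 0.0, 0.0), (0.9, 0.0, 0.0), (1.05, 0.0, 0.0)]
print(smooth_positions(raw)[-1])
```

Real tracking stacks use far more sophisticated filters (and sensor fusion), but the trade-off illustrated here, responsiveness versus stability, is exactly the one synchronization depends on.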
So, to answer the question of why the big VR players offer the tracking and input systems described above: at present these systems provide the most universal, natural, intuitive and stable interaction with a virtual environment, which already makes them a sound product decision for consumers.
Major doubts about controllers offered by the “big three”
Controlling virtual worlds with bare hands, i.e. without holding any additional devices, seems like the most natural form of interaction. So it may appear that the two hand-held controllers currently offered by the platform owners would detract from natural interaction with an environment.
However, a controller in the hand is nothing more than a tool, like a hammer, a car's steering wheel or a computer mouse. At a certain point such a tool becomes part of the user. For example, once a person has been driving for a while, they begin to actually feel the car's dimensions; their sense of their body extends to the car's edges.
Some research shows that when monkeys use certain tools, their brain's neural networks change: the tool integrates into their body schema (a constantly updated map of the body's shape and limbs), and the model of the hand extends to the tip of the tool.
When we got the HTC Vive and began testing it, we immediately understood why authors of many demos often display virtual models of real controllers, creating full synchrony between real and virtual interactions. Holding a real controller and seeing its virtual analog allows for a deep level of immersion.
In one panel video from Oculus Connect 2, a developer shared his experience with the controllers. After testing a demo, while still wearing the HMD, he set the real controllers down on a virtual table, having completely forgotten that no table existed in reality. When he let go, the controllers hit the floor and broke.
In other cases, different objects, such as a hand, a gun, or even a ketchup bottle, can be displayed in place of the virtual controllers. And the subjective difference between displaying a virtual controller and some other object turns out to be weaker than it may initially seem. At a certain moment you forget that you're holding a controller and simply operate virtual objects with your hand. The controller becomes part of you.
Now let's explore the existing alternatives to the controllers discussed above. How good are these analogs from the perspective of naturalness and synchronization, and, more importantly, how well do they work for a wide range of consumers?
Optical hand and finger sensors
There are several problems with using optical hand and finger sensors for motion capture.
When interacting with the virtual environment, you must constantly keep your hands in front of the sensor, which is essentially in front of your face, a rather limited area. Besides being unnatural, this is physically tiring over long periods. Another problem is that we often use our hands without looking at them at all: we don't see our off hand when drawing a bowstring, for example, yet the system simulating the bow's position must know where that hand is at any given moment.
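The field-of-view limitation can be made concrete with a small sketch. The function, the assumed 120° field of view and the coordinate convention below are all our own illustration, not any real sensor's specification; the point is simply that a hand outside the sensor's cone produces no tracking data at all:

```python
# Hypothetical sketch: an optical hand sensor only "sees" hands inside a cone
# in front of the headset. A hand drawing a bowstring, off to the side, falls
# outside that cone and cannot be tracked.

import math

SENSOR_FOV_DEG = 120.0  # assumed total field of view of the hand sensor

def hand_is_trackable(hand_dir, fov_deg=SENSOR_FOV_DEG):
    """hand_dir: unit vector from sensor to hand in sensor space (z = forward)."""
    # Angle between the hand direction and the sensor's forward axis.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, hand_dir[2]))))
    return angle <= fov_deg / 2

print(hand_is_trackable((0.0, 0.0, 1.0)))  # straight ahead: inside the cone
print(hand_is_trackable((1.0, 0.0, 0.0)))  # 90 degrees to the side: outside
```

Everything outside the cone has to be guessed or frozen, which is exactly why unseen-hand interactions like the bowstring are so hard for sensor-only systems.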
Current hand-tracking algorithms are also far from ideal: at some viewing angles the system can barely recognize the hands, and constant tracking problems raise the barrier to entry and worsen the overall experience of using such devices. It's possible that the new version of Leap Motion's SDK has already solved this; if so, it would be interesting to try.
Gloves
The concept of using gloves as a VR control system evokes the nostalgia of cyberpunk literature and cinema. But while gloves can resolve many of the issues with optical sensors, they bring other problems of UX and everyday use: people have different hand sizes, there are questions of hygiene, extra setup time is needed before the gloves can be used, wearing gloves continuously in a warm room is uncomfortable, and wires run along the user's hands.
Who knows, though… Ernest Cline's characters in the famous "Ready Player One" use haptic gloves every day, and no one seems to complain.
Movement in virtual space
One of the most significant challenges for VR input systems today is the user's movement through virtual space. Aside from the limited walking area available to HTC Vive users, very few systems can synchronize real and virtual body movement at all. It is exactly this lack of synchronization that leads to the so-called "sensory dissonance" that causes dizziness when moving in VR.
The problem is so serious that in many projects, especially those built specifically for VR, the only option has been to strongly limit or remove free movement in virtual space altogether.
This doesn't apply to situations where the user is, for example, placed inside a virtual car or a huge robot; it concerns the simulation of completely free movement (the mechanics of a classical FPS).
There are various ways to soften the unpleasant effects by changing the mechanics of movement: moving toward the viewpoint, teleporting from point to point, visualizing the movement trajectory, using slow-motion effects, and more exotic tricks. The most effective options, however, remain giving up free movement entirely or putting the player into a "cabin".
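Teleportation works precisely because it removes the intermediate frames of motion that the vestibular system objects to. A minimal sketch of the idea (the function, the comfort-range limit and the 2D coordinates are our own simplification, not any engine's API): instead of sliding the camera, the player's position jumps instantly to a validated target.

```python
# Hypothetical sketch of point-to-point teleportation: no smooth camera slide
# (which causes sensory dissonance), just an instantaneous jump to the target,
# typically masked in a real game by a short fade to black.

import math

MAX_TELEPORT_DISTANCE = 5.0  # assumed comfort/range limit, in meters

def teleport(player_pos, target_pos, max_distance=MAX_TELEPORT_DISTANCE):
    """Snap the player (x, z ground coordinates) to target_pos if in range."""
    dx = target_pos[0] - player_pos[0]
    dz = target_pos[1] - player_pos[1]
    if math.hypot(dx, dz) > max_distance:
        return player_pos  # target too far: reject, keep current position
    return target_pos      # accepted: jump with no intermediate frames

print(teleport((0.0, 0.0), (3.0, 4.0)))  # 5 m away: accepted
print(teleport((0.0, 0.0), (6.0, 4.0)))  # ~7.2 m away: rejected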
There are, of course, solutions that imitate natural walking, e.g., the Virtuix Omni. Such devices may find an audience in the future, but in their current state none of the existing locomotion platforms are suitable for a broad consumer audience or for bundling with consumer VR HMDs. Bulky installation, the need to remove footwear or wear special footwear on the platform, and the extra setup time before playing all limit their use, and these are among the most obvious reasons the platforms, as they stand, will not be widely adopted.
Avatars
Another doubt about current input systems concerns the representation of the player's avatar in virtual reality. Chief among the issues are the limited means of visualizing an avatar correctly.
Here, as with movement systems, it seems more effective to adapt to the current restrictions than, for example, to build a more complicated and consequently more expensive tracking system.
Developers of the sandbox game “Toybox” for Oculus Rift have thoroughly addressed the issue of displaying avatars.
Does the user lose anything from this? That depends on how it is presented. In a game where the user plays a ghost, for example, the visualization restrictions are justified by the narrative, and the problem disappears.
In our article about developing a VR Escape room we give a similar example of how Leap Motion’s potential problems were justified by the means of the narrative.
In many games the player's character, like their real body, shouldn't move freely. Placing the avatar in a car's driver's seat, for example, partially does the trick. And in the Toybox example, the game's conceit (the avatar looks like a hologram), supported by the chosen visual style, works quite well. The lack of realism, and of most of the avatar's body, is also made believable by realistic head and hand movements, thanks to high-quality tracking.
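The Toybox-style approach can be summarized as: render only the parts of the avatar that real tracking data can back up. A minimal sketch (the tracker names and the mapping below are our own illustration, not the Oculus SDK):

```python
# Hypothetical sketch: rather than guessing a full-body pose, show only the
# avatar parts that correspond to actual tracked devices (head and hands),
# as in hologram-style avatars.

def visible_avatar_parts(tracked):
    """Map the set of active trackers to the avatar parts shown honestly."""
    part_for_tracker = {
        "hmd": "head",
        "left_controller": "left_hand",
        "right_controller": "right_hand",
    }
    return [part for tracker, part in part_for_tracker.items() if tracker in tracked]

print(visible_avatar_parts({"hmd", "left_controller", "right_controller"}))
print(visible_avatar_parts({"hmd"}))  # one controller asleep: head only
```

Everything the system would have to invent (torso, legs, elbows) is simply omitted, which is cheaper than full-body tracking and, as Toybox shows, still reads as convincingly human.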
Here is a great lecture on input systems and many related questions from Oculus Connect 2.
Summing up, we don't claim that the alternatives mentioned above are unpromising. The point is that, as of 2016, optical hand-capture sensors, haptic gloves and other input devices are inferior to controllers like Oculus Touch in a number of parameters that matter to the end user. The existing analogs still have a set of issues to resolve before they become as universal and convenient as the controllers that will ship with the consumer versions of the major HMDs (which are not ideal either). There is a high probability these problems will be solved for some of the analogs, and our bet is on optical hand and finger tracking. We are only at the beginning of a long and incredibly interesting road to full immersion.
Denis Tambovtsev, Natasha Floksy and Olga Peshé