Towards the Holodeck Experience: Seeking Life-Like Interaction with Virtual Reality

  • Vergence distance is the distance the brain perceives when the muscles of the eyes move to focus on a physical location, or focal plane. When that focal plane sits at a fixed distance from your eyes, as with the screen in your VR headset, the brain does not expect you to detect large changes in distance; after all, your eye muscles are fixed on something physically attached to your face, i.e. the screen. But when the visual content is produced so as to simulate the illusion of depth (especially large changes in depth), the brain recognizes a mismatch between the distance information it is getting from your eyes and the distance it is trained to receive in the real world based on where your eyes are physically focused. The result? Motion sickness and/or a slew of other unpleasant effects. (A quick vergence-angle calculation follows this list.)
  • Motion parallax: As you, the viewer, physically move, let’s say walk through a room in a museum, objects that are physically closer to you should move more quickly across your field of view (FOV), while objects that are farther away should move more slowly. (See the parallax sketch after this list.)
  • Horizontal and vertical parallax: Objects in the FOV should appear different when viewed from different angles, whether the change in viewing angle comes from your horizontal or your vertical position.
  • Motion-to-photon latency: It is really unpleasant when you are wearing a VR headset and the visual content doesn’t change right away to accommodate the movements of your head. This lag is called “motion-to-photon” latency. To achieve a realistic experience, motion-to-photon latency must be less than 20 ms, and that means that service providers, e.g. cable operators, will need to design networks that can deterministically support extremely low latency. After all, from the time you move your head, a lot of things need to happen, including signaling the head motion, identifying the content consistent with that motion, fetching that content if it is not already available to the headset, and so on. (A rough latency-budget sketch follows this list.)
  • Support for occlusions, including the filling of “holes”: As you move through, or across, a visual scene, objects in front should block the objects behind them, and previously hidden areas should begin to reappear, consistent with your movements.
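
To make the vergence-accommodation mismatch concrete, here is a minimal sketch (not from the original post) that computes the vergence angle for a few simulated depths, assuming a typical interpupillary distance of 63 mm and a hypothetical 2 m focal plane for the headset screen. The wider the gap between the two printed angles, the stronger the conflict described above.

```python
import math

def vergence_angle_deg(distance_m: float, ipd_m: float = 0.063) -> float:
    """Angle between the two eyes' lines of sight when both fixate a point
    at distance_m, for an interpupillary distance ipd_m (63 mm is an
    illustrative typical value; actual IPD varies by person)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# Accommodation in a headset stays at the screen's fixed focal plane
# (assumed here to be ~2 m), while vergence tracks the simulated depth.
focal_plane_m = 2.0
for simulated_depth_m in (0.5, 2.0, 10.0):
    v = vergence_angle_deg(simulated_depth_m)
    a = vergence_angle_deg(focal_plane_m)
    print(f"simulated depth {simulated_depth_m:>4} m: "
          f"vergence {v:5.2f} deg, accommodation cue stuck at {a:5.2f} deg")
```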
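
The “closer objects move faster” rule of motion parallax can be quantified: for a viewer walking past an object, the object’s apparent angular speed at the moment it is directly abeam is simply the walking speed divided by its distance. The sketch below (again, mine rather than the post’s, with an assumed 1.4 m/s walking pace) shows how quickly that speed falls off with distance.

```python
import math

def angular_speed_deg_per_s(walk_speed_mps: float, distance_m: float) -> float:
    """Apparent angular speed of an object directly abeam of a viewer walking
    past it at walk_speed_mps, at perpendicular distance distance_m.
    At that instant the angular rate is v / d radians per second."""
    return math.degrees(walk_speed_mps / distance_m)

walk_speed = 1.4  # assumed casual walking speed, m/s
for d in (1.0, 3.0, 10.0):
    print(f"object at {d:>4} m sweeps across the FOV at "
          f"{angular_speed_deg_per_s(walk_speed, d):5.1f} deg/s")
```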
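
To see why 20 ms is such a demanding target, here is a toy motion-to-photon budget. Only the 20 ms figure comes from the text above; the stage names and millisecond values are purely illustrative assumptions. The point is that even modest-looking numbers for each step add up quickly, which is why networks need to deterministically support very low latency.

```python
# Illustrative motion-to-photon budget. Only the 20 ms target comes from the
# text above; the stage names and millisecond values are assumptions made up
# for this sketch, not measured figures.
BUDGET_MS = 20.0
stages_ms = {
    "sense head motion (tracking)": 2.0,
    "signal motion / request content": 4.0,
    "identify + fetch matching content": 7.0,
    "render the new viewpoint": 5.0,
    "scan out to the display (photons)": 3.0,
}

total = sum(stages_ms.values())
for name, ms in stages_ms.items():
    print(f"{name:<36} {ms:4.1f} ms")
print(f"{'total':<36} {total:4.1f} ms "
      f"({'within' if total <= BUDGET_MS else 'over'} the {BUDGET_MS:.0f} ms budget)")
```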
