What you’ll learn:
- The technology hurdles that must be overcome.
- How positional tracking and virtual object placement impact development.
- How 3D scanning, time of flight, and HMIs will help solve key issues.
The metaverse is expected to transform our very concept of virtual reality. Before it can do that, though, it has some very real-world problems to solve.
Interoperability is a major issue, as numerous mega-players compete to establish platforms and operating standards. Display and optics solutions need to become more practical. Without solving the problems of size, weight, and appearance, VR headsets will continue to be used mostly for gaming and specialized applications.
There are technical issues to overcome as well. From a purely operational standpoint, two of the biggest challenges for developers are positional tracking and placement of virtual objects.
The metaverse figures to be orders of magnitude more complex than traditional virtual environments. It will include worlds that are purely imaginative, as well as those that mirror their real-world counterparts in clarity, detail, and size. For the metaverse to be useful, users inhabiting a virtual environment need to know exactly where they’re positioned and where they’re moving, regardless of how far or how fast they travel.
Virtual object placement is the other critical challenge. Metaverse applications will need to understand the surrounding environment to accurately place items. Together with positional tracking, precise object placement will be essential. Satisfying one or the other isn’t enough—for the metaverse to succeed, it must accomplish both.
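To make that coupling concrete, here’s a minimal sketch of the frame-by-frame math: a virtual object anchored in world coordinates is re-expressed in the viewer’s coordinate frame using the tracked head pose. The poses, values, and NumPy representation are illustrative assumptions, not any particular metaverse SDK’s API.

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Headset pose in world coordinates, as reported by positional tracking.
head_in_world = make_pose(np.eye(3), np.array([2.0, 1.6, 0.5]))

# A virtual object anchored to a real-world surface (say, a tabletop).
object_in_world = make_pose(np.eye(3), np.array([2.0, 0.9, 1.5]))

# To render the object stably, re-express it relative to the viewer each frame.
# Any error in the tracked head pose shows up directly as placement error,
# which is why tracking and placement must be solved together.
object_in_head = np.linalg.inv(head_in_world) @ object_in_world
print(object_in_head[:3, 3])  # object position in the viewer's frame
```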
3D Tech Will Be Vital
Fortunately, hardware innovations are underway to support metaverse tracking and positioning. One breakthrough area is the accurate digital duplication of real people and objects, a need addressed by 3D scanning.
3D scanning can be achieved with depth cameras based on different technologies. Take structured light, for example: a known light pattern is projected onto an object and read back by a 3D imaging camera. The camera detects how the pattern is distorted by differences in distance and uses those distortions to reconstruct the object’s shape with high fidelity. The result is a digital duplicate of a person or object that can be inserted into the virtual environment.
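The depth recovery itself comes down to triangulation. Treating the projector-camera pair as a calibrated stereo rig, the distance to each projected feature follows from how far it shifts in the camera image; the focal length and baseline below are hypothetical calibration values.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth for a structured-light pair (projector + camera).
    disparity_px is each feature's shift from its expected image position;
    nearer surfaces distort the pattern more, producing larger shifts."""
    return focal_length_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Hypothetical rig: 1,000-pixel focal length, 7.5-cm projector-camera baseline.
shifts = np.array([50.0, 25.0, 10.0])  # observed pattern shifts, in pixels
print(depth_from_disparity(shifts, 1000.0, 0.075))  # -> [1.5, 3.0, 7.5] meters
```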
Another positioning innovation is 3D time of flight (ToF), which lets metaverse applications track real-world objects for accurate placement as they move. A ToF camera emits light pulses into a scene; a sensor reads the reflections, and the round-trip time of each pulse gives the distance to whatever reflected it. With ToF, moving objects can not only be recognized but also digitally integrated into the virtual environment. Anything from tools and vehicles to animals and even individuals can be identified.
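For pulsed (direct) ToF, the distance math is simple, as the sketch below shows; the hard engineering lives in the sensor, which must resolve round-trip times of nanoseconds and below.

```python
C = 299_792_458.0  # speed of light, in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a light pulse's round-trip time. The pulse travels
    out to the object and back, so the one-way distance is half the path."""
    return C * round_trip_s / 2.0

# A pulse returning after ~13.3 ns puts the reflecting object about 2 m away.
print(tof_distance_m(13.34e-9))  # ~2.0
```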
Unlike LiDAR (light detection and ranging), another type of 3D scanning, ToF depth-sensing cameras capture the full field of view at once. Conventional LiDAR relies on lasers that scan the scene point by point, top to bottom and left to right. Producing full-field coverage with LiDAR is hugely expensive, making it cost-prohibitive for most metaverse applications.
Display technologies are evolving to support 3D as well. Many of the device types metaverse users are expected to rely on have been around for years; at this point, though, most demonstrate significant limitations.
Google Glass and Snap Spectacles have had, at best, mixed success due to cost, functionality, and privacy concerns. And even the most advanced VR headsets have only a narrow field of applications. While popular with gamers and other niche users, they shut off the wearer from the real world, making it difficult to interact with anything other than the virtual setting.
Ergonomics and Security
Satisfactory performance and ergonomics will be essential to broad metaverse acceptance. Advances in human-machine interfaces (HMIs), data rates, and rendering, among other areas, will be needed to create seamless interactions with the metaverse.
For displays, most experts agree that 60 pixels per degree of field of view is the minimum for video rendering that matches real-world visual acuity, a requirement that only the best optical devices currently meet.
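Quick arithmetic shows why that bar is high. Sustaining 60 pixels per degree across a headset’s full field of view implies per-eye panel resolutions far beyond mainstream displays; the 100-degree FOV below is an assumed figure for illustration, and the flat estimate ignores lens distortion.

```python
def min_horizontal_pixels(fov_degrees: float, ppd: float = 60.0) -> float:
    """Panel width (per eye) needed to hold a given pixels-per-degree
    density across the full horizontal field of view."""
    return fov_degrees * ppd

# An assumed 100-degree horizontal FOV at 60 ppd needs a ~6,000-pixel-wide
# panel per eye -- well beyond the ~2,000-pixel widths common today.
print(min_horizontal_pixels(100.0))  # 6000.0
```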
Other types of HMIs will soon undergo an incredible transformation as well. In November 2021, Meta’s Reality Labs division showcased a prototype haptic glove with ridged plastic pads that let the wearer “feel” virtual surfaces. With the bar being raised for sound, sight, and touch, metaverse denizens are sure to enjoy increasingly immersive experiences.
Yet even the most advanced tech makers have found that comfort and capability are vital to acceptance of wearable devices. VR/AR (augmented reality) eyewear will need to be light enough to wear all day, attractive enough to be appealing, and capable of delivering a dazzling user experience, all at affordable price points.
One breakthrough, dubbed surface-relief waveguide technology, may satisfy this need by providing see-through waveguides on high-refractive-index glass. This solution supports imaging applications in VR and MR (mixed reality) displays as well as 3D sensing and even automotive head-up displays.
Finally, developers must address privacy and security concerns. Though data security will present a challenge, 3D scanning can help mitigate privacy risk. Unlike 2D systems that record facial images, 3D scanners store only anonymous “point cloud” data for authentication. No image or other personally identifiable record is created.
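As a toy illustration of the idea (production systems use far richer descriptors plus liveness checks), the sketch below reduces a face’s point cloud to a few derived geometric quantities and authenticates by comparing those numbers. Nothing resembling an image is retained; the descriptor and tolerance are invented for this example.

```python
import numpy as np

def face_template(point_cloud: np.ndarray) -> np.ndarray:
    """Reduce an N x 3 point cloud to a small geometric descriptor.
    Only these derived numbers are stored -- never a facial image.
    (Toy descriptor: scale-normalized second moments of the cloud.)"""
    centered = point_cloud - point_cloud.mean(axis=0)
    normalized = centered / np.linalg.norm(centered, axis=1).mean()
    cov = normalized.T @ normalized / len(normalized)
    return cov[np.triu_indices(3)]  # six numbers summarizing 3D geometry

def matches(template: np.ndarray, probe_cloud: np.ndarray, tol: float = 0.05) -> bool:
    """Authenticate by comparing descriptors, not images."""
    return bool(np.linalg.norm(template - face_template(probe_cloud)) < tol)

cloud = np.random.default_rng(0).normal(size=(500, 3))  # stand-in for a scan
enrolled = face_template(cloud)
print(matches(enrolled, cloud))  # True: same geometry, no image stored
```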
Balancing Act
The next few years will surely be critical to realizing the metaverse. Innovators will benefit from viewing the journey as a series of small steps, with each win building on the last breakthrough.
For each piece of meta-hardware, developers will need to find a balance point among form factors, data quality, computing power, power consumption, and bandwidth limitations. Overcoming these obstacles will make the metaverse—the ultimate virtual experience—a practical reality.