Augmented and mixed reality: How best to implement mobility?

Article By : Brian Dipert

What are augmented reality and mixed reality, and how do they differ from virtual reality? Will they achieve mainstream acceptance, and in what implementation form(s)? Read on for the answers to these and other questions!

Back in December, I shared some thoughts on the current state and likely (IMHO, at least) future trajectory of virtual reality in its various implementation forms. This time around, I thought I’d focus on two other points along the altered-reality spectrum: augmented reality and mixed reality. The latter term, in common usage across the tech industry, has been specifically adopted by Microsoft; the concept’s also referred to as “merged” reality by Intel and “extended” reality by Qualcomm, for example.

First off, what’s the distinction between virtual, augmented, and mixed reality? The Wikipedia definition of mixed reality (which I commend to your attention if you’re at all interested in the next level of information) describes a “virtuality continuum” extending from a completely real environment on one end of the spectrum to a completely virtual environment on the other. Augmented reality and mixed reality (the latter also sometimes referred to as augmented virtuality, although others use “augmented virtuality” for a particular point on the continuum and “mixed reality” for the continuum as a whole … so confusing) are intermediary points along the spectrum between the fully real and fully virtual endpoints.

Here’s more from Wikipedia:

The definitions in the modern contemporary economy make the distinction between VR, AR and MR very clear:

  • Virtual reality (VR) immerses users in a fully artificial digital environment.
  • Augmented reality (AR) overlays virtual objects on the real-world environment.
  • Mixed reality (MR) not just overlays but anchors virtual objects to the real world and allows the user to interact with the virtual objects.

From an implementation standpoint, MR is more compute-intensive than AR and requires a more comprehensive “sensor fusion” arrangement. The latter provides the system with both “situational awareness” (via visible light image sensors, radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and “positional awareness” (via GPS receivers and inertial measurement units, for example) of both itself and the surrounding environment.
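To make the sensor-fusion point concrete, here’s the simplest textbook form of the technique: a complementary filter that blends a gyroscope (smooth but drift-prone) with an accelerometer (noisy but drift-free) to track one axis of device orientation. This is a minimal illustrative sketch; the function name, parameters, and weighting are my own, not drawn from any particular headset’s software stack:

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) with an accelerometer tilt estimate (degrees).

    The gyro term integrates smoothly but drifts over time; the accelerometer
    term is noisy but anchored to gravity. Blending the two with weight
    `alpha` yields a stable pitch estimate -- IMU sensor fusion at its
    most basic.
    """
    gyro_pitch = pitch_prev + gyro_rate * dt                  # integrate the gyro
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))  # tilt from gravity
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```

Run repeatedly with a stationary device tilted 45 degrees (equal x and z gravity components), the estimate converges to 45 degrees even if it starts from zero, while short-term gyro readings keep frame-to-frame motion smooth. Production headsets fuse many more inputs (cameras, depth sensors, magnetometers) with Kalman-style filters, but the principle is the same.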

Think of it this way … it’s one thing to place virtual information “on top of” real world images in an unchanging location and with unchanging characteristics, regardless of what the overlaid real-world view contains. Placing that virtual information in a specific location with specific dimensions, aspect ratio, color, brightness, and other attributes to correlate to objects in the real-world view, and dynamically updating the virtual information’s qualities as the real-world view changes over time, is a far more challenging task.
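The distinction above can be sketched in a few lines of code: a plain AR overlay can simply be drawn at a fixed pixel location, whereas an MR anchor lives at a world coordinate and must be re-projected through the tracked camera pose on every frame. The following toy pinhole-camera model is my own simplified illustration (a real system would also handle camera rotation, lens distortion, and occlusion):

```python
def project_to_screen(point_world, cam_pos, focal_px=800.0, cx=640.0, cy=360.0):
    """Project a world-space point into pixel coordinates using a pinhole
    camera model (camera looks down +Z with no rotation, for simplicity)."""
    x = point_world[0] - cam_pos[0]
    y = point_world[1] - cam_pos[1]
    z = point_world[2] - cam_pos[2]
    if z <= 0:
        return None  # anchor is behind the camera; nothing to draw
    return (cx + focal_px * x / z, cy + focal_px * y / z)

# A simple AR overlay draws at a fixed pixel location, regardless of
# what the camera sees:
hud_label_px = (50.0, 50.0)

# An MR anchor is a world coordinate whose pixel location must be
# recomputed each frame from the tracked camera pose:
anchor = (0.0, 0.0, 4.0)  # 4 m in front of the starting pose
print(project_to_screen(anchor, (0.0, 0.0, 0.0)))  # → (640.0, 360.0)
print(project_to_screen(anchor, (1.0, 0.0, 0.0)))  # → (440.0, 360.0)
```

When the camera (i.e., the user’s head) shifts one meter to the right, the anchored object correctly slides left in the view; the fixed HUD label never moves. Everything feeding `cam_pos` in a real device comes from the sensor fusion pipeline discussed earlier, which is exactly why MR is so much hungrier for sensors and compute than AR.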

Hence the word “mobility” in the title of this blog post: as with VR, it’s one thing (albeit not necessarily a particularly useful thing) to tether an AR or MR headset to a hefty computer that offloads the processing heavy lifting, but a much more difficult thing to craft a standalone mobile AR or MR device with long battery life, light weight, a compact form factor, sufficient compute, communications, and storage capabilities, and a market-palatable price tag.

After researching and pondering a bit, I’ve come up with four distinct platforms for implementing AR and MR, in each case exemplified by products already in the market (translation: there may be more in the future). My long-term prognosis for each platform option varies, as was the case with the three VR platforms I discussed back in December: computer- and game console-based, smartphone-based, and standalone.

Special-purpose headsets, smart glasses, and other devices
Examples here include Google’s Glass; Microsoft’s HoloLens (a second-generation version of which is reportedly en route); the much-ballyhooed Magic Leap One; and dedicated hardware such as 8tree’s dentCHECK.

In specialized industrial, education, and other focused applications that are willing to pay a sufficiently high price to cover limited-volume (translation: high) bill-of-materials costs while also ensuring the hardware and software developers a sustainable profit, I’m quite optimistic about the future of this particular platform option. Imagine, for example, a product development team using HoloLens headsets for collaborative design purposes, students using mixed reality to learn more about what they see as they’re dissecting an animal in the lab, or a technician virtually pulling up a maintenance manual for a piece of equipment being worked on.

It’s in broad (translation: all-important high-volume, since this is a consumer electronics blog, after all) usage scenarios that my enthusiasm wanes. Look, for example, at Google Glass; however much I admire the company for taking a chance on an unproven product category, Glass was a flop as a general-purpose consumer electronics device (as anyone who wore a set in public and got called a “Glasshole” will attest). Conversely, Glass is reportedly doing much better now that it’s been hardware-enhanced and refocused on vertical-market applications (although the jury’s still out here; see what happened to castAR and Meta).

In gaming-centric implementations like the Magic Leap One and the mixed reality (non-HoloLens, specifically) headsets from Microsoft partners such as Acer, Dell, HP, Lenovo, and Samsung, I’m not sure there’s sufficient differentiation from existing VR headsets to leave enough market room for additional entrants to reach critical mass. Admittedly, the overall market’s still embryonic, but HTC and Facebook’s Oculus have already made sizeable investments and gained notable market share and brand recognition as a result. Put one or a few outward-facing cameras on a VR head-mounted display (HMD), after all, as HTC is already doing, and you’ve got pretty much all the hardware necessary to also deliver augmented reality and (sensor fusion- and compute horsepower-permitting) mixed reality experiences.

The bottom line: IMHO, a dedicated-function AR or MR headset just doesn’t deliver a sufficient return to justify the necessary investment (fiscal, dignity, and otherwise) for the masses. And I’m not sure the situation will ever change, no matter how unobtrusive the hardware becomes, even if it shrinks to the size of a set of invisible contact lenses (accompanied by a matching shrinking price tag).

Enhanced standard headsets and glasses
While buying a piece of headgear just for AR or MR purposes may never become a widespread “thing,” what if you’re already wearing one? The Focals by North that I linked to in the previous paragraph, for example, are now not only $400 less expensive than they were only a month ago when publicly introduced at CES (admittedly not exactly a confidence builder for corporate longevity), but can also be fitted with prescription lenses for an additional $200.

If I’m already wearing prescription glasses, especially expensive non-insurance-reimbursable models, I might be persuaded to drop a couple more Benjamins on AR-augmented versions (the “killer app,” I’m telling you, is facial recognition that IDs the person you’re talking to, whose name you can’t recall). This assumes I like the look of the frames, of course, and that they work with lenses capable of correcting my presbyopia (or astigmatism, for example, for others). Admittedly, I’m a tech early adopter (translation: geek), but here are a few more examples of possible AR augmentation for gear already owned and in use:

[Continue reading on EDN US: Automotive HUDs and mobile devices]

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, a Senior Analyst at BDTI, and Editor-in-Chief of InsideDSP, the company's online newsletter.
