A recent teardown of mine showcases the third generation of Snap’s Spectacles product line. At least some of you might have finished that lengthy tome with a lingering question on your minds: “How do these compare against the first two Spectacles generations that preceded them?” Glad you asked! The evolution of the Spectacles product line has been rapid, although Snap CEO Evan Spiegel is under no delusions that the glasses will be mainstream any time soon, per a 2019 interview with The Verge:
Spiegel is playing a long game. He often says that AR glasses are unlikely to be a mainstream phenomenon for another 10 years—there are simply too many hardware limitations today. The available processors are basically just repurposed from mobile phones; displays are too power hungry; batteries drain too quickly. But he can see a day where those problems are solved, and Spectacles becomes a primary way of interacting with the world.
First-generation Spectacles were announced in September 2016 and began shipping two months later:
Initially sold for $129.99 solely through “pop-up” vending machines with little to no advance notice of availability and limited stock (all intended to boost “hype”), they included only one camera, in the vicinity of the wearer’s right temple. The similarly circular structure at the left temple was devoted to a ring of LEDs, intended to indicate to others around you that the glasses were powered up and image capture was in progress. At first, they only captured short video clips in a unique 115° circular format, accompanied by single-microphone-captured audio:
However, still image capabilities were subsequently added via a firmware update. To say they didn’t sell well would be an understatement; Snap eventually wrote down the inventory and terminated the first-generation product line…only to replace it with the premier second-generation model in April 2018, followed by two other 2nd-gen siblings later that same year:
The price rose slightly, to $149.99 (for the original design; $199.99 for the follow-on “Nico” and “Veronica” versions). The construction was still plastic, albeit (arguably) more stylish; the new design added a modicum of water resistance, and the permanent yellow rings around each temple’s camera-or-LED structure were ditched (though the LEDs to alert others remained). The camera’s angle of view decreased slightly, to 105°, to reduce fisheye distortion, and more traditional widescreen, vertical, square, and other options were added to the image-capture format portfolio. For the first time, Snap publicly disclosed camera specs: 1216 x 1216 pixels (cropped to 720p for widescreen viewing) at 60 fps for video, and 1642 x 1642 pixels for stills. And a second mic was added, creating an ambient noise-reducing “array” configuration (while still only capturing mono audio). Next came the third-generation Spectacles 3, the subject of the teardown I mentioned at the outset:
- This time around they cost $380
- For that price adder, you get two cameras, one at each temple
- You get four microphones, with each pair still in an array setup, for “stereo” audio capture
- Spectacles 3 embeds a GPS receiver, so you can “geotag” your captured images
- The frames are (mostly) metal, albeit no longer water resistant
- And the carrying case is “premium”
Why two cameras? For video, the paired cameras make it possible to estimate the distance from the wearer to the various objects in the image. This enables the insertion of depth-cognizant special effects (which Snap calls Lenses), such as those in this example from Snap’s YouTube channel:
If you have a VR headset, this YouTube VR-encoded video is even more dramatic:
A similar “3D” outcome is achievable with stills; download the footage to your Android or iOS smartphone, slip the phone inside the included Google Cardboard-based viewer (from which Google itself seems to be backing away, but I digress…), and you end up with a View-Master-reminiscent rendition of the scene you previously captured.
Note that this image processing takes place in the wirelessly tethered mobile device, not in the glasses themselves, which in many respects is a smart approach. Stereo (i.e., “binocular”) depth perception via parallax disparity is comparatively cost-effective from a hardware-implementation standpoint, requiring only standard image sensors versus specialized time-of-flight, structured light, or other depth-discerning approaches, but its algorithmic computational demands are comparatively high.
Shifting this computational load to the smartphone decreases the glasses’ size, cost, and weight, and maximizes between-charge operating life for a given battery capacity. But it also means that the 3D effects can’t be viewed live through the glasses themselves in an “augmented reality” fashion (which would also require an integrated display, of course). That said, just such an enhancement seems to be planned for Snap’s upcoming fourth-generation Spectacles, which are already being promoted on the company’s website. One thing you can certainly say about Snap is that they’re not afraid of Osborning themselves…
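To make the parallax-disparity math concrete, here’s a minimal sketch of the core triangulation step. This is my own illustration of the general technique, not Snap’s actual pipeline, and all of the numbers in it are assumptions for demonstration purposes rather than Spectacles 3 specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate the distance (in meters) to a point visible in both cameras.

    focal_px:     camera focal length, expressed in pixels
    baseline_m:   separation between the two camera centers, in meters
    disparity_px: horizontal pixel shift of the point between the two images

    Uses the standard pinhole-stereo relation Z = f * B / d.
    """
    if disparity_px <= 0:
        # Zero disparity implies a point at infinity (or a failed match)
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Purely illustrative numbers: an assumed ~600-pixel focal length, a
# ~14 cm temple-to-temple baseline, and a feature that shifts 12 pixels
# between the left and right images (roughly 7 meters away at these values).
print(depth_from_disparity(600, 0.14, 12))
```

Note the inverse relationship: nearby objects produce large disparities and are measured precisely, while distant ones converge toward zero disparity, which is why stereo depth degrades with range. The hard (and computationally expensive) part in practice isn’t this formula; it’s reliably matching the same point in both images to obtain the disparity in the first place.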
What about the competition? Well, more recently (in late July during an earnings briefing, to be precise) Facebook CEO Mark Zuckerberg pre-announced an upcoming set of smart glasses co-developed with Ray-Ban. A bit more than a month later, the Ray-Ban Stories were official:
Conceptually, they’re reminiscent of Snap’s Spectacles 3 (they also, for example, come in a prescription-lens option), albeit with some notable divergences:
- The base price, $299, is a bit cheaper than the $380 Spectacles 3, although the carrying case is arguably less “premium”
- They come in “3 timeless shapes and 20 frame and lens color combinations”
- They embed not only microphones (albeit “only” three in this case) but also speakers (akin to Bose’s Frames), enabling you to listen to music with them on, for example, as well as make and take phone calls via a Bluetooth-tethered smartphone intermediary
- Instead of a physical button at the top of each arm near the hinge, they’re controlled via a touchpad on the side of the arm or (if you’re an exhibitionist) by your voice
- There’s only one LED, near the right camera, to alert those around you that image capture has been activated by the wearer. It’s smaller and can also more easily be covered or otherwise obscured. The privacy implications are perhaps obvious
- Most importantly, arguably, Stories only work with Facebook’s various social media services, versus Spectacles, which (of course) work only with Snap’s Snapchat. The significance of this “ecosystem lock” can’t be overstated…if the online community you’ve cultivated is on one service, there’s little motivation to purchase smart glasses that work with the other, no matter how otherwise comparatively compelling they may be.
The earlier-mentioned upcoming fourth-generation iteration of the Spectacles concept promises to nudge Snap closer to the full-blown implementation that I automatically think of when I hear the words “smart glasses”: something you wear on your head and through which you can view the real world, containing a camera that also captures and interprets real-world images, coupled with a display (standalone or leveraging the glasses’ lenses) that feeds you information to augment the real-world data you’re seeing…such as the name and other information on the person currently standing in front of you. Something like Google’s controversial Glass headset:
And considering other companies’ recent product announcements, I’m apparently not the only one with a beefier implementation of the concept in mind. Less than a week after Facebook and Ray-Ban’s Stories were officially unveiled, for example, Xiaomi “launched” its own Smart Glasses, complete with a built-in MicroLED optical waveguide display in one corner (and a camera in the other):
I put “launched” in quotes because, quoting Ars Technica’s coverage, “there’s no release date or price point yet.” In fairness, however, Ars Technica also notes that “The company calls this a ‘concept’ device, but previous Xiaomi ‘concept’ devices have turned into real, for-sale products…it looks like it’s a working device…Xiaomi’s feet are firmly on the ground, and there isn’t much about the product that hasn’t been done before by Google Glass…”
And speaking of online services, if you’re a YouTube (or other streaming video) fan, you might be interested in Nreal’s latest stab at smart glasses success, the Air:
Two years after unveiling its premier Light glasses, and a year after fending off a lawsuit brought by AR headset competitor Magic Leap (which delayed first-generation product shipments), the upcoming Air is lighter and less expensive, with improved display technology, albeit also reduced functionality (no integrated motion tracking, for example).
In general, what do I think of the “smart glasses” concept? In one sense, I empathize with the tech industry’s increasing desperation to come up with the next “killer product,” as users’ existing smartphones increasingly become “good enough” and the upgrade treadmill slows. Xiaomi doesn’t even try to pretend otherwise, as the opening quote from the earlier-shown video notes:
Someday, in the future, smartphones may become a thing of the past…imagine every smartphone function integrated into what you wear…
That said, I remain skeptical. Camera-, microphone-, and speaker-inclusive glasses are modest-at-best feature-set upgrades to standalone frames, ensuring them a niche-at-best market. The more capabilities beyond that base set you include (an integrated display, for example), the heavier, bulkier, and more expensive the smart glasses will inevitably be, no matter how much of the overall processing burden you offload to a tethered smartphone (and isn’t the long-term goal to eliminate that tether, anyway?).
Even more fundamental than the aforementioned critiques, however, is the fact that glasses are first and foremost a fashion statement…and fashion is fickle. Dropping a Jackson, a Grant, or even a Franklin on a pair of conventional glasses that you may sooner-vs-later grow tired of is one thing; spending many Franklins on something you will no longer be motivated to put on your head next year at this same time is quite another. Then again, I never got why folks drop one-too-many Franklins on tennis shoes, either; and come to think of it, I did once acquire a pair of camera-equipped ski goggles…
On that note, I’ll wrap up with a question to all of you. Are you in agreement with me here, or am I just not “cool” enough? Thoughts as-always welcome in the comments; just don’t be too harsh with any “Dipert is a fuddy-duddy”-themed feedback, please…
Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.