There’s one thing people want to know when they see my first-gen Ray-Ban smart glasses, and it’s got nothing to do with AI, or cameras, or the surprisingly great open-ear audio they put out. They want to know what’s probably front-of-mind right now as you’re reading this: Do they have a screen in them? The answer? Sadly, no… until now.
At Meta Connect 2025, Meta finally unveiled its Ray-Ban Display smart glasses which, as you may have gathered from the name, have a screen in them. It doesn’t sound like much on the surface—we have screens everywhere, all the time. Too many of them, in fact. But after using them ahead of the unveiling, I regret to inform you that you will most likely want another screen in your life, whether you know it or not. First, though, you probably want to know exactly what’s going on in this screen I speak of.
The answer? Apps, of course. The display, which is actually full-color and not monochrome like previous reporting suggested, acts as a heads-up display (HUD) for things like notifications, navigation, and even pictures and videos. For the full specs of that display, you can read the news companion to my hands-on here. For now, though, I want to focus on what that screen feels like. The answer? A little jarring at first.
While the Ray-Ban Display, which weigh 69g (about 10 grams more than the first-gen glasses without a screen), do their best not to shove a screen in front of your face, it’s still genuinely there, hovering like a real-life Clippy, waiting to distract you with a notification at a moment’s notice. And no matter how you feel about smart glasses with screens, that’s a good thing, since the display is the whole reason you might spend $800 to own a pair. Once your eyes adjust to the screen (it took me a minute or so), you can get cracking on doing stuff. That’s where the Meta Neural Band comes in.
The Neural Band is Meta’s sEMG wristband, a piece of tech it’s been showing off for years that has now been shrunk down to the size of a Whoop fitness band. It reads the electrical signals in your hand to register pinches, swipes, taps, and wrist turns as inputs for the glasses. I was worried at first that the wristband might feel clunky or too conspicuous on my body, but that’s not the case—this is about as lightweight as it gets. The smart glasses also felt light and comfortable on my face, despite being noticeably thicker than the first-gen Ray-Bans.
More important than being lightweight and subtle, the Neural Band is very responsive. Once it was snug on my wrist (it was a little loose at first, but better after I adjusted it), using it to navigate the UI was fairly intuitive. An index-finger-and-thumb pinch is the equivalent of “select,” a middle-finger-and-thumb pinch is “back,” and for scrolling, you make a fist and then use your thumb like it’s a mouse made of flesh and bone over the top of said fist. It’s a bit of Vision Pro and a bit of Quest 3, but with no hand tracking needed. I won’t lie to you, it feels like a bit of magic when it works fluidly.
Personally, I still ran into some variability with inputs—you may have to attempt an input once or twice before it registers—but I would say it works well most of the time (much better, at least, than you’d expect from a literal first-of-its-kind device). I suspect the experience will only get more fluid over time, and better still once you really train yourself to navigate the UI properly. Not to mention the future applications: Meta plans to add a handwriting feature, though it won’t be available at launch. I got a firsthand look… kind of. I wasn’t able to try handwriting myself, but I watched a Meta rep use it, and it seemed to work, though I have no way of knowing how well until I use it for myself.
But enough about controls; let’s get to what you’re actually doing with them. I got to briefly experience pretty much everything the Meta Ray-Ban Display have to offer, and that includes the gamut of phone-adjacent features. One of my favorites is taking pictures in a POV mode, which overlays a window on the glasses’ display showing you what you’re taking a picture of right in the lens—finally, no more guess-and-check when you’re snapping pics. Another “wow” moment here is the ability to pinch your fingers and twist your wrist (like you’re turning a dial) to zoom in. It’s a subtle thing, but you feel like a wizard when you can control a camera just by waving your hands around.
Another standout feature is navigation, which overlays a map on the glasses’ display to show you where you’re going. Obviously, I was limited in testing how that feature works since I couldn’t wander off with the glasses during my demo, but the map was quite sharp and bright enough to be used outdoors (I did test this stuff in sunlight, and the 5,000-nit brightness was sufficient). Meta is leaving it up to you whether you use navigation while you’re in a vehicle or on a bike, but it will warn you of the dangers of looking at a screen if it detects that you’re moving quickly. It’s hard to say how distracting a HUD would be while biking, and it’s something I plan to eventually test in full.
Another interesting feature you might actually use is video calling, which pulls up a video of the person you’re calling in the bottom-right corner. The interesting part is that the call is POV for the person on the other end, so they can see what you’re looking at. It’s not something I’d use in most situations, since the person you’re calling usually wants to see you, not just what you’re looking at, but I can confirm that it works.