Since picking up a virtual reality headset a couple of weeks ago, I’ve been asking myself: how should the future operating system for apps work?
Like: how do you write docs? How do you collaborate on a deck? How do you launch your messaging app? Games are easy because they get to be weird. But for apps you need standard behaviours.
So I’m trying to think through this from first principles and see what comes to mind…
ALSO keep in mind that I have become obsessed with the overview mode in Walkabout Mini Golf. It’s incredible to have the “Godzilla’s eye view” (as I called it in that post at the top): gazing over the course with all its trees and ponds, a mountain halfway up my chest. And then being able to literally kneel on the floor and stick my head into a cave, closely examining all the intricate objects and furnishings in the rooms inside.
For me, the key difference between a screen-based user interface and a VR-based UI comes down to this:
If there is a small icon on my laptop screen, no amount of me moving closer will magically add resolution. But if there is a small icon in VR, leaning in will resolve more detail.
Quick maths: let’s say an icon (like a user profile pic) is 1cm across and appears to be 20cm away, and I lean in to halve that distance to 10cm. Halving the distance doubles the icon’s apparent width and height, so it covers roughly 2 × 2 = 4x as many pixels.
You can pack a ton more information into 4x the pixels!
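Here’s a back-of-the-envelope sketch of that calculation. The pixels-per-degree figure is my assumption (a round number in the ballpark of current headsets, not something measured for this post):

```python
import math

def pixels_across(width_m: float, distance_m: float, ppd: float = 20.0) -> float:
    """Rough pixels spanned by an object, given a headset's pixels-per-degree."""
    # Angular size of the object, in degrees.
    angle_deg = math.degrees(2 * math.atan((width_m / 2) / distance_m))
    return angle_deg * ppd

far = pixels_across(0.01, 0.20)   # 1cm icon, apparently 20cm away
near = pixels_across(0.01, 0.10)  # same icon after leaning in to 10cm
print(f"{far:.1f} px -> {near:.1f} px across, ~{(near / far) ** 2:.1f}x the pixels")
# roughly: 57.3 px -> 114.5 px across, ~4.0x the pixels
```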
This is crazy fundamental. On our phones, we “see” with our fingers - panning, swiping - but although you can pinch-zoom on a photo, there’s nothing you can do that lets you peer closer at an interface element and get more data than is already there. You can in VR. And compared to tapping or pointing with an input device, moving your head a small amount is zero cost.
Like, imagine looking at the tiny wi-fi icon in the top bar on your home screen. Simply lean your head towards it a little (unconsciously) and you are able to read the name of the current wi-fi network; buttons for commands appear.
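A minimal sketch of how that “lean in for more” logic could work. Everything here is hypothetical for the sake of illustration (the distance thresholds, the detail levels, the names) and isn’t taken from any real headset SDK:

```python
from dataclasses import dataclass

@dataclass
class WifiIconState:
    """Hypothetical proximity-based levels of detail for a status-bar icon."""
    head_distance_m: float  # distance from the viewer's head to the icon

    def detail_level(self) -> str:
        # Illustrative thresholds: closer head = more detail resolves.
        if self.head_distance_m > 0.30:
            return "icon"      # just the wi-fi glyph
        elif self.head_distance_m > 0.15:
            return "label"     # add the current network name
        else:
            return "controls"  # add buttons: switch network, forget, etc.

# Leaning in shrinks the distance, so detail appears without any tap or click.
for d in (0.40, 0.20, 0.10):
    print(d, WifiIconState(head_distance_m=d).detail_level())
```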