Loosely I can see two visions for the future of how we interact with computers: cyborgs and rooms.
The first is where the industry is going today; I’m more interested in the second.
Cyborgs
Near-term, cyborgs mean wearables.
The original definition of cyborg by Clynes and Kline in 1960 was of a human adapting its body to fit a new environment (as previously discussed).
Apple AirPods are cyborg enhancements: transparency mode helps you hear better.
Meta AI glasses augment you with better memory and the knowledge of the internet – you mutter your questions and the answer is returned in audio, side-loaded into your working memory. Cognitively this feels just like thinking hard to remember something.
I can see a future being built out where I have a smart watch that gives me a sense of direction, a smart ring for biofeedback, smart earphones and glasses for perfect recall and anticipation… Andy Clark’s Natural Born Cyborgs (2003) lays out why this is perfectly impedance-matched to how our brains work already.
Long term? I’ve joked before about a transcranial magnetic stimulation helmet that would walk my legs to work and this is the cyborg direction of travel: nootropics, CRISPR gene therapy, body modification and slicing open your fingertips to insert magnets for an electric field sixth sense.
But you can see the cyborg paradigm in action with hardware startups today trying to make the AI-native form factor of the future: lapel pins, lanyards, rings, Neuralink and other brain-computer interfaces…