Augmented reality has been with us for a long time. Glasses and contact lenses turn the eye into a two-element lens, correcting flaws in the eye's own lens. We use sunglasses to change the intensity and/or polarization of incoming light, and some of these respond to changes in the scene around us. Some of us wear hearing aids. A few use assistive animals like seeing-eye and epilepsy dogs.
Some of these augmentations, like glasses, have fixed characteristics. Others, like dogs, can be highly adaptable. Apple offers a good example of computationally adaptive AR with AirPods and the cameras on iPhones and iPods. The technical press tends to describe AR as overlaying a computational display - often information from the cloud keyed to local state such as who you are, what you're interested in, your location, and so on.
As it happens, reality is much richer than what we experience with our senses. Visible light is only a tiny segment of the electromagnetic spectrum, and soundscapes extend far beyond what we can hear. There can be any number of useful and recreational explorations into the larger reality - science does this all the time. Wearable AR can open wider windows into the incredible richness of reality itself.
It's been interesting watching Apple's shift in how they position their watch. At first there seemed to be an assumption that it was a fashion accessory - a normal watch with some extra complications. After a few generations it evolved into a health and safety wearable. Initially Apple's visual AR device will probably offer the simple visual overlays you might see on a smartphone, but with time I suspect it will develop a strong health component that becomes a major attraction. People with serious glaucoma could get a view of the areas their eyes are missing. As displays get better, perhaps a visual AR wearable could replace glasses entirely, matching your prescription and becoming a computational tool with the tricks we see in computational photography: "zoom in on the moving target and give me slow motion..." or "reduce glare from the water..." It's easy to sit down and come up with a dozen ideas that aren't games. The list gets even richer when you start linking in other sensors.
We'll see wearable AR used in many modes, but I suggest the augmentation of local reality will be very important in ways we don't yet appreciate. There are a lot of hurdles that make me think early devices will be crippled, but after a decade this could be a very rich area indeed.
Modal changes are an obvious enhancement: what I want at a party is quite different from what I want when I'm working on my car.
How modes are triggered is key: do I signal via gestures or voice? How context-sensitive do I want the device itself to be?
Posted by: Greg Busby | 01/07/2022 at 02:30 PM