The world we live in is incredibly information-rich, but the reality we think we live in turns out to be a tiny proxy of the real thing. We miss most of the electromagnetic spectrum and have a limited ability to process sound. The very large and the very small are difficult for us, as are very short and very long events, and we sense the world almost entirely in the time domain. What we think of as reality seems rich and is often beautiful, but it is a limited proxy constructed in our brain from slow-moving signals from our sensory organs. Using our brains, though, we can build tools to sample otherwise inaccessible regions of Nature and create a deeper understanding of reality.
Today I received an email from an outstanding volleyball player. In a match Sarah has to react to constantly changing conditions with dramatic motions. Her field of vision can zip through thousands of degrees per second - enough to flummox even the best gyros - but her brain gives her a smoothly connected image. In fact the shards of information that make up human vision are quite choppy. You've undoubtedly heard of persistence of vision - the trick that makes movies and video seem smooth - but synthesizing a smooth image while the head is moving around pulls together information from what seems like the distant past ... as long as fifteen or twenty seconds. Even our sense of "now" is an illusion. It sits in the past: a half-second-wide window that integrates sensory information and ideas into something that seems immediate. Her brain - your brain - is synthesizing a virtual reality of sorts.
There is a large enterprise working on computationally generated virtual reality - VR. So far it is very task specific. You would think 3D vision is just vision, but it is the fusion of several senses including hearing and inertial information. If you don't get this completely right - and even the visual piece is difficult - the illusion can break, leaving some people disoriented and nauseous. Fortunately there are domains where a computational VR works well ... just as clever 2D moving images draw you into stories and feel very real.
Your brain can work with very sparse and abstract material. Read a good book and the print on a page turns into rich images and characters, fleshing out something very real and powerful. Garrison Keillor was fascinated by the power of the spoken word. He often said listening to a good story was more immersive than movies and sometimes real life. He liked to share his stories over the radio.
Radio can be thought of as an augmented reality. We've extended our hearing range as far as radio can be received and have given the sender and receiver the ability to further augment it for their own needs. Augmented reality (AR) is very common ... think of the glasses you may be wearing.
Computation is part of the process. Lenses turn out to be optical computers in their own right, so looking through a lens gives you the result of a real-time computation with the "program" a fundamental part of the lens.1 Given enough computing power you can perform sophisticated digital signal processing, working with images the way a lens does and even going beyond. We're starting to see such capabilities appear in sophisticated computational cameras like the iPhone 7S.
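The lens-as-computer idea (spelled out in footnote 1) can be sketched digitally: the Fourier transform a lens performs optically is the same transform an FFT computes. A minimal sketch, assuming a made-up toy aperture - a simple slit - rather than any real optical system:

```python
import numpy as np

# A lens forms the 2-D Fourier transform of an object placed one focal
# length in front of it.  Digitally, that same "computation" is an FFT.
# The object here is a toy transmissive slit (illustrative sizes only).
aperture = np.zeros((64, 64))
aperture[:, 30:34] = 1.0  # a vertical slit

# The focal-plane intensity is the squared magnitude of the centered
# 2-D Fourier transform -- the familiar sinc-like diffraction fringes.
focal_plane = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(focal_plane) ** 2

# Energy is conserved up to the FFT's scale factor (Parseval's theorem).
print(np.isclose(intensity.sum() / aperture.size, (aperture ** 2).sum()))
```

The lens does this "instantly" and in parallel; the digital version trades that for flexibility - you can apply any filter you like, not just the one baked into the glass.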
Science often relies on instrumentation that lets us sense aspects of Nature our senses haven't evolved to detect. The very large, the very small, short and long time periods, the full electromagnetic spectrum, a huge acoustic spectrum ... the list goes on and on. A new form of instrumentation comes along and a sleepy field explodes into something that changes the world. Photography and simple prisms combined with telescopes in the mid nineteenth century gave us astrophysics. In the 1930s, Bell Labs' work on understanding the basics of radio noise gave us radio telescopes, leading to confirmation of the Big Bang. And now we can "hear" two black holes spiral into each other, sending an acoustic jiggle through spacetime itself at the speed of light.
We're only beginning to appreciate what this means for us as individuals - health, play and curiosity will all be impacted. I use a small signal processor with an ultrasonic microphone to listen to the bats at a small pond near our place in the summer. I can save and study the acoustic signatures, easily identify the species of bat, and ponder questions like "do they echolocate in color?" My dermatologist uses a small microscope attachment on her iPhone to make images of questionable skin patches. A bit of processing tells her what's going on almost as well as a biopsy. Arguably the Apple Watch is doing some low-level AR, and that seems likely to expand. While virtual reality is being touted as a way to watch sporting events, amateur and professional athletes will be using a variety of augmented reality devices in their training.
Some of the processing is intense. My wife Sukie and another dear friend have vision issues - when you're blind or nearly blind in one eye, navigation becomes an issue. I've been thinking about ways to deal with this ... several inelegant solutions so far. But you could also scan the area with LIDAR (like radar, but using infrared light) and do some hammer-and-tongs signal processing using neural networks implemented on digital signal processors, just like the work on self-driving cars. Currently this kind of computation requires five to ten trillion floating point operations per second (TFLOPS) ... a couple of AMD or Nvidia graphics processing unit chips can do that, but they suck down a lot of power - typically hundreds of watts. But there's hope: my new MacBook Pro can do about two TFLOPS on about forty watts.
Here's where it gets interesting ... the processors in high-end smartphones may have enough power to do very useful personal-level signal processing for all of us. The big driver is photography, and AR is probably next. System-on-a-chip processors with one-teraflop GPUs that draw a watt or two are probably only a few years away. That'll be a game changer.
Forget the cloud as THE universal resource - the future will be partly cloudy.
While the cloud is very useful for pulling together information from scattered databases and making efficient use of computational resources, the speed of light effectively kills it for serious AR. (Google had a different vision of AR that may have its place, but it is quite narrow and not that different from a mobile phone.) Here's one way to think of it. The flavor of AR I'm describing constantly has to deal with a flow of new information and computed results. You can't let the communication time to the cloud get in your way. Getting to the cloud and back is often tens of thousands of times slower than the computation, and there can be other delays.
Imagine this ... you're a tax preparer with a crowd of people lined up. It takes you about ten minutes to do the calculation for a return. You hand it to the waiting person and start on the next. You could send it away to the tax-return cloud, where a powerful machine with all of the necessary databases could finish it in a second, but the wait to get to the cloud and back is 10,000 times longer than your ten minutes. That wait would be about two months. It's much better to keep just the information you need in your local shop and work away.
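The tax-preparer arithmetic is easy to check, using the numbers from the analogy (ten minutes of local work, a round trip 10,000 times longer than that):

```python
# Back-of-envelope check of the tax-preparer analogy.
local_minutes = 10                              # one return, done locally
round_trip_minutes = 10_000 * local_minutes     # the wait for the cloud
days = round_trip_minutes / (60 * 24)           # minutes -> days
print(f"{days:.0f} days")  # prints "69 days" -- about two months
```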
In fact a DSP is even better. Rather than one processor that doesn't have to go to the cloud, it has many thousands. They're rather dinky processors - slow and highly specialized - but together they get the job done with comparatively blinding speed. It turns out our brain works something like this too.
So while there will be some augmented reality that uses the cloud, the really powerful stuff won't2 ... it will be local. I suspect this is Apple's direction. There are some rather amazing things you can do with sound synthesis, and I'm guessing there's a reason for a serious processor in Apple's new wireless earbuds.
Rather than the normal recipe I'll mention a couple of ideas I've been thinking about (this is a very rich area - dozens come to mind with a bit of thought). Several waste products from burning hydrocarbons have serious negative health repercussions. What if we had tiny sensors that talked to our smartphones? Sensors that could measure CO, NOx, PM2.5 and other pollutants and send the information to the web to create dense, near-realtime maps of pollution. Housing values would likely be impacted ... we're seeing that in China now. Lawmakers might even respond to millions and millions of parents ... perhaps we could even get meaningful protections (I like the term environmental protections better than regulations). The sensors are the tricky part ... they need to be cheap, small, rugged, accurate, repeatable and self-calibrating. Nothing good enough exists, so this is a nice challenge for instrumentation groups at some of the better universities.
The other one is simpler and harder ... I have synesthesia: two of my senses get somewhat mixed together in my brain.3 I have no idea what it must be like to have a crisp separation between vision and sound, but people tell me they want to hear and see what I hear and see. So why not? Some approximations could be done with augmented or virtual reality. I'd also be curious whether I could un-mix the mixing I have and experience cleanly separated senses - if only for an hour. That is a more difficult task, as you'd have to know the mixing function to some precision.
There isn't much in tech that gets me excited ... AR does. What excites me is the myriad of things that can be built on a few core underlying technologies and the chance for serious cross-disciplinary work.
1 Put a transmissive object one focal length in front of a lens and its Fourier transform is formed one focal length behind the lens. Lenses do rather impressive signal processing.
2 There are several other reasons for not going to the cloud, but that's a longer discussion. It may well be that you take ownership of this class of data, and it probably should be secure and private.
3 The more I learn about the condition, the more I'm convinced it is something evolution has mostly selected against. Again, a long and probably boring subject.