Augmented reality (AR) — the term does not exactly roll off the tongue. But the concepts behind the technology are beginning to change how we think about ourselves, the objects around us and the people in the world that surrounds us.
I am no expert on AR, but over the past few months I have seen enough examples of the way mobile devices change our reality to start wondering if what I am looking at is really what I think it is. With Google Glass, people will see a data layer that is not visible to the naked eye. Through an iOS or Android device, a person can now use apps that provide a different context for playing games, monitoring environments or tracking one's brain activity.
I asked people developing technology for the AR world what they see emerging. Here's what they said:
Vikas Reddy, co-founder of Occipital, wrote in an email interview that AR has not quite lived up to its potential because the technology has lacked the ability to track and map the real world. But as computer vision algorithms and hardware improve, he argues, the camera will become the most important sensor and input mechanism, not just for AR but for all computing:
Think about how much visual information each person processes on a daily basis while going about their lives. Almost none of this information is accessible for computation … yet.
Today, your smartphone's computational reach into its surroundings ends at its touchscreen surface. To your device, the real world isn't a canvas of interactivity. Instead, it's little more than a grid of pixels that might as well be random. Better computer vision can make those surroundings interactive and fun, thereby extending the computational reach of your device into the visual space around you.
At the Blur Conference, Sphero CEO Paul Berberian gave me a demo of a new game called "Sharky the Beaver," which TechCrunch's Romain Dillet wrote about earlier this month. Sharky is essentially a robotic ball that serves as a rolling marker. The user controls the ball through a Bluetooth-enabled device. As the ball rolls across the floor, the user sees Sharky bounce around eating cupcakes. By combining two streams of data, the experience moves between the real world and the virtual one fairly seamlessly.
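The "two streams of data" idea can be sketched in a few lines: the ball reports where it thinks it is over Bluetooth, the phone's camera sees where the ball actually appears on screen, and the app fuses the two to place the virtual avatar. This is a minimal illustrative sketch, not the actual Sphero SDK; `Position` and `blend` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

def blend(camera_pos: Position, telemetry_pos: Position,
          camera_weight: float = 0.8) -> Position:
    """Fuse the camera's view of the ball with its reported odometry.

    The camera is usually more reliable for on-screen placement, so it
    gets the higher weight; telemetry fills in when vision briefly
    loses the marker.
    """
    w = camera_weight
    return Position(
        x=w * camera_pos.x + (1 - w) * telemetry_pos.x,
        y=w * camera_pos.y + (1 - w) * telemetry_pos.y,
    )

# The virtual avatar (Sharky) would be drawn at the fused position each frame.
avatar_pos = blend(Position(10.0, 20.0), Position(12.0, 18.0))
```

A real implementation would replace the fixed weight with something like a Kalman filter, but the principle is the same: neither stream alone is enough for a seamless overlay.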
Sharky is available to developers as an SDK. A likely outcome is a library of avatars that people control via the little flashing robotic balls. For instance, a furniture company could create a set of avatars that let people see how tables and chairs would look by rolling the ball around the living room.
I also had the chance to talk at Blur with InteraXon co-founder Ariel Garten about the company's brain-sensing headband, which lets your brainwaves serve as a way to monitor concentration levels or as a means of controlling window shades or the lights in a house. Its first in-house app helps with brain fitness for "better attention skills, improving your memory, reducing anxiety, building a more positive attitude and staying motivated."
There are a number of excellent ways that brainwaves and AR fit together. People generally refer to two kinds of AR: glasses-style AR, where one wears a pair of glasses and the world is augmented or mediated on the screen; and iPhone-camera-style AR, where one holds up a phone and new layers are added to a scene.
Google Glass-style AR provides an opportunity for collecting brainwave data because a continuous-wear device can continuously record the brainwave signal. Adding brainwaves to this environment lets the system show real-time information about you, presented all the time; for example, it could continuously register and stream your level of stress throughout the workday. It also allows the computer system to do a better job of presenting contextually aware overlays: content and augmentations informed not just by place or visual input, but also by the context and state of the user. That state determines what kind of information is presented and how it is presented. Are you sleepy, and therefore want information about hotels in the area? Are you cognitively maxed out, so only pertinent info should be presented?
Brainwaves in an AR system also allow for real-time neurofeedback. You can know your brain state and have the opportunity to optimize it, choosing and being guided into the desired state as you go about your day.
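The context-aware filtering Garten describes could look something like the sketch below: the headset reads normalized attention and stress levels, and the display layer decides how much to show. The thresholds, category names and function are my own illustrative assumptions, not InteraXon's API.

```python
# Hypothetical sketch of "context-aware" overlay filtering.
# Thresholds and names are illustrative assumptions, not a real API.

def select_overlays(candidates, attention, stress):
    """Filter AR overlays by the wearer's current brain state.

    candidates: list of (label, priority) pairs,
                priority 1 (urgent) through 3 (ambient).
    attention, stress: normalized readings in [0, 1], as a
                brain-sensing headband might report them.
    """
    if stress > 0.7 or attention < 0.3:
        # Cognitively maxed out: show only urgent items.
        max_priority = 1
    elif stress > 0.4:
        max_priority = 2
    else:
        # Relaxed and attentive: ambient info is welcome too.
        max_priority = 3
    return [label for label, priority in candidates
            if priority <= max_priority]

overlays = [("collision warning", 1),
            ("meeting in 10 min", 2),
            ("nearby cafe", 3)]
```

Under this sketch, a stressed user sees only the collision warning, while a relaxed one sees all three layers.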
But what is the future of augmented reality? Cyborg anthropologist Amber Case, co-founder of Geoloqi, said augmented reality will become interesting when the barriers to creating custom objects, animations, apps and experiences are drastically lowered. As with Flash or the App Store, AR becomes interesting when these experiences become very personal or are shared between friends.
Games and tacky 3D animations will only go so far in AR. The real measure of AR is whether it solves real-world problems that may seem boring and everyday, with a realistic and minimal interface. When designing for AR, think of the minimum viable interface instead of the shiny one, and work from there. Most AR has had an exciting "wow" factor that lasts for about 15 seconds; it is a big jump from there to useful everyday applications. Think of the interface of Google. There's practically nothing there. It doesn't get in the way of interaction – it exposes the data in such a way that it can be interacted with.
Bonus! If you want to think about the future of AR, think about how it can be abused or pranked with. People do think about the negative possibilities, but the focus is always on adults. Think of kids growing up with this tech and the ability to code. Think of a future in which AR bullying is a prank by kids who are just learning to hack and code. A bunch of kids put an AR kick-me sign on another kid, or augment them in some other way, and share that layer of reality with a small group of friends. Someone takes a picture and gets a bunch of upvotes from friends. This is AR plus social permissions. The person being made fun of can't see the augmentation, but they understand what is happening and have to retaliate.
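The "AR plus social permissions" scenario is essentially an access-control problem: an augmentation attached to a person is visible to a share list but never to its target. A minimal sketch, with all class and method names being my own illustrative assumptions:

```python
# Hypothetical sketch of a shared AR layer with social permissions.
# Names (SharedLayer, visible_to, etc.) are illustrative, not a real API.

class SharedLayer:
    def __init__(self, target, shared_with):
        self.target = target              # person the augmentation is attached to
        self.shared_with = set(shared_with)  # viewers allowed to see the layer
        self.augmentations = []

    def add(self, augmentation):
        self.augmentations.append(augmentation)

    def visible_to(self, viewer):
        """Return the augmentations this viewer may see.

        The target never sees their own augmentation, and neither
        does anyone outside the share list.
        """
        if viewer == self.target or viewer not in self.shared_with:
            return []
        return list(self.augmentations)

layer = SharedLayer(target="sam", shared_with=["alex", "jo"])
layer.add("kick-me sign")
```

The unsettling part of Case's scenario falls out of the last method: "alex" sees the sign, while "sam" queries the same layer and gets nothing back.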