Mobile learning via AR is not a simple evolution of eLearning.
Just as the transition from desktop computing to mobile computing required a new approach to learning (see here and here), so too does the transition from traditional mobile devices to smart glasses and other wearable technology.
Here, we’ll list just a few of the problems we’ve come across in our research.
Sadly, not all of us are the billionaire genius Tony Stark, otherwise known as Iron Man. The image above highlights a problem we faced early on: providing too much information overwhelms users, and it distracts from the reality we're trying to augment, not replace.
Imagine being confronted with the sheer volume of information in the above picture. Proper decision-making would be next to impossible.
When designing augmented reality applications, it’s important to remember that you are quite literally placing things in front of what the user is trying to see.
One option is to ensure that the information is omnipresent but out of the way, such as requiring the user to look up or down to access a menu, for example.
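As a rough illustration of that "out of the way" pattern, a pitch-triggered menu can stay hidden until the wearer deliberately looks down. This is only a sketch under our own assumptions; the angles and class name are illustrative, not values from any device mentioned here.

```python
class LookDownMenu:
    """Keeps a menu hidden until the wearer tilts their head down past a
    threshold, so it never covers the forward view. Two thresholds
    (hysteresis) prevent flicker when the head hovers near the boundary.
    The default angles are assumptions for illustration only."""

    def __init__(self, open_at_deg=-25.0, close_at_deg=-15.0):
        self.open_at = open_at_deg    # look down past this: menu opens
        self.close_at = close_at_deg  # look back up past this: menu closes
        self.visible = False

    def update(self, pitch_deg):
        """Call once per frame with the current head pitch (negative =
        looking down). Returns whether the menu should be drawn."""
        if not self.visible and pitch_deg <= self.open_at:
            self.visible = True
        elif self.visible and pitch_deg >= self.close_at:
            self.visible = False
        return self.visible
```

The gap between the open and close angles is the important design choice: without it, a head held right at the threshold would make the menu blink in and out.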
Another is to use stereoscopic vision to provide two slightly different images to each of the user’s eyes, in much the way that virtual reality (VR) creates the illusion of depth.
If you’d like to see this effect for yourself, you should check out Google Cardboard, which provides simple VR experiences with very little upfront investment.
In our tests, we’ve used 3-D engines like libGDX and Unity to give objects real depth in the user’s field of vision.
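At its core, the stereoscopic trick above amounts to rendering the same object with a small horizontal shift in each eye's image; nearer objects get a larger shift. Here is a minimal pinhole-camera sketch of that disparity calculation. The constant and function names are our own assumptions for illustration, not part of any engine's API.

```python
IPD_MM = 63.0  # assumed average interpupillary distance, in millimeters

def pixel_disparity(depth_mm, focal_length_px, ipd_mm=IPD_MM):
    """Horizontal shift, in pixels, between the left- and right-eye
    renderings of an object at the given depth. A closer object yields a
    larger shift, which the brain reads as nearness."""
    return focal_length_px * ipd_mm / depth_mm

def eye_offsets(depth_mm, focal_length_px):
    """Split the disparity symmetrically between the two eyes:
    the left-eye image shifts right, the right-eye image shifts left."""
    d = pixel_disparity(depth_mm, focal_length_px)
    return (+d / 2.0, -d / 2.0)
```

In a real engine such as Unity or libGDX you would instead render the scene twice from two offset virtual cameras, but the depth cue the user perceives comes from exactly this kind of per-eye offset.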
Determining user intent is fairly trivial in the mobile space.
Generally, tapping on an object will select it, and confirmation dialogs can ensure the proper selection has been made.
However, the traditional “tap” or “click” action does not necessarily exist in the AR space. Google Glass has only swipes and voice commands, the Epson Moverio uses a touch screen, and the ODG R-6 has a small touch pad with limited gesture detection.
Some solutions, such as APX Labs’ Skylight, opt instead for having the user focus on a given point in space for some period of time to act as a dedicated selection gesture. This concept is also present in the VR space, with focus used to make selections in the Mercedes VR app for Google Cardboard.
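The focus-to-select idea boils down to a dwell timer: if the user's gaze stays on one target long enough, treat that as a click. The sketch below is our own illustration of the concept; the threshold value and class name are assumptions, not details of Skylight or the Mercedes app.

```python
import time

DWELL_SECONDS = 1.5  # assumed threshold; real products tune this value

class DwellSelector:
    """Treats 'gazing at the same target long enough' as a selection."""

    def __init__(self, threshold=DWELL_SECONDS):
        self.threshold = threshold
        self.target = None
        self.started = None

    def update(self, target, now=None):
        """Call once per frame with whatever the user is gazing at
        (or None). Returns the target once dwell completes, else None."""
        now = time.monotonic() if now is None else now
        if target != self.target:
            # Gaze moved to a new target (or away): restart the timer.
            self.target, self.started = target, now
            return None
        if target is not None and now - self.started >= self.threshold:
            # Fire once, then reset so the user must re-focus to select again.
            self.target, self.started = None, None
            return target
        return None
```

Resetting after a selection fires matters: without it, a user who keeps staring at a button would trigger it repeatedly.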
Occlusion, with regard to AR, is the process of understanding the position of an object in real space and ensuring that it “blocks” the appearance of objects in virtual space.
In theory, this can give the appearance that an object in the virtual world truly exists in real space. However, implementing object occlusion is difficult and can lead to fuzzy edges.
Depth-sensing cameras, such as Microsoft’s Kinect or Google’s Project Tango, can help provide the information on true object position that is needed for this to work, but the technology has yet to be implemented in a consumer-grade device.
A recent concept video from Magic Leap, the Google-backed company producing a mysterious AR device, suggests that they may have some form of depth-sensing built into their device, given that occlusion happens quite naturally in the video itself.
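Conceptually, occlusion with a depth camera is a per-pixel depth test: draw the virtual object only where it is closer to the viewer than the real surface the camera reports. The sketch below shows that test on tiny 2D lists standing in for image buffers; in practice this runs on the GPU, and the function name is our own.

```python
def occlude(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel z-test compositing: the virtual layer wins only where it
    is nearer than the real-world surface reported by the depth camera.
    Inputs are equal-sized 2D lists; a virt_rgb entry of None means
    'no virtual content at this pixel'."""
    out = []
    for y in range(len(real_rgb)):
        row = []
        for x in range(len(real_rgb[y])):
            v = virt_rgb[y][x]
            if v is not None and virt_depth[y][x] < real_depth[y][x]:
                row.append(v)  # virtual object is in front: draw it
            else:
                row.append(real_rgb[y][x])  # real world occludes it
        out.append(row)
    return out
```

The fuzzy edges mentioned above come from noise in the real depth values near object boundaries, where this test flips unpredictably between the two branches.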
At Float, we’ve had hands-on experience building the sort of AR mobile learning platform that has been on the horizon for years.
We’ll be sure to keep you posted as we conduct even more research in this field, and feel free to contact us for more information on how AR can work for you.