Using Cameras as Powerful Mobile Learning Tools

Take a Picture, It'll Last Longer


The old cliché goes something like this: “the best camera is the one you have on you.”

As trite as this sounds, I do think this is true.

I’ve gone from camera to camera, each smaller than the last. Of course, this increases the convenience of carrying them around. It’s just easier to bring a camera with me when it fits into a pocket.

Until recently, I carried a dedicated camera with me.

However, the latest generations of smartphones have dramatically improved their image capabilities. Capturing images with 5-8 megapixels is considered baseline functionality in 2013. Phones are often better cameras than the point-and-shoots we own because they offer HD video recording, flash, HDR, and sometimes, even 3D.

Because of these new capabilities, and the fact that I always have my phone with me, I rarely take my point-and-shoot along, never mind my larger SLR camera. The key for me is that I have it with me and don’t need to think about bringing it.

Images and video are powerful learning tools – viewing and creating them can be a great way to learn.

When you couple the image-gathering capabilities of the device with the network connectivity and other features on board, you have a very capable platform for creating and consuming visual content.

New affordances like heads-up displays (HUD) and augmented reality (AR) add dimensions to your image-based learning content that only mobile devices can deliver. This is a brand-new world of performance support and just-in-time information, as well as social and informal learning. Visual assessment via computer vision and object detection is another avenue now open to us in the learning world.

Exploring In More Detail

The device’s camera is only a camera in the loosest of terms. Sure, it can capture images as both stills and video, but it’s really a digital visual input whose output can be processed in a multitude of ways.

We see this already in everyday uses: scanning QR codes, autofocusing on faces in the frame to enhance your photos, adding filters to Instagram images, and on and on.
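To make that concrete, here is a minimal sketch of treating a captured frame as input to be processed rather than as just a photo, using OpenCV’s built-in QR detector. The file name is a placeholder; any captured frame would do.

```python
# Decode a QR code from a captured frame using OpenCV (4.x).
import cv2

frame = cv2.imread("captured_frame.jpg")      # or a frame pulled from cv2.VideoCapture(0)
detector = cv2.QRCodeDetector()
payload, points, _ = detector.detectAndDecode(frame)

if payload:
    print("QR code says:", payload)           # e.g. a URL pointing to a learning resource
else:
    print("No QR code found in this frame.")
```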

Some other examples out there that are exciting from a learning point of view are apps like LeafSnap and Word Lens.

LeafSnap, a project from Columbia University, the University of Maryland, and the Smithsonian Institution, is a mobile field guide that helps you identify trees of the northeastern United States based on the shape of their leaves. This is accomplished via a technique known as computer vision, and more specifically, a trained object-recognition process.
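To give a feel for how shape-based recognition works, here is a toy illustration — not LeafSnap’s actual algorithm, and the file names and species are made up — that reduces a leaf silhouette to a small shape descriptor and picks the nearest known species.

```python
# Toy shape-based leaf matching with OpenCV and NumPy.
import cv2
import numpy as np

def leaf_signature(image_path):
    """Reduce a photographed leaf to a compact shape descriptor (Hu moments)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Separate the leaf from a plain background with a simple Otsu threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Hu moments are scale- and rotation-invariant summaries of the silhouette.
    moments = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale them so the seven values are comparable in magnitude.
    return -np.sign(moments) * np.log10(np.abs(moments) + 1e-12)

def closest_species(query_path, reference_signatures):
    """Nearest-neighbor match against a dictionary of known species."""
    q = leaf_signature(query_path)
    return min(reference_signatures.items(),
               key=lambda item: np.linalg.norm(q - item[1]))[0]

# reference_signatures = {"sugar maple": leaf_signature("maple.jpg"), ...}
# print(closest_species("unknown_leaf.jpg", reference_signatures))
```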

Word Lens, launched in 2010, allows you to point your camera at signage and then have the signage translated into your desired language. Don’t believe me? Check this video out:
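Under the hood, an app like this combines optical character recognition (OCR) with translation and on-image re-rendering. A rough sketch of just the OCR step, assuming the pytesseract wrapper and the Tesseract engine are installed (the file name and language code are placeholders):

```python
# Pull text out of a photo of signage with OCR.
from PIL import Image
import pytesseract

sign = Image.open("street_sign.jpg")
text = pytesseract.image_to_string(sign, lang="spa")   # OCR the Spanish signage
print("Recognized text:", text)
# A real app would now translate `text` and draw the result back over the live image.
```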

Hopefully, the wheels are spinning now. Far more than a simple camera, the sensor’s visual output can be coupled with on-screen elements to show wayfinding clues or overlays for geolocation, temperature, and any number of other data points, and it can even be processed to alter the image so that what you’re looking at is easier to understand.

For some examples outside of the mobile device realm, consider these heads-up display and image-processing examples:

Video: a HUD in a car

Displays like these have been augmenting the view and providing real-time information to pilots and other military users for years:

Military HUD in a plane

With the smartphone, you can enhance images and add useful overlays to assist your learners in finding where they need to go, or even where to grab dessert, by combining the image with geolocation data. Take a look at this image of Urbanspoon’s then-groundbreaking feature from 2009, the Scope:

I hope they have cannoli
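The math behind an overlay like this is straightforward. Here is a small sketch, assuming a hypothetical GPS fix, point of interest, compass heading, and camera field of view:

```python
# Decide whether a nearby point of interest should appear on screen, and where.
import math

def bearing_degrees(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

phone = (40.7128, -74.0060)         # device GPS fix (hypothetical)
cannoli_shop = (40.7190, -73.9970)  # point of interest (hypothetical)
heading = 50.0                      # compass heading the camera is facing
fov = 60.0                          # assumed horizontal field of view of the camera

b = bearing_degrees(*phone, *cannoli_shop)
offset = (b - heading + 180) % 360 - 180    # signed angle from the center of the screen
if abs(offset) < fov / 2:
    print(f"Draw the overlay {offset:+.1f} degrees from screen center")
else:
    print("Point of interest is off-screen")
```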

With Google Glass on the horizon, I expect this sort of view to become even more common.

Google Glass imagery

More than a little creepy, right? (The score in the top right makes it really creepy.)

Stalking aside, augmented displays offer many capabilities that make them a powerful performance tool: repair instructions, facial recognition, real-time video chat, and even visual checklists could all become common working and learning tools.

Consider the possibilities of applications that first show you how to set up a workstation or retail shelf, and then verify that you have set it up correctly through pattern recognition using computer vision.
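As a hedged sketch of what that verification step might look like — the file names and match threshold here are placeholders, and a production system would need something far more robust — OpenCV’s template matching can check whether a reference image of the correctly stocked section appears in the photo a worker just took:

```python
# Verify a shelf section against a reference image with template matching.
import cv2

shelf_photo = cv2.imread("shelf_as_built.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("planogram_section.jpg", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(shelf_photo, reference, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(result)

# 0.8 is an arbitrary threshold for this illustration.
if best_score > 0.8:
    print("Section looks correct at", best_location)
else:
    print("Section does not match the reference; flag it for review.")
```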

I fully expect augmented vision and glasses to be a must-have job accessory for technicians and people who work with their hands by 2020.

I realize this may all seem like science fiction now.

You can get started today, though. Think about creating an internal image database where coworkers share best practices. Or maybe you could create a video-sharing portal where people narrate their work and share how they get things done. These ideas, and much more, are all relatively easy to set up with common content management systems and some modest application development effort.

Some of these use cases may require building an app and will not work in mobile Web experiences due to browser security restrictions. As always, determine whether your technology strategy and your audience’s goals align.

The really cool thing about all of this is that it clearly shows that mobile learning is a two-way street. Both consuming and creating images can be enlightening and add to performance outcomes for your organization.

Do you have interesting ways you are leveraging images and cameras in your mobile learning efforts today? We want to hear about them in the comments.
