What a Nice Gesture – Incorporating Multitouch And Gestures Into Your Mobile Learning Effectively

3 Factors Holding the mLearning Industry Up in Adopting Multitouch and Gestures

Mobile Development, Mobile Strategy, User Experience

We live in a touchy-feely, device-driven world.

Your phone, your tablet and likely your laptop all have the ability to interpret multitouch input and put it to use in a wide variety of ways. These inputs help us interact with on-screen elements, mimic real-world gestures and lower the wall between icon-driven graphical metaphors and the physical objects they represent.

We’ve likely all seen videos of new computer users, children and even animals interacting with smartphones and tablets naturally, easily and with little trouble.

Reflect on that for a moment, drawing on your experience as a designer of computer-based experiences.

Here is a video of some toddlers flipping through a magazine, expecting it to be an iPad, for example:


These devices are intuitive and natural to use for people of all ages and skill levels.

When the original eLearning experiences, such as CD-ROM and Web applications, arrived, these courses were often accompanied by an introduction that was more like Using a Computer 101 than anything related to the core learning objectives the course was created for. Tutorials like “here is how you use a mouse” (and more) permeated these interactive pieces. This was largely necessary because many users had either little computer experience or had just moved from green-screen terminals to a GUI-driven PC.

These tutorials were also needed because using pointer devices like a mouse and a cursor is virtually nothing like anything in the real world. There is no physical parallel, and the metaphor of files, folders, and clicking and dragging is a bit of an artificial contrivance at its root.

Why tap “Next” when you can swipe right to left? Why click or tap a magnifying glass icon when you can pinch and zoom? The rotate or flip buttons are pointless when you can grab an object and spin it with two fingers. The possibilities are incredible.
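Pinch-to-zoom is a good illustration of how direct these gestures are: the zoom factor is nothing more than the ratio of the distance between the two fingers now versus when the gesture began. A minimal sketch (the function names here are illustrative, not from any library):

```javascript
// Distance between two touch points, each given as {x, y}
function touchDistance(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Zoom factor: current finger spread divided by the spread when
// the pinch began. Scale > 1 zooms in, scale < 1 zooms out.
function pinchScale(startA, startB, currentA, currentB) {
  return touchDistance(currentA, currentB) / touchDistance(startA, startB);
}
```

Spreading two fingers from 100 pixels apart to 200 pixels apart, for example, yields a scale factor of 2 – a doubling of size that maps one-to-one onto the physical motion.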

So, with this new vocabulary of input and control at our fingertips, why are so few of us taking advantage of it in our mLearning experiences?

In my experience, there are three primary factors holding the industry back from adopting true multitouch and gestural inputs in our mLearning work:

  1. Lack of consideration in designing and creating a mobile-first experience.
  2. Lack of experience and vision creating or designing for multitouch and gestural metaphor.
  3. Lack of support for these input methods in the common tooling we use to produce typical learning products.

Let’s explore these issues and suggest ways to overcome them.

Creating A Mobile-First Experience

In Learning Everywhere, one of the four main content types I explore is “Content Converted from Other Sources.”

This approach, while valid and appropriate for many pieces of content in your library, is not a mobile-first experience. This is clear when interactions and elements are brought over from a mouse-and-keyboard-driven environment.

Artifacts like mouse hovers, prompts to “click here,” Next buttons and many other elements that are needed or helpful on a computer are out of place, or completely inappropriate and unusable, on a tablet or smartphone.

Consider the cliché exploratory interface that requires you to hover over items to get more information on them. This simply will not work on a mobile device: there is no cursor, so there is no hover state.
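One common fix is to replace hover reveals with tap-to-reveal, branching on whether the device reports touch support at all. A rough capability check, sketched below (real-world detection is messier than this, since hybrid laptops report both touch and a mouse):

```javascript
// Rough check for touch support; pass in the global window object.
// Treat the result as a starting point, not a guarantee.
function supportsTouch(win) {
  return 'ontouchstart' in win ||
    !!(win.navigator && win.navigator.maxTouchPoints > 0);
}

// e.g. choose the interaction style once, up front:
// var revealOn = supportsTouch(window) ? 'tap' : 'hover';
```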

How do we get away from these conventions? The answer is to stop “converting” and start redesigning. Take into account the target device’s capabilities and re-examine any and all user interface or user experience factors that could or should change. On-screen prompts, user interface controls and deeper interactions in your applications and websites all need proper attention.

Designing For Multitouch and Gestural Metaphors

Design disciplines all have their own sets of patterns and conventions, visual languages and approaches to standardizations on how the design should be conveyed to engineers, developers or manufacturers that need to interpret the plans to create the final product.

Software designers use Garrett IA and UML diagrams. Architects use a standard set of views, elevations or projections, and specific types or elements for plumbing, doors and the like. Electrical engineers all use the same sets of figures to relay items like transistors, resistors and the other items needed for the schematics.

Likewise, a similar sort of vocabulary for gestural and multitouch input, and a set of conventions on when, where and why the various types of gestures should be used, has emerged. Covered at great length and in amazing detail in the 2008 book, Designing Gestural Interfaces, by Dan Saffer, this set of rules and the accompanying visual vocabulary that informs developers how to employ them is, by and large, something new to the training community.

A quick Google image search for “multitouch gestures” results in a wide array of visual depictions of these gestures.


Sample Gestures

To begin incorporating these gestures intelligently in your work, it would be wise to read up on them via articles like this one, but you will also want to find some libraries of graphics to begin incorporating into your sketches and wireframes. Stencils for Visio, PowerPoint, Keynote and OmniGraffle are readily available, so get started now.

Just having access to the templates certainly doesn’t make you an expert, but it does give you a framework you can start to explore and an expanded toolkit to enable you to design mobile-first interfaces.

Supporting Gestural Input in Your Development Workflow

If you have started creating mobile-first experiences and documenting your design process with properly annotated gestural inputs, you may be wondering where to take things next.

If you are primarily a rapid eLearning tool user, you have no real options available to you at this time. At the time of this writing, no major software package out there – regardless of whether it claims to support mobile – can directly address gestural or multitouch input out of the box.

This certainly throws a wrench into the works, but it doesn’t need to completely stop you from trying out long presses, swipes and more in your next mobile learning project.

For the most part, HTML authoring tools don’t support this sort of input directly, either. There are documentation pages in the respective device and OS developer areas to show you how to support these gestures, but you are going to be mostly on your own when it comes time to write the code.
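The good news is that the core logic for a simple gesture is only a few lines: record where the finger lands on `touchstart`, then compare against where it lifts on `touchend`. A sketch follows; the function name, threshold value and wiring are all illustrative, not from any framework:

```javascript
// Classify a completed touch by how far the finger travelled.
// dx/dy are end-minus-start coordinates in pixels.
function classifySwipe(dx, dy, threshold) {
  threshold = threshold || 30; // minimum travel to count as a swipe
  if (Math.abs(dx) < threshold && Math.abs(dy) < threshold) {
    return 'tap';
  }
  return Math.abs(dx) > Math.abs(dy)
    ? (dx > 0 ? 'swipe-right' : 'swipe-left')
    : (dy > 0 ? 'swipe-down' : 'swipe-up');
}

// Wiring it to the raw touch events (browser only)
if (typeof document !== 'undefined') {
  var start = null;
  document.addEventListener('touchstart', function (e) {
    start = { x: e.touches[0].clientX, y: e.touches[0].clientY };
  });
  document.addEventListener('touchend', function (e) {
    var t = e.changedTouches[0];
    console.log(classifySwipe(t.clientX - start.x, t.clientY - start.y));
  });
}
```

Handling multi-finger gestures, velocity and cancellation adds real complexity on top of this, which is exactly the gap the libraries below fill.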

Some tutorials can help you get started with multitouch and HTML5, and some mobile-friendly JavaScript libraries already have support for these gestures.

Recently, some enterprising and bright developers have risen to the challenge and created third-party libraries that make it much easier to get started building Web experiences with multitouch and gestural input capabilities.

The library Hammer.js adds robust support for the most common gestures and touch events. On top of that, it has the rather punny tagline, “You can touch this!” to boot if that’s your sort of thing.
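A minimal Hammer.js setup might look like the following. Only the `Hammer()` constructor and its `.on()` method are actual library API here; the `swipeToAction` helper, the element id and the action names are my own illustrative choices:

```javascript
// Map Hammer's swipe event names to course navigation actions.
// This helper and its action names are illustrative, not library API.
function swipeToAction(eventType) {
  var actions = { swipeleft: 'next-page', swiperight: 'previous-page' };
  return actions[eventType] || 'ignore';
}

// Browser-only wiring; assumes Hammer.js was loaded via a <script> tag
// and the page has an element with id="lesson".
if (typeof window !== 'undefined' && typeof Hammer !== 'undefined') {
  var mc = new Hammer(document.getElementById('lesson'));
  mc.on('swipeleft swiperight', function (ev) {
    console.log(swipeToAction(ev.type));
  });
}
```

Mapping raw gestures to named actions in one place like this also keeps the design intent – swipe left means “next” – documented in the code itself.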

In Closing

The world of multitouch is an expansive and interesting one to delve into. The intuitiveness of the use of these gestures to interact with content is real and demonstrable. Your users will value the time you take to craft mobile-first interfaces.

With some minor adjustments to your design workflow and attention to the changes needed in your development toolset, you can accommodate and embrace these new ways to empower your users.

The main thing holding you back at this time is you and your desire to grab hold of something new.

Chad Udell is the Managing Partner, strategy and new product development, at Float. There he leads his design and development teams to successful outcomes and award-winning work via a strong background in both disciplines and a singular focus on quality. He has worked with industry-leading Fortune 500 companies and government agencies to design and develop experiences for 20 years. Chad is recognized as an expert in mobile design and development, and he speaks regularly at national and international events and conferences on related topics. Chad is the author of Learning Everywhere: How Mobile Content Strategies Are Transforming Training and co-editor and chapter author, with Gary Woodill, of Mastering Mobile Learning: Tips and Techniques for Success. His newest book, Shock of the New, co-authored with Gary Woodill, was released in April of 2019.

On May 10, 2013

4 Responses to What a Nice Gesture – Incorporating Multitouch And Gestures Into Your Mobile Learning Effectively

  1. Mike Kostrey says:

    This is an excellent article; I totally agree with needing to adapt to the changes of mobile, and gestures and multitouch are a huge part of that. One of the things that I wonder about is that with the move toward responsive design we are designing one website for all devices. We will need to compromise between the tools the mobile environment gives us and the tools the desktop environment gives us.

    Also, on the topic of wire framing for this sort of thing, we use OmniGraffle to wireframe our websites and we do something similar to indicate where we will need to build in web applications, or other functionality. We use a script (it is available at https://www.jtechcommunications.com/blog/blog-detail-12) to count up the different elements to help us estimate the time and budget for each project.

