How To Bring Mobile Devices To Their Senses And Extend Our Performance

Smarter, Faster, More Powerful Intelligent Apps Will Help Us Improve


Every day, we help enterprises make intelligent digital transitions to new business models. That means using new information technologies in smart ways that serve enterprise objectives: methods of machine learning, uses of data and accumulated knowledge, approaches to secure networking, and intelligent interfaces built efficiently with human-centered design.

What we think of as human intelligence doesn’t take place in splendid isolation inside a brain, or inside a “computer brain,” either. What we think about depends heavily on the data that comes into our brains through our senses. In the same way, machine learning algorithms become more intelligent when fed new data from external senses.

Float is working in this area specifically because constructing enterprise software that meets human needs in smart ways is our newly expanded mission. We have started down this road by focusing on “computer vision,” the ability of computers to make sense of images. This is manifested in our work with Google’s Project Tango, which we have demonstrated at recent conferences.

But computer vision is only one of many senses that computers can have. All five of the human senses we recognize can already be, or soon will be, replicated by computers, including the computers that are part of the bundle of technologies in our mobile devices. (We may also have a sixth sense, “magnetoreception,” the ability to perceive magnetic fields.) Almost four years ago, IBM predicted that computers would have all five human senses within five years. Mostly, that prediction has already come true. Here are examples:

  1. Sight – computer vision includes taking in still or moving images, then processing the images for meaningful information. Uses include recognizing the objects in an image and then adding captions, providing feedback about an environment to people who are visually impaired, or tracking a specific person with a camera (see the first code sketch after this list).
  2. Hearing – computer hearing (also called computer audition, acoustic monitoring, or machine listening) means the computer taking in sounds and making sense of them. Examples include speech recognition programs, music recognition programs, and industrial monitoring (a speech-recognition sketch also follows the list).
  3. Touch – computer touch includes the use of vibrating touch screens, haptics for virtual reality goggles, and website navigation for people who are both deaf and blind.  
  4. Smell – computer smell (also known as electronic noses, digital scent technology, or olfactory technology) can involve body odor detection, monitoring of industrial sites, and detecting superbugs in hospitals.
  5. Taste – soon, computer taste technology will allow tasting at a distance in virtual reality or while watching cooking shows, and will help in creating new foods.
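
To make the “sight” entry concrete, here is a minimal sketch of recognizing the objects in an image with a pretrained network. This is a generic illustration, not Float’s Project Tango work: it assumes the torchvision and Pillow packages are installed, uses a stock ResNet-18 trained on ImageNet, and photo.jpg is a placeholder file name.

```python
# Minimal computer-vision sketch: classify the objects in one photo
# with a pretrained ResNet-18 (assumes torch, torchvision, and Pillow).
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)
model.eval()  # inference mode

image = Image.open("photo.jpg").convert("RGB")  # placeholder file name
batch = preprocess(image).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    logits = model(batch)
top5 = torch.topk(logits, 5).indices[0].tolist()
print("Top-5 ImageNet class indices:", top5)
```

Mapping those indices to human-readable labels, and from there to captions or audio feedback for visually impaired users, is a small additional step.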
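For “hearing,” a similarly minimal sketch of speech recognition, assuming the open-source SpeechRecognition package and a placeholder WAV file; a production app would stream microphone audio instead.

```python
# Minimal machine-listening sketch: transcribe a short audio clip
# (assumes: pip install SpeechRecognition). clip.wav is a placeholder.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("clip.wav") as source:
    audio = recognizer.record(source)  # read the entire file

try:
    # Sends the audio to Google's free web speech API.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```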

Our technologies often extend human abilities by finding new sources of data or by going beyond human performance limits. Mobile devices can already detect things beyond our usual five senses, such as location via GPS, acceleration and rotation, magnetic fields, barometric pressure, and ambient light.

Check out this free white paper on how to combine mobile affordances for more than 17 million unique possibilities for apps.
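
As a rough illustration of where a figure like that can come from (this reading is our assumption, not necessarily the white paper’s arithmetic): if a device exposes n distinct affordances, the number of non-empty combinations is 2^n - 1, which passes 17 million once n reaches 25. A quick check:

```python
# Non-empty combinations of n affordances: 2**n - 1.
# The affordance counts below are hypothetical, chosen for illustration.
for n in (10, 20, 24, 25):
    print(f"{n} affordances -> {2**n - 1:,} combinations")
# 25 affordances already give 33,554,431 combinations,
# comfortably "more than 17 million unique possibilities."
```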

What can we do with the data from this expanding array of computer senses?

Using new techniques of “cognitive computing,” we can store the massive amounts of incoming data, combine it with previously accumulated data and knowledge, process the mix with machine learning algorithms, make decisions based on the results and on conditions in the environment, and then output the results to a display or take other actions that have been programmed into the computer.
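
As an illustration of that sense, store, process, decide, act loop, here is a minimal self-contained sketch in Python. Everything in it (the simulated sensor reading, the stored history, the decision threshold) is an invented placeholder; a real cognitive-computing pipeline would substitute live sensor feeds and a trained model.

```python
# A toy sense -> store -> process -> decide -> act loop.
# All values and thresholds here are placeholders for illustration.
import random
from collections import deque
from statistics import mean

history = deque(maxlen=100)  # "accumulated data" kept on the device

def read_sensor() -> float:
    """Stand-in for a real sensor feed (e.g., ambient temperature)."""
    return random.gauss(22.0, 2.0)

def process(reading: float) -> float:
    """Combine the new reading with stored history (a trivial model)."""
    history.append(reading)
    return mean(history)

def decide_and_act(smoothed: float) -> None:
    """Pre-programmed decision rule standing in for a learned policy."""
    if smoothed > 24.0:
        print(f"{smoothed:.1f} C: too warm, alert the user")
    else:
        print(f"{smoothed:.1f} C: within normal range")

for _ in range(5):  # a few iterations of the loop
    decide_and_act(process(read_sensor()))
```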

Instead of mobile devices serving simply as screens, storage devices, or terminals, the next five years will see the rise of super-intelligent applications as mobile computers become faster, smarter, and more powerful. But without collecting and processing vast amounts of sensory data, this leap in functionality won’t happen. That’s why Float is working hard to bring mobile devices to their senses.

Gary Woodill is a senior analyst with Float, as well as CEO of i5 Research. Gary conducts research and market analyses, as well as assessments and forecasting for emerging technologies. Gary is the co-editor of “Mastering Mobile Learning,” author of “The Mobile Learning Edge,” and the co-author of “Training and Collaboration with Virtual Worlds.” He also presents at conferences and is the author of numerous articles and research reports on emerging learning technologies. Gary holds a doctor of education degree from the University of Toronto.
