Every day, we help enterprises make intelligent digital transitions to new business models. Assisting these transitions means using new information technologies in smart ways that advance enterprise objectives: applying machine learning methods, drawing on data and accumulated knowledge, securing networks, and efficiently building intelligent interfaces through human-centered design.
What we think of as intelligence doesn’t take place in splendid isolation inside a human brain, or inside a “computer brain,” either. What we think depends heavily on the data that comes into our brains through our senses. In the same way, “machine learning” algorithms become more intelligent when fed new data from external senses.
Float is working in this area because constructing enterprise software that meets human needs in smart ways is our newly expanded mission. We have started down this road by focusing on “computer vision,” the ability of computers to make sense of images. This work is manifested in our projects with Google’s Project Tango, which we have demonstrated at recent conferences.
But computer vision is only one of many senses that computers can have. All five of the human senses can already be, or shortly will be, replicated by computers, including the computers embedded in the bundle of technologies in our mobile devices. (Humans may also have a sixth sense, “magnetoreception,” the ability to perceive magnetic fields.) Almost four years ago, IBM predicted that computers would have all five human senses within five years. Mostly, that prediction has already come true. Here are examples:
- Sight – computer vision means taking in still or moving images, then processing those images for meaningful information. Uses include recognizing the objects in an image and adding captions, providing feedback about an environment to people who are visually impaired, and tracking a specific person with a camera.
- Hearing – computer hearing (also called computer audition, acoustic monitoring, or machine listening) is the computer taking in sounds and making sense of them. Examples include speech recognition programs, music recognition programs, and industrial monitoring.
- Touch – computer touch includes the use of vibrating touch screens, haptics for virtual reality goggles, and website navigation for people who are both deaf and blind.
- Smell – computer smell (also known as electronic noses, digital scent technology, or olfactory technology) can involve body odor detection, monitoring of industrial sites, and detecting superbugs in hospitals.
- Taste – soon, computer taste technology will let us taste at a distance in virtual reality or while watching cooking shows, and will help in creating new foods.
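The “hearing” example above — machine listening, or making sense of incoming sound — can be illustrated with a toy sketch. The code below is not any particular product’s algorithm; it simply estimates the pitch of a synthesized tone by counting zero crossings, a classic first step in acoustic monitoring:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Estimate a pure tone's frequency by counting zero crossings (toy machine-listening sketch)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    duration = len(samples) / sample_rate
    # Each full cycle of a sine wave produces two zero crossings.
    return crossings / (2 * duration)

# Synthesize one second of a 440 Hz tone sampled at 8 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
print(dominant_frequency(tone, rate))  # approximately 440
```

A real system would work on noisy microphone input and use spectral methods rather than zero crossings, but the principle is the same: raw sensory samples in, meaningful information out.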
Our technologies often extend human abilities by tapping new sources of data or by going beyond human performance capabilities. Mobile computing technologies can detect many things beyond our usual five senses, including:
- Geolocation
- The positioning of a device
- Proximity detection
- Force/Pressure/Contact detection
- Acceleration/Deceleration
- Magnetic field detection
- Ambient light detection
- Outside air temperature
- Barometric pressure
- Chemical detection
- Radio frequency identification
- Emotion detection
- Metal detection
- Radioactivity detection
What can we do with the data from this expanding array of computer senses?
Using new techniques of “cognitive computing,” we can store the massive amounts of incoming data, combine them with accumulated data and knowledge already in storage, process this mix with machine learning algorithms, make decisions based on the results of this processing and on conditions in the environment, and then output the results to a display or take other actions that have already been programmed into the computer.
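The sense–combine–learn–decide–act loop described above can be sketched in a few lines. Everything here is invented for illustration (the sensor values, labels, and actions are hypothetical), but the shape of the pipeline is the one the paragraph describes:

```python
# Toy sketch of the cognitive-computing loop: new sensory data is matched
# against stored knowledge, and a pre-programmed action follows the decision.

STORED_KNOWLEDGE = [  # accumulated (temperature C, humidity %) readings -> label
    ((21.0, 40.0), "comfortable"),
    ((35.0, 20.0), "too hot"),
    ((5.0, 80.0), "too cold"),
]

def classify(reading):
    """1-nearest-neighbor 'learning': match the new reading to stored experience."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(STORED_KNOWLEDGE, key=lambda item: distance(item[0], reading))[1]

def decide(label):
    """Pre-programmed decisions keyed off the classification result."""
    actions = {"too hot": "turn on cooling", "too cold": "turn on heating"}
    return actions.get(label, "no action")

incoming = (33.0, 25.0)            # new data arriving from external senses
label = classify(incoming)         # combine with stored knowledge and classify
print(label, "->", decide(label))  # prints "too hot -> turn on cooling"
```

Production systems replace the nearest-neighbor lookup with trained models and the action table with richer decision logic, but the stages — ingest, combine, process, decide, act — stay the same.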
Instead of serving simply as screens, storage devices, or terminals, mobile devices over the next five years will host superintelligent applications as they become faster, smarter, and more powerful. But without collecting and processing lots of sensory data, this leap in functionality won’t happen. That’s why Float is working hard to bring mobile devices to their senses.
Latest posts by Gary Woodill