Understanding the Benefits of Using Computer Vision and Machine Learning for Performance Support

What if you could see everything? Let’s go one step further. Imagine that not only can you see everything, but you also know everything about whatever you are looking at. This amazing ability would enable you to verify an object’s significance or validity and make a nearly instantaneous judgment about what to do with that information. You would know whether to order more of something, or whether to move an item to the right spot on the store floor to help sales. You could easily fix something that is out of alignment, broken, or otherwise out of order.

These seemingly far-fetched hypotheticals are increasingly becoming real-world realities. People are already augmented by all-seeing devices (e.g., smartphones, tablets, and the like) connected to cloud data sources. These devices see the world around them via computer vision (CV), and through their sensors and other affordances they can understand the context in which they are situated. With neural networks and the machine learning processes that power them, these apps can also help make their human users smarter and more efficient.

Computer vision and machine learning can solve big problems

Many jobs in supply chain, retail, food service, and any number of other industries where vast quantities of goods are identified, cataloged, moved, and otherwise manipulated involve constant task switching and repetitive, sometimes tedious work. That combination lends itself to errors and process inefficiencies. The inefficiencies add up over time, and the errors they introduce compound the problems. Fatigue and cognitive overload are constant worries.

There is ample opportunity in situations like this to shift the paradigm and jumpstart radical improvements in performance. With new advances in neural network creation and machine learning implementation, mobile devices can be trained to handle a wide array of tasks in this space. Inventory counts, planogram compliance, and configuration verification and validation are all candidates for being transformed by these emerging technologies.
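To make this concrete, here is a minimal sketch of the kind of image classification that sits underneath these use cases, assuming an off-the-shelf MobileNetV2 model pretrained on ImageNet via TensorFlow. A production app would use a network trained on its own product catalog and would typically run the model on-device; the image path below is just a placeholder.

```python
# Minimal image-classification sketch using a pretrained, general-purpose model.
# A real deployment would swap in a custom network trained on the actual product
# catalog and run it on-device; this only illustrates the core capability.
import numpy as np
import tensorflow as tf

# MobileNetV2 pretrained on ImageNet (weights download on first run).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path: str, top: int = 3):
    """Return the top predicted labels and confidences for one photo."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])
    preds = model.predict(x)
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=top)[0]

# "shelf_photo.jpg" is a placeholder for an image captured by the device's camera.
for _, label, confidence in classify("shelf_photo.jpg"):
    print(f"{label}: {confidence:.1%}")
```

In projects like this, most of the effort goes into capturing and labeling the domain-specific images the custom network learns from, not into the inference code itself.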

How we’re leveraging this amazing tech right now

Here at Float, we’re doing just that. We recently launched an app for a client that has revolutionized a visual inventory process their business depends on to run smoothly. The previous, non-augmented process required a high level of attention to detail and a considerable amount of time to perform. Not only was the task time consuming and tedious; it was difficult! The products being inventoried often look very similar, which frequently led to far too many errors in the overall process.

This is a task that people are simply not very good at. It’s repetitive, a tad boring, and, because of the task switching needed while on the jobsite, prone to errors.

Float captured and cataloged the inventory and used that data to train a custom machine learning neural network. The network lets the app behind the client’s supply chain and ordering process “see” the products and perform the inventory with near-perfect accuracy in a fraction of the time. In the first three months of the product being available, initial tests show the process has dropped from an average of about 25 minutes to about 8 minutes. With nearly 500 users in the field performing this activity several times a day, you can see how this adds up!

500 (users) * 10 (inventory operations per day) * 17 (minutes saved per operation) = 85,000 minutes saved daily ≈ 1,416 hours saved daily

1,416 (hours saved daily) * 260 (working days in a year) = 368,160 hours saved per year

368,160 hours = huge savings!
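If you’d like to plug in your own numbers, the same back-of-the-envelope math fits in a few lines of Python. The figures below match the ones above, with the daily hours rounded down to whole hours before annualizing.

```python
# Back-of-the-envelope time savings, using the figures from this article.
users = 500                      # field users performing the inventory
ops_per_day = 10                 # inventory operations per user per day
minutes_saved_per_op = 25 - 8    # old average minus new average = 17 minutes

minutes_saved_daily = users * ops_per_day * minutes_saved_per_op  # 85,000 minutes
hours_saved_daily = minutes_saved_daily // 60                     # 1,416 hours (rounded down)
hours_saved_yearly = hours_saved_daily * 260                      # 368,160 hours

print(f"{hours_saved_daily:,} hours saved per day")
print(f"{hours_saved_yearly:,} hours saved per year")
```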

Of course, this only speaks to the reduction in time on task. By bringing computer vision’s accuracy to the task, we have also virtually eliminated miscounts and misidentified products. In our in-house testing, the automated review of the network shows it is typically about 98% accurate at the time it is compiled in the cloud. Deployed in the wild, with humans in the loop, the network is still over 85% accurate, with only about 15 out of 100 inventory operations requiring any hand adjustments or revisions after a spot check. Those corrections are usually minor and take only a moment to complete. The result is a nearly 100% accurate inventory in about a third of the time the operation required before.

So what’s next?

We’re just getting started here. Image classification, integrated into established software, is a great way to see radical improvements in outcomes without requiring massive amounts of training or professional development. We’re essentially mitigating the need for training before it even occurs. Think about everything that happens in restaurants, from plating and portion control to kitchen standards and organization, and it’s clear there are opportunities to augment human workers in that space. Applied to retail, this type of technology can enhance merchandising and planogram usage, plan for better sell-through, and assist with cross-sell, upsell, and so many other workplace needs. In hospitality, room cleanliness assessments, stocking linen closets, groundskeeping, and many other facilities-related tasks could be enhanced with a bit of computer vision and AI.

What problems could you solve at your restaurant, store, hotel, or other business where your workers have tedious or error-prone tasks that continually suck up training time? Let us know in the comments here, or reach out to us today to talk.

Chad Udell is the Managing Partner, strategy and new product development, at Float. There he leads his design and development teams to successful outcomes and award-winning work via a strong background in both disciplines and a singular focus on quality. He has worked with industry-leading Fortune 500 companies and government agencies to design and develop experiences for 20 years. Chad is recognized as an expert in mobile design and development, and he speaks regularly at national and international events and conferences on related topics. Chad is author of Learning Everywhere: How Mobile Content Strategies Are Transforming Training and co-editor and chapter author, with Gary Woodill, of Mastering Mobile Learning: Tips and Techniques for Success. His newest book, Shock of the New, co-authored with Gary Woodill, was released in April 2019.

