At the mLearnCon 2012 conference in June in San Jose, and again at the DevLearn 2012 conference in Las Vegas, a piece of software called the Tin Can API took the attendees by storm. “Tin Can Alley” was one of the most popular areas at both conferences, and the developers of Tin Can have been featured in a number of presentations.
One of the first applications that uses Tin Can is the Tappestry app launched by Float Mobile Learning at DevLearn (mLearnCon was its public beta). In total, 11 companies had adopted the Tin Can approach for their software by mLearnCon, and by now that number has more than doubled. In this post, I will try to explain Tin Can in a non-technical way, and review its strengths and shortcomings. For more information on Tin Can, view our posts on the subject.
Tin Can is an extension to the SCORM standard for eLearning courses (it’s not a replacement for SCORM), a standard maintained and updated over the past 10 years by the Advanced Distributed Learning (ADL) initiative of the U.S. Department of Defense. One of the main purposes of SCORM is to make online learning content compatible with many different learning management systems (LMSs). The problem with SCORM that Tin Can addresses is that communication is mostly one-directional between learners and learning management systems. Tracking with SCORM is carried out from the perspective of the system doing the tracking, rarely from the learner’s point of view.
The name Tin Can reflects the desire of its developers, Rustici Software, to have communications in learning tracking systems be two-directional or multi-directional. The Tin Can API “solves a lot of problems that older specifications suffered from, but it also adds new capabilities, new business cases, and new ways of handling content,” according to the Rustici website.
It is important to note that both SCORM applications and Tin Can track “learning activities,” not learning itself. Learning takes place in a person’s brain (or within the networked storage facilities of “extended minds”), and does not automatically result from simply participating in an activity, whatever the intention of the activity’s designer. That is true for all learning management systems and eLearning courses – we can only assume or infer that learning has taken place based on a person’s participation in specified learning activities or the results of specialized activities called assessments. But, learning occurs in many different ways, most of which are not prescribed in a formal way by an institution or training department, and/or assessed by a learning management system.
We refer to this kind of learning as “informal.” Informal learning events can range from accidents that happen to long discussions over a glass of wine. Any non-institutional experience that results in a relatively permanent change in the behavior or understanding of a person about any aspect of human existence can be viewed as an informal learning event.
Most informal learning is not tracked and reported. It just becomes part of our repertoire of knowledge and skills. But, in our society, organizations are generally run by managers who like to see reports, preferably with numbers, that describe the results of the organization’s activities. This data, in theory, can then be used to make decisions about the direction and activity level of the organization. Because of the desire for managerial control, many organizations want to track evidence of informal learning in addition to the data already being collected about formal learning activities. This is one of the main goals of using the Tin Can API.
Because informal learning can be so varied, there is currently only one efficient way to collect and track such data – reporting of learning activities by the learners themselves, by third-party observers, or by software agents connected to sensors. Tin Can standardizes such reporting in several ways:
- Use of standard statements that follow the form actor, verb, object – “I did this.”
- Reporting of outcomes after an activity has been completed
- Inclusion of a content description only after an activity has been completed
- Ability to use learning content stored anywhere on the internet
- Design of a new learning record store (LRS), a much simpler idea than an LMS
- Allowing the LRS to store user-defined variables
- Tracking of new types of data, such as those generated by simulations or games
- Integration of real-world learning events with digital activities
- Letting a learner start an activity on one platform and later continue it on another
- Letting instructors observe and comment on a learning activity while it is taking place
- Tracking of collaborative groups and teams as well as individuals
- Tagging or rating of content for later retrieval
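To make the “I did this” idea concrete, here is a minimal sketch of what an actor–verb–object statement might look like in the JSON form Tin Can uses. The names, email address, and activity URLs below are hypothetical illustrations, not part of any real course; an LRS would receive statements shaped like this over HTTP.

```python
import json

# A hypothetical "I did this" statement: actor, verb, object.
statement = {
    "actor": {
        "name": "Jane Doe",                       # who did it
        "mbox": "mailto:jane.doe@example.com",    # identified by email
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},      # human-readable form
    },
    "object": {
        "id": "http://example.com/activities/safety-walkthrough",
        "definition": {"name": {"en-US": "Safety walkthrough"}},
    },
}

# Statements are plain JSON, so any app or sensor agent can emit them.
print(json.dumps(statement, indent=2))
```

Because the statement is ordinary JSON rather than a SCORM package, a mobile app, a simulation, or even a human observer with a simple form can generate and send one.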
At first glance, it appears that Tin Can does not take into account many of the unique affordances of mobile learning, such as the importance of location, orientation, time, and haptic feedback. But, Tin Can allows for levels of complexity in its statements that may cover this concern.
Its developers acknowledge that many aspects of learning experiences can happen outside a Tin Can-based system. What is needed are standardized and comprehensive ways to make statements about learning outcomes. The Tin Can website explains one approach to solving this problem:
Statements can get as complex as you’d like them to be, and that’s one way where the answer to a “more powerful” e-learning specification comes into play…An example of a more complex statement would be:
[Somebody] says that [I] [did] [this] in the context of
[ _____ ] with result [ _____ ] on [date].
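The fuller template above can also be sketched as JSON. This is an illustrative guess at how the extra slots map onto statement fields – the reporting authority, context, result, and date – and every name, URL, and score in it is hypothetical.

```python
import json

# A sketch of the more complex statement:
# "[Somebody] says that [I] [did] [this] in the context of
#  [ ... ] with result [ ... ] on [date]."
statement = {
    # "[Somebody] says that ..." -- who is reporting the statement
    "authority": {"name": "J. Instructor",
                  "mbox": "mailto:instructor@example.com"},
    # "[I] [did] [this]"
    "actor": {"name": "Jane Doe", "mbox": "mailto:jane.doe@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/forklift-simulation"},
    # "in the context of [ ... ]"
    "context": {"platform": "Tappestry", "language": "en-US"},
    # "with result [ ... ]"
    "result": {"success": True, "score": {"scaled": 0.92}},
    # "on [date]"
    "timestamp": "2012-11-01T09:30:00Z",
}

print(json.dumps(statement, indent=2))
```

Note how the result slot can carry structured data (a success flag, a scaled score) rather than just a completion status, which is what makes the specification useful for simulations and games.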
Of course, most LMSs do a lot more than this, launching courses, giving assessments, and plotting career paths for each employee. But, from the perspective of what training managers want – good, reliable data to use in their reports to senior management – Tin Can will provide more comprehensive reports, without the massive architecture and cost of most enterprise LMSs. It is easily used with a mobile device such as a phone or a tablet. As Tappestry shows, the API can be used as specified, but it can also be extended with features beyond the base specification.
There are other issues in the development of Tin Can to date, but to the credit of the developers, they are listed on their website as weaknesses to be resolved through more discussion with the learning and development community. There is a call for suggestions, and a recognition that more work needs to be done to get this initiative right. What a refreshing change from the hype of many vendors, who gloss over problems and pretend that their software can do anything. The folks at Rustici are to be congratulated on their progress in such a short time. I’m impressed, and look forward to new versions of Tin Can as they are announced.