SCORM Isn’t Dead, But Apparently It Is Misunderstood

A recent entry on “Rod’s Pulse Podcast” featured an audio interview with Dr. Bobbe Baggio. In the interview, Dr. Baggio asserts that “SCORM is dead”. She also says that “people who work with SCORM understand what I mean”. Dr. Baggio, I work with SCORM, I do understand what you mean, and I do agree with your premise.

GASP! WHAT? The SCORM guy agrees with somebody who says that SCORM is dead! Hold on, don’t panic, catch your breath. I did not say that I agree that SCORM is dead; I said that I agree with her premise, and as we’ll see, those are two entirely different things.

Dr. Baggio’s central premise is that “intelligent searching has eclipsed the need for metadata and tags”. Amen, hallelujah. I absolutely agree. There are still some very valid uses for metadata, but for most of the world, intelligent searching will be good enough.

BUT, it is a long stretch to get from there to “SCORM is dead”. To make such a leap represents a misunderstanding of what SCORM is and why people use SCORM in the first place. It is an understandable misunderstanding, though, considering the way SCORM was initially “sold” to the world.

In Dr. Baggio’s view, SCORM is all about reuse. She views SCORM as being all about storing content in repositories to enable people to search those repositories and reuse content. Her postulation is that metadata is the key technological enabler of this functionality. I disagree with this view in three significant ways.

My first disagreement is with the idea that SCORM is all about reuse. Yes, reuse is one very significant goal of SCORM, but the real reason most people adopt SCORM is interoperability. Most people use SCORM so that they can import content created in any system into their LMS.
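
To make “interoperability” concrete: a SCO built in any authoring tool talks to any conformant LMS through the same small JavaScript API. The sketch below (written in TypeScript) uses the SCORM 1.2 call names and cmi.core.* data model elements from the run-time specification; the findAPI and reportCompletion helpers are illustrative conveniences of mine, not part of the standard.

```typescript
// Minimal sketch of the SCORM 1.2 run-time contract, assuming an LMS that
// exposes the standard API object on a parent window. The LMS* functions and
// cmi.core.* elements are from the SCORM 1.2 spec; findAPI/reportCompletion
// are illustrative helpers.

interface Scorm12API {
  LMSInitialize(arg: ""): "true" | "false";
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): "true" | "false";
  LMSCommit(arg: ""): "true" | "false";
  LMSFinish(arg: ""): "true" | "false";
}

// Walk up the window hierarchy looking for the API object the LMS provides.
function findAPI(win: Window): Scorm12API | null {
  let current: Window = win;
  for (let hops = 0; hops < 7; hops++) {
    const candidate = (current as any).API as Scorm12API | undefined;
    if (candidate) return candidate;
    if (current.parent === current) break;
    current = current.parent;
  }
  return null;
}

// A SCO authored in any tool can report status to any conformant LMS this way.
function reportCompletion(score: number): void {
  const api = findAPI(window);
  if (!api) return; // no LMS found; content could fall back to standalone mode

  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.score.raw", String(score));
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit("");
  api.LMSFinish("");
}
```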

My second disagreement is with the narrow definition of reuse contained in Dr. Baggio’s arguments. Dr. Baggio’s view of reuse is that an organization’s content is all stored in a central repository. Because of the content’s metadata, that repository is easily searched, allowing people to find reusable learning objects to use in other courseware. This is certainly a valid type of reuse, and it is the main view that has been “sold” by ADL as a driving factor behind SCORM. But there are other types of reuse that are far more common in today’s world. For instance, SCORM enables courses to be reused as a whole and across entities. Content vendors (like SkillSoft) can produce a huge library of content that is sold to thousands of organizations. Because of SCORM (and other standards), SkillSoft only needs to produce one version of that content, and it can be used by many organizations without modification. This is reuse. This is what is happening in the industry. This is what has made SCORM the de facto industry standard for learning content. This is what SCORM is really good at. This is why SCORM is healthy, viable and far from dead.

My final disagreement is with the implicit assertion that metadata is the key technological enabler behind SCORM, and that since metadata isn’t necessary anymore, SCORM is dead. In my view, metadata is the least important aspect of SCORM considering the way SCORM is currently used by industry. Go ahead, take it away; it won’t affect things. As I previously argued, SCORM is about more than reuse and content discovery, so metadata’s irrelevance doesn’t kill SCORM. But let’s assume for a moment that SCORM is only about reuse and discovery of reusable learning objects in a central repository. In other words, let’s step into the view that Dr. Baggio espouses. Even in that world, SCORM adds a lot of value if metadata is replaced by intelligent searching. SCORM encourages content to be logically chunked, enabling reuse. SCORM removes technical dependencies, allowing content to be portable across different environments. And the interoperability that SCORM provides is still required when the discovered content is actually used.
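
As a rough illustration of that chunking, here is a hypothetical TypeScript model of the structure a SCORM package’s manifest describes: a tree of items pointing at self-contained resources. The type names and the launchableChunks helper are my own shorthand for the manifest concepts, not the actual imsmanifest.xml schema.

```typescript
// Illustrative model of how a SCORM package chunks content: an organization
// is a tree of items, and each launchable item references a self-contained
// resource that carries its own files. Names here are shorthand, not the
// real XML schema.

interface Resource {
  identifier: string;
  scormType: "sco" | "asset"; // SCOs communicate with the LMS; assets do not
  href: string;               // launch file, relative to the package root
  files: string[];            // everything the chunk needs, carried with it
}

interface Item {
  title: string;
  resourceRef?: string;       // points at a Resource by identifier
  children?: Item[];          // nesting provides the logical chunking
}

interface CoursePackage {
  organization: Item[];       // the course structure any LMS can display
  resources: Resource[];      // self-contained, portable chunks
}

// Any conformant LMS can walk the organization and launch the referenced
// chunks, which is why a package built with one tool runs in another system.
function launchableChunks(pkg: CoursePackage): Resource[] {
  const refs = new Set<string>();
  const visit = (items: Item[]) =>
    items.forEach((item) => {
      if (item.resourceRef) refs.add(item.resourceRef);
      if (item.children) visit(item.children);
    });
  visit(pkg.organization);
  return pkg.resources.filter((r) => refs.has(r.identifier) && r.scormType === "sco");
}
```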

I don’t mean to disparage Dr. Baggio’s remarks, because I completely understand where her opinions come from. SCORM was sold to the public to do precisely what she envisions. When SCORM was first released, all the hype was about reusable learning objects pulled from central repositories. Metadata and reuse were all the rage. That vision is still valid, but unfortunately it hasn’t emerged as we had hoped. (I believe the obstacles standing in the way of that vision are related more to business models and constraints than to technological deficiencies, but that’s another blog post.) Along the way, though, the world found that SCORM was very useful in a world without central repositories. SCORM is VERY widely adopted and very widely used. That adoption has happened voluntarily, driven by industry. Broad industry adoption simply does not happen unless there is significant value in a technology.

Dr. Baggio made several other comments that I disagree with, such as “SCORM from the beginning never really worked really well”, “if SCORM is going to work in your organization, you have to go into SCORM and customize it to meet your needs”, “the need for SCORM has certainly gone away” and “if you look at the research about who uses SCORM, it’s not being used”. I disagree with all of these statements, but they do follow logically from her original premise. Since I’ve already argued against the root premise, I don’t think these comments need to be addressed.

I do agree with a few other things she said, such as “since SCORM was invented, we have better technologies” and “SCORM might morph into something else”. I absolutely agree with these assertions. SCORM is about 10 years old and due for some updating. I am very involved in the effort to define what SCORM 2.0 will look like. A new organization called LETSI has taken on that monumental task and their efforts have garnered a lot of support.

So, in short, I think that SCORM is anything but dead. A while back, we bet the company on SCORM’s future. Today I am much more inclined to double down than to walk away.


  • philip

    well said, i agree with you 100%, especially regarding SCORM being more popular as an interoperability standard as opposed to a reuse standard. the latter is a great concept, but the former is the reason SCORM has been adopted by learning organizations across the globe.

moving forward, many SCORM enthusiasts (myself included) would LOVE to drop some of the metadata aspects of SCORM, as they aren’t always useful and add unwanted complexity.

  • Brian Caudill

I also agree: metadata is not even required by SCORM and is not a big part of the “conformance”. It is still a DoD requirement, but our private sector customers only provide course-level metadata that is used to give the student an abstract of what the course is about. SCORM is alive and well and required by just about every e-learning contract that is let. Also, given that SCORM has been widely adopted around the world by LMS vendors and e-learning developers, the demand for a newer, better SCORM will only grow. SCORM is not dead; it is growing deep roots in the community and continuing to thrive.

    So when you say:
    “So, in short, I think that SCORM is anything but dead. A while back, we bet the company on SCORM’s future. Today I am much more inclined to double down than to walk away.”

    I will be right there with you on that bet and our chips are all in!

    —Brian Caudill
    JCA Solutions

  • Damon Regan

Great post, Mike. Certainly interoperability is the primary value. However, I think the reuse value is still possible with a different approach. Robby and I proposed a focus on content packages rather than SCOs in our recent I/ITSEC talk. So much of the focus has historically been on the object reference model, yet the focus rightly belongs on sharable content.

SCORM doesn’t require metadata, as Brian stated. Unfortunately, the perception is that it does, based on the strange use of application profiles in SCORM 1.2. I was glad SCORM 2004 decided to remove any notion of mandatory metadata. What is mandatory rightly belongs within an organization’s approach to cataloging.

  • Mike Rustici

    Damon, I thought you and Robby might have something to say about that post! Your proposal to shift the focus of reuse up to the package level rather than to the SCO level makes a lot of sense in today’s world. Maybe once package level reuse is more widely adopted, the framework for SCO level reuse won’t be much of a stretch.

  • David Ells

    It seems that SCORM metadata has all the features to deliver the “sold as” intention of aggregating smaller self-contained learning objects into specialized (i.e. dynamic) “courses” or “books” or even something more interesting, like an aggregation that changes over time as you progress towards different paths. (I read a lot of “Choose Your Own Adventure” books as a kid. Plus, it still *sounds* like a really cool idea.)

It’s just that the interoperability provided by SCORM has taken priority because it is immediately valuable to the eLearning industry. From the perspective of content providers and companies maintaining a large active LMS, it is a godsend to have one single standard point of interaction with the other side. Just this idea alone, in a world where it was previously absent, is enough to provide new opportunities for both providers and LMS holders. Suddenly, for the same amount of work, a content provider has no technical barriers in the way of selling to potentially hundreds more (LMS) customers. It’s the economy of interoperability, and business-wise it’s a no-brainer.

I would go beyond the statement that “SCORM is not dead” to say that “the potential usefulness of SCORM metadata is not dead”. But unless and until the active commercial industry using SCORM moves towards the concepts pushed by ADL (i.e. content repositories and aggregation of sharable learning objects), I think it will be left up to a more non-commercial, community effort to explore (and exploit) the usefulness of SCORM in that context, and there metadata would play a key role. I’d be surprised if there were not a number of universities conducting research along these lines, that is, studying dynamic learning aggregations and comparing them to traditional static aggregations like textbooks. SCORM could be an interesting tool in that type of research, but only if it were readily available. At any rate, I think that the increasing availability of SCORM software to the non-commercial community would reveal several other uses of the standard beyond the extremely beneficial use of interoperability between content providers and LMS holders. But SCORM may never need to fulfill ADL’s original “dream” for it, and as the blog post noted, it can live side by side with any number of other content discovery mechanisms.

  • Mike Rustici

David, you hit the nail on the head in identifying adaptive learning paths as having huge potential in the educational sector. Unfortunately, however, the higher education and K-12 spaces have seen rather laggard adoption of SCORM. This is largely because SCORM is a bit rigid for these environments. Education (as opposed to training) encourages lots of free exploration; training is a bit more structured. In its current implementation, SCORM is oriented more towards training than education. Conceptually, it should be useful in both contexts, but the actual details of the current implementation seem to have gotten in the way. Fortunately, the SCORM 2.0 effort underway in LETSI is being developed under the umbrella of “learning, education and training”.