Interview with Ingo Dahn, CEO IWM Koblenz

Key Points:

  • LMSs don’t always implement a standard fully, and content may not be tested; tools should explicitly describe the profile of the standard which they fully implement, and be conformance tested.
  • The Common Cartridge approach makes sense: a core required profile plus optional features. Extension points are also important, and should have been included in CC.
  • Tools supporting the use of standards should be a key concern for standards organizations
  • Publishers want to render their content their way, not have it rendered by someone else.
  • Enough information should be available to enable content-driven sequencing.
  • In blended learning, teachers want to disaggregate content, control its sequencing and only use parts of it.
  • LMS becomes less important as a front-end, becomes a service provider.
  • Need integration with Facebook, Google, other LMSs, etc. to support lifelong learning.
  • Tracking isn’t as relevant (in EU), or even allowed due to data protection rules. Certification will be on the basis of a test, not tracking.
  • Scalable complexity

Can we start with a brief description of you and your involvement in e-learning standards and SCORM in particular?

First of all, I’m running the central e-learning unit of the university, so we have been looking at standards for seven or eight years, and following standards is an important part of how we select products. We think the most valuable thing we have is the content that our staff brings into the systems, and we don’t want lock-in. We want the freedom to change systems, though we won’t do that light-heartedly; it’s always a big effort. The other thing is that we are involved in a number of service projects in particular, also in Europe, in research and development projects, where we looked at serving broader communities with e-learning products and technology, and that made us look at standards as early as 2001 and 2002, starting with IMS Content Packaging and then diving deeper. A push came in 2004 when we joined the European project TELCERT, which was about developing generic testing technology. The philosophy of TELCERT, which we later implemented, was that you start with a specification based on one or more XML schemas, and then you have application profiles which modify the schema to the needs of particular communities. Then we wanted to automatically generate conformance-testing systems for these profiles, since we found that a number of communities are interested in adapting specifications to their needs, which is what they mostly do with specifications, but they don’t have the resources to develop conformance-testing technology. So we wanted to facilitate that. The work was completed after the TELCERT project, but the basics were laid there: in the TELCERT project we developed a tool for capturing domain profiles in a machine-readable format, and later developed this into a generic technology to automatically generate conformance test systems for application and domain profiles.
For some time, this was the IMS standard testing technology, before IMS moved last year to a new development of online test systems. Satisfying the needs of further conformance-testing projects, for example in the field of e-procurement, we are extending our testing capabilities beyond what can be done with Schema and Schematron tests.
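The TELCERT idea described here, treating an application profile as a machine-checkable restriction of a base specification, can be illustrated with a minimal sketch. This is not the actual TELCERT tooling: the namespace is the IMS Content Packaging one, but the profile paths, the tiny manifest, and the function name are invented for illustration, and a real checker would use full Schema and Schematron validation rather than simple element-presence tests.

```python
# Minimal sketch of profile-based conformance checking (illustrative only,
# not TELCERT's generated test systems): an application profile is modeled
# as a set of element paths that a conformant manifest must contain.
import xml.etree.ElementTree as ET

MANIFEST = """<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <metadata><schema>IMS Content</schema></metadata>
  <organizations><organization identifier="org1"/></organizations>
  <resources/>
</manifest>"""

NS = {"cp": "http://www.imsglobal.org/xsd/imscp_v1p1"}

# A hypothetical "core profile" expressed as required element paths.
CORE_PROFILE = ["cp:metadata/cp:schema", "cp:organizations", "cp:resources"]

def check_profile(xml_text, required_paths):
    """Return the list of required paths missing from the document."""
    root = ET.fromstring(xml_text)
    return [p for p in required_paths if root.find(p, NS) is None]

missing = check_profile(MANIFEST, CORE_PROFILE)
print("conformant" if not missing else f"missing: {missing}")
```

A profile captured in such a machine-readable form is what makes it feasible to generate a test system for each community automatically, instead of hand-writing one per profile.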

Since around 2005, maybe, I became involved in the IMS work, in the Common Cartridge specification, first as an invited expert, and later we became a member, and an IMS Global Academic Affiliate. I was involved in the development of the Common Cartridge specification 1.0, mostly in providing the group with testing technology, which evolved as the specification was developed, so that always the current examples could be tested.

I am also one of the authors of the IMS Application Profiling Guidelines and of the recent IMS LODE specification.

Recently, we had another case, a European best-practice network project, ASPECT, which finished just a few days ago. It had a number of providers of educational content, European Schoolnet as coordinator, and also involved educational ministries and teachers. This was about promoting the use of standards, in particular for metadata and metadata harvesting, but also for content packaging with SCORM and Common Cartridge. The work of our team was mostly concentrated on Common Cartridge, while a lot of SCORM support was given by Dan Rehak. So we worked together with Dan in one work package, and we were responsible for that work package. In that project, we explained to the content providers, to the ministries and also to the teachers that were involved what the essential content of these standards is, and where the differences and commonalities between them lie. We also gave them support in developing and testing SCORM packages, and in developing Common Cartridges and using the particular features provided by the Common Cartridge specification for their didactic scenarios. Working with material which the content providers had in stock gave a lot of insight into what content providers wanted, what problems they had, and what they preferred with their particular business models.

That was the ASPECT project, and there’s another project we were in where standards are relevant. This is the ROLE project, which aims to explore the potential of widget-based learning environments, which use just back-end services, and where widgets communicate with each other and can be configured for a particular learning situation. I think that’s a rough overview of our involvement with standards and specifications. Besides that, I had some correspondence with people in the Dublin Core community on metadata, and I was guest editor of an issue of the journal Technology, Instruction, Cognition and Learning which explains the role and status of various standards in the learning domain, also to non-technical experts.

Some of the questions I typically ask have been geared more towards those directly trying to accomplish specific learning goals with the standards and what trouble they’ve run into… Do you have anything you’ve run into working with SCORM, or that others have run into, where you think there are some limitations or things missing?

What we observed, also from others in the ASPECT project, is that SCORM is mostly used just as a content packaging tool, and that people don’t have the resources to dive deeper. That was what we found in the ASPECT project: some of the content providers had SCORM packages, but when we looked at them, it was just a collection of content and a structure for that content, and it didn’t use something like simple sequencing.

A main problem that always affects interoperability is that sometimes a standard isn’t implemented fully, so you have just a certain part of it implemented, and, for a customer, it’s hard to find out which part it is, so you can’t be sure that if you acquire something, it really works. Also, we found that things are not always conformant. We observe, and noted from interviews with content providers that we conducted, that conformance testing is considered an important issue by the commercial publishers, but many of the people producing open-source products don’t, in fact, test, or invest in setting up a test suite or doing a conformance test.

There are also issues with the usability of the ADL SCORM test suite, but we could help others to test their content using the ADL suite. Non-technical people sometimes have difficulty interpreting the test messages and correcting even simple errors like a typo in a namespace. Some of those errors can simply block interoperability, or block the necessary more in-depth testing. So some kind of service, or support for people to validate their material and to solve actual interoperability problems, would be really helpful, I think.

So, two facets to that: you were running into interoperability problems where the implementation on the LMS side wasn’t as it should be, and also some cases where the content wasn’t done as it should be. And then, also, places where people had tried to use the test suite, but the results were too confusing. So, what could be done to address those sorts of things is more rigorous testing, and also the results of testing should give a non-technical user a better sense of what to do.

That is all correct, but I think the key to improving the situation is to have tools that really implement the standard fully, and this applies to any standard, not just SCORM. And which are really conformance-tested, so the end-user doesn’t have to bother with the difficulties of conformance or interoperability. I think it’s better to have a restricted standard which guarantees interoperability than to have a broad one that is only partially implemented, and where customers can’t rely on things interoperating, which is what all of this is ultimately after.

And would you include in that, in a broad sense, just having a smaller standard that guaranteed things would work, and not really adding any set of features that were in any way optional?

Not quite. I think the approach which IMS took for Common Cartridge makes a lot of sense: to have a broad standard, like Content Packaging or QTI, and then have a core profile, which contains the things everyone must implement if they want to claim conformance. But to have interoperability, you need full conformance on both the content side and the LMS side, so the content should only use those elements that are specified, and the LMS should fully implement those elements. I am not very happy that IMS has recently relaxed those requirements by making some elements optional instead of removing them from the core profile if they were not commonly used. I think a place where IMS was too restrictive with Common Cartridge is in excluding all extension points. It might be good, to encourage development, to allow people to add extension points which are at least tolerated, so that people can experiment, broaden things out, and develop richer profiles which can then feed back into the next version of the core profile. So having a broad specification with a lot of optional elements on one side is okay and helps development, but on the other hand, you should have a core set of elements which must be fully implemented and guaranteed. Then an author of content could say: this package is confined to the core, or it has these additional features, so it in fact conforms to another profile. And then there could be some kind of profile repository behind this, so you can easily track, if I have one system with a certain read profile and another system with a certain write profile which has produced the courses, whether they interoperate, so that the receiving system can read everything that the writing system has produced. In any case, there should be a possibility to strip off the additional features, so that you have at least a core of the content which is guaranteed to interoperate.
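The "strip off the additional features" idea can be sketched as a small transformation that keeps only core elements. The element names here are illustrative, not the actual Common Cartridge core set, and a real tool would also have to rewrite references to the removed elements.

```python
# Hypothetical sketch of reducing a package manifest to a guaranteed-
# interoperable core: any top-level element outside the core set is removed.
import xml.etree.ElementTree as ET

CORE_TAGS = {"metadata", "organizations", "resources"}  # illustrative core

def strip_to_core(xml_text):
    """Drop top-level manifest elements that are not in the core profile."""
    root = ET.fromstring(xml_text)
    for child in list(root):
        # ElementTree tags carry any namespace as "{uri}localname".
        local = child.tag.split("}")[-1]
        if local not in CORE_TAGS:
            root.remove(child)
    return ET.tostring(root, encoding="unicode")

doc = "<manifest><metadata/><resources/><vendorExtension/></manifest>"
stripped = strip_to_core(doc)  # vendorExtension is dropped, core survives
```

A receiving system that only implements the core could apply such a reduction on import, so that extended packages still yield at least the interoperable core content.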

Besides issues concerning the structure of a standard, we increasingly have to be aware of problems related to the development of standards and profiles. These become more important as core profiles are tailored to support changing current practice, implying that they are bound to be changed more frequently as well. One of those problems is backward compatibility. We have experienced this problem in particular with the introduction of SCORM 2004 and with QTI 2. It will be interesting to observe how missing backward compatibility will affect the transition from Common Cartridge version 1.0 to version 1.1. Certainly there are a few cases where backward compatibility cannot be maintained. I just wish that in these cases the standards organizations would also take on the obligation to aid the community with tools that automate the transition from one version to another as much as possible.

Since you’ve talked both of optional features and extension points, are you more in favor of one or the other, or do you think there is need for both of them?

Yes, I think there is a need for both. Optional elements are for situations that have been foreseen in the specification process. They may not be needed by all, but if they are used, then they will be used in the same, foreseen, way. But you always have some features which you couldn’t foresee in the specification process, which are perhaps relevant just for a niche, and that can be handled with extension points. And I think it’s not too difficult to implement those points in a way that they are at least tolerated. That would give people the option to add the features which they need for their particular niche. We find that in some cases you have very close vendor-producer relations. In particular, there are some regions in Europe where the ministry prescribes for the schools the use of certain systems, and the vendors produce content particularly for that system. Those systems have some particular features, and it’s important for those vendors to find some way to serve those without violating the standards. So if they have a possibility to extend for their particular need, I think that’s a feature that is really useful.

Extension points are also an important feature for keeping specifications modular and for building domain profiles, i.e. combinations of different application profiles for a particular domain. For example, the IMS Content Packaging specification uses an extension point so that metadata can be included in content packages using another organization’s specification, like IEEE LOM.

What sort of features are those that you are seeing vendors put in that had to be done in a proprietary manner? Is that all extra data being passed through, or what uses are those features being put to?

I know that this is mostly in the field of assessments: they develop their own types of assessments, and sometimes they have their own tools for developing assessments. Content providers in particular, if they invest a lot into developing those, with a lot of graphics capabilities and nice-looking features, are reluctant to have that stripped down to a standard form which is then rendered by someone else. So they want to bring over their particular way of rendering and presenting that part of the content, and also of communicating with the LMS. That’s the area where I note most of this. Publishers say: well, we are proud of our content, and if the standard can’t fully support that, then we would be forced to deliver a quality we don’t like.

Beside this reason to deliver better quality in a non-standard way, which, I think, is OK if the user is informed about the nonconformance, sometimes proprietary formats are used unnecessarily because users find existing standards too complex. Only too often the reaction to this perceived complexity is the development of a proprietary specification which breaks interoperability. This is a problem for all standards describing a sufficiently complex domain. Is the transition to smaller or more modular specifications a possible solution? I don’t think so: understanding a hundred small specifications and their interrelations isn’t going to be easier than understanding one big specification. I’d rather expect the provision of easy-to-use tools supporting the use of a specification to be the key, followed by the provision of core profiles supporting common practice.

Maybe assessment content could even pass a whole report about the results back to the system? Does it seem like that would get around the problem of proprietary assessment formats, or does it seem like they’re doing something else?

No, I think that would solve the problem. If you’d just give them some kind of window where they could produce the content, and it can be launched with the respective context information and can pass back information in a generic way, then I can imagine you would have some types of information that frequently occur and that are usually stored in the LMS database, and the external systems could communicate these particular data back to the LMS.

For example, one point where this is important: SCORM had the simple sequencing feature, but I think most of our staff are more used to doing sequencing themselves. So it’s important to have all the respective information back, so that they can work with it. That has to go into particular fields in the database, or in the grade book in particular. The IMS LTI specification is a promising attempt to standardize communication between LMSs and third-party systems in a generic way.

So with the specific information going back, you mean the LMS needs to make available enough information to the content for it to be able to sequence itself? And are you talking about content essentially doing its own sequencing across multiple SCOs or packages, or basically where everything is wrapped into one big unit, one big SCO, which is then sequenced internally?

That’s in fact an even broader problem. What we found in our blended learning context, working with teachers, is that they very much like the feature of Common Cartridge that they can disaggregate content. I know this is a double-edged sword, and some of the authors and content providers don’t like that, but it’s something very much appreciated by the teachers. What they mostly want, when they want to do sequencing in their course, is to have their own material. They have their tests, which could be done in external content. They get the test results back, and then they would like to branch to their own material or to a particular place in an external content package. So they have more of an outside view on the use of the external material in their course. They would like to determine how to embed it.

Do they want to see what has happened with that external material in the past, or are they making these branching decisions, deciding to branch to external material based on what’s happening with internal material?

Yeah, the material might be external, but it may also be internal, some content which they brought in themselves. For example, they could let an external system do some tests, and when they get a detailed test result back, they might do their own analysis and then provide some helping instructions particularly for their students, which then refer back to the external material. So this kind of combination and interoperability with this external material is something that is really interesting.

Right now, that sort of thing is typically set up within the LMS, which then manages the different portions and bits of material that might have come from one content package?

Yeah, and maybe in some situations it’s not that important that you can disaggregate a content package; the important thing is that you can refer to a particular point in that content package and maybe just hide the rest of it.

So the LMS really is going to provide the feature, or should be the system providing the feature, for the instructor to set up those sorts of rules. But what needs to happen from the standards side is to provide enough information, enough communication from the content to the LMS, plus that hook you’re mentioning, to link into and launch a specific portion of your content. Is that correct?

I would prefer it if the LMS provides the tools to direct the learner to specific items, as this is the place where the instructor designs the sequence of activities, and as these activities may involve different auxiliary systems from different vendors. As a result of an interaction, the external system should provide a set of defined name-value pairs, and the LMS should provide a possibility to calculate from these values other values, which are then stored in the course database to describe the status of a particular learner. Some other feature we note as being important is the possibility for the external tool, if it’s interactive, to resume a session. So somewhere, either in the LMS or on the tool’s server, the state of the interaction between the student and the content needs to be saved.
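The name-value exchange described here can be sketched in a few lines. The pair names (`raw_score`, `max_score`, `attempt`) and the derivation rule are hypothetical, not taken from any IMS or SCORM specification; the point is only that the LMS computes its own stored status values from whatever defined pairs the external tool reports.

```python
# Sketch of an LMS-side rule that derives learner-status values from the
# name-value pairs a hypothetical external tool reports after an interaction.
def derive_status(outcomes, pass_mark=0.6):
    """Map a tool's raw result pairs to values stored in the course database."""
    score = outcomes["raw_score"] / outcomes["max_score"]
    return {
        "normalized_score": round(score, 2),
        "passed": score >= pass_mark,
        "attempt": outcomes.get("attempt", 1),
    }

# A result set as an external assessment tool might return it.
result = {"raw_score": 7, "max_score": 10, "attempt": 2}
status = derive_status(result)  # stored values drive grade book and branching
```

The derived values are what the instructor’s branching rules would then operate on, without the LMS needing to understand the tool’s internal format.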

In SCORM right now, that’s handled with location information, also called a bookmark, and then also suspend data. Does that seem adequate, or does there maybe need to be a larger size there, or more structure?

If you have a complex thing like a simulation, and the student has been working with part of it, then it would be useful on relaunching the simulation to just jump back to that situation and continue. So it could be just a binary object which is passed back from the content to the LMS for storage, so that on the next launch, the next time the student wants to access that object, it simply goes right back to that state. The SCORM 2004 suspend/resume feature is doing that.
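The suspend/resume round trip can be sketched as an opaque serialization that the content hands to the LMS and gets back unchanged on the next launch, much as SCORM’s suspend data is used. The state fields here are invented for illustration, and real suspend data is subject to a size limit this sketch ignores.

```python
# Minimal sketch of a suspend/resume round trip: the content serializes its
# state to an opaque transport-safe string, the LMS stores it, and the next
# launch restores exactly that state. State fields are illustrative.
import base64
import json

def suspend(state):
    """Pack arbitrary JSON-serializable state into an opaque string."""
    return base64.b64encode(json.dumps(state).encode()).decode()

def resume(blob):
    """Restore the state object from the stored string."""
    return json.loads(base64.b64decode(blob).decode())

sim_state = {"scene": 4, "variables": {"pressure": 2.5}, "elapsed_s": 310}
blob = suspend(sim_state)          # what the content hands to the LMS
restored = resume(blob)            # next launch continues where it stopped
```

From the LMS’s point of view the blob is just a binary object; only the content itself needs to understand its structure, which is exactly the division of labor described above.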

So it could be handled by suspend data, as long as it could accept the object, potentially quite a large one. I guess beyond the trouble you’re already aware of people running into, what can you foresee people trying to do down the road, even five or ten years from now, that the standards as they are won’t be sufficient to help with?

In the longer-term future, we need a much better integration of the LMS back-end services into the whole world of tools which people use for learning: maybe Google, Google Maps, Facebook for managing their communities, etc. The LMSs become less important as a front-end; they just provide services. But we have this problem of lifelong learning: currently, when a student learns in an institution, he works with a particular LMS; he’s forced to learn with that. And he doesn’t have a chance to set up his own learning environment which he can use lifelong. What they in fact do in their learning environment is work with different software that doesn’t communicate with each other: Facebook, Word, some portfolio system, or several LMSs if they work with institutions over time. I think we need an integration on a much broader scale, which is open to integrating virtually any kind of social system or content format.

And part of what you’re saying is that the goal would be to allow learners to keep their profile or portfolio so that, regardless of which system or organization they are experiencing learning through, they have their history with them?

We envisage that a learner might have his intellectual assets in a variety of databases, preferably in his own system or at some data provider in which he has trust. But there are also other artifacts he has produced, or certificates, which are on other systems, like university certificates. Or when he is in a working environment, he may produce something for his company which is under the control of the company, and he can give others access to that only to the extent to which the company permits it, using the services which the company permits. So he’ll not always have full control, but referring back to what he did in earlier learning, as long as he has a right to do that, might be very helpful for really lifelong learning. As an illustration: a university might provide an electronic certificate on its server for the learner, but the learner may decide who can access it. In another case, the portfolio of a learner may contain a reference to some document on a company server, but the company may restrict access to this document to other company employees.

Back to Facebook: if someone is having a learning experience through Facebook, or some of these other media, do you see a need for that information to be tracked somehow, like in another LMS, in terms of getting credit for something, that kind of tracking? Or is that not relevant in some of these social learning situations?

Tracking usually isn’t very relevant here, and that’s in part due to the data-protection rules which we have here. In most cases it’s even forbidden, unless it’s really needed for educational purposes, which in most situations is not the case. When you really want to certify someone, you don’t do it on the basis of tracking data, but on what he has produced in some kind of examination: some oral or written examination, or online tests, etc., but not on the basis of tracking.

We have the situation here that if a teacher wants to have tracking, which is possible in our LMS, then he has to inform the students that he’s doing tracking and has to explain why he is doing it. And we have found that tracking is very seldom used here, at least in the higher-education and school sector.

So if tracking isn’t used for certification, then what you’re recording data for is different: you’re still potentially recording data to determine where someone should go next, the branching we were talking about earlier, but not for certification purposes. Is that right: it’s not that there isn’t tracking, just not for certification?

We do some tracking on the administration side to check on the functioning of the system, but that’s not really released, even to the teacher, and it also has to be deleted as soon as possible. But the kind of tracking which we need for branching is rather just recording test results. And the point is that this, in a legal sense, has a clearly defined purpose. For example, we have a SCORM course for people who work with patients, and there are some videos in there which should only be seen by people who have visited another site before. In that case, tracking the visit to that site has a clear purpose, and we can inform people, and they will definitely be willing to accept this. This is a rather rare situation, but it occurs. It provides a purpose for tracking, not just tracking without a purpose.

Is there anything else you wanted to bring up you haven’t had a chance to?

Not really. I think it would be very helpful if we could get an alignment of the various standards and avoid competition between them. That really confuses people.

I think it’s important to support the things that most people want to do, and on the other hand to be flexible and extensible, and allow it to cover a wide range as well.

Scalable complexity.

Yes!