Key Points:
- has a rich project management simulation built without SCORM; supporting it under SCORM requires a richer persistence model
- persistence of detailed state
- 4th edition persistence works, crudely, but LMS support for 4th edition was missing
- interaction model insufficient for a project management simulation
- we (the content author) provide the report on interactions; a generic view would not be useful, so there should be a standard way to do that
- need a way to incrementally update content in the LMS
- JavaScript API should not be needed
- would be nice to be able to include server-side scripting such as PHP, and have it executed on the LMS web server
- interoperability still needs work, particularly in suspend/resume/exit workflow (as of 2nd edition)
Can you start out by briefly describing what your involvement in e-learning has been?
I have been in the software industry for about thirty years, started off with mainframe databases, went through the mini-computer thing, the PC thing, and the internet thing. Spent a lot of time in applications development, and then was doing sort of system-level software, most recently, in the past ten years or so. A couple years ago my wife and I decided to form our own company; I had typically done the start-up thing. I live in the Boston area, starting up there, opening the doors on day one, building it up for a few years, and then leaving at some point when it starts to get too big or gets acquired. Working for companies anywhere from eight years to two years, depending on the individual instance.
Anyway, my wife and I decided to form our own company; we had two small children and wanted to stay close to home. She had been working as an adjunct professor at Boston University and was helping out with the distance learning program there, specifically focused on project management. That's what her background primarily has been. And she was looking at some of the tools they were using, particularly a project management simulation that was pretty crude. And she thought it seemed a bit lacking, and not a very useful educational tool. So she probed into it a little bit and found out that B.U., like a lot of colleges, had spent a lot of time taking their classroom experience and simply getting it to be online, for all the obvious reasons. And they had not really done a whole lot with taking advantage of the new capabilities of having online courses, and the simulation was just something they had found and thrown into the course. So she talked to me about potentially meeting with the B.U. people and probing them about exactly what type of thing might be useful to them in the future, sort of a next generation. And that's what we did; we did this back in 2007, and spent 2008 building our first simulation. We used them as a design partner and initial deployment partner, but the intent wasn't strictly to build it for BU nor just for universities; we also wanted to target PMs directly, eventually. 2009 was when we had our first deployment there, and we engaged with the university environment during 2009 (university customers, I should say). And then in 2010, we began an affiliation with the Project Management Institute, sort of the keeper of that standard, and have been doing custom work for them in addition to what we do directly for universities and also directly with PMs. The thing that we built originally, we didn't have any constraints on it. Our intent was to host it ourselves; we weren't selling it to anybody else or building it for anybody else, per se, where we would hand over the content to them. So I just built our own platform and did that because there didn't seem to be a viable simulation-oriented tool I could use, and I figured I would have no constraints, that whatever I needed to do, I could do it.
The SCORM thing only became relevant when we got deeper into our relationship with PMI, when they wanted us to do some custom work for them where they would own the content. And one of their requirements was that it would be SCORM compliant. And we have built courses that are similar visually to what we did on our own, with our own background platform, but run with one of their licensed LMSs. They also needed it to be stand-alone. The e-learning group is pretty much isolated from the IT group at PMI and gets virtually no support. So they needed something where you just say, here's the zip file, I can plug this baby in, and it still works. No back-end support within the PMI environment or anything.
So with that constraint, I dove into the ADL standard, ran through the specs (and with my software background, I'm used to dealing with specs and various constraints), and did some poking around. I think I've even engaged with people at your company before; I don't know if it was you or not, but sending emails, asking general questions. And I sort of formed my own impression of what SCORM 2004 4th edition had to offer. As it came to pass, when we were looking at implementation with PMI, they were on one LMS and they were switching over to another. But in both cases, they didn't even support SCORM 2004 4th edition. So, while I only gave it a casual inspection, I did the best I could reading through a boring spec, and I was kind of disappointed in what I found in SCORM 2004 4th edition. And then to be restricted beyond that to the 2nd edition, from a couple of years before, I thought, Boy, this is not a whole lot of fun. So I have a list of things I would like to have, and that we were able to implement in our own platform, or take advantage of in our own platform, but with the constraint that it's got to be SCORM-compliant and it's got to be stand-alone. There are some very obvious areas where it would be nice if future generations of the standard were more flexible, to give people richer tools to work with.
Why don’t we start by going through your list, unless you have anything you want to ask about the project?
Just a disclaimer, that I'm by no means a SCORM expert. But I do have a little experience. I've worked with four LMSs, same chunk of content. It was a single SCO, so the contract just said it has to run under a SCORM-compliant LMS, but they didn't have anything deeper than that. So in our case, a single chunk of content: you come in, there's a tree you navigate through, there are various things you run into along the way, and then you come out the other side. There are various assessments, ranging from simple questions the user is asked, to multiple choice, to drag-and-drop exercises, all sorts of things.
The main thing I ran into up front was the crude persistence model. So, my background is in databases, and when we built our own platform, we were able to build all sorts of rich tools to build rich simulations. And the user gets hit with a lot of stuff, and as they manipulate that stuff, we can store rich state data for what they've done and what they've chosen, and then of course we keep building off that, so it gets more complex the deeper you get into the simulation. Without going into a long explanation, I'm sure you have seen project management tools, so you've seen building schedules, and you have obviously worked with spreadsheets. We do a lot of stuff in our simulation that is comparable to that, so you're seeing reports and numbers flying all over the place, and schedules, and slippages and whatnot. And with a back-end database, it was pretty straightforward to persist that as required, and give each user a unique experience; it's not like there was a canned set of stuff they had to deal with and they just got exposed to it along the way, or relatively simplistic interactions where we would ask questions and they had to get them right, and we would answer, yes, you either matched it or didn't. Here, the user was working with web-based tools that were comparable to working with a spreadsheet or with a project management tool.
So in the state data you would have, it would essentially be what was the state of the spreadsheet or schedule?
So, a schedule in our simulation would have something like 165 tasks, which is actually relatively simplistic for a lot of schedules, but for a learning tool it's much richer than anything we ran across. So keeping track of that, and also keeping track of it across time, with a lot of external parameters that influence what goes on in the schedule. Date slippages, cost slippages, that kind of thing. And on the spreadsheet side, I don't know how much you know about PMI's project management standard, but they have this thing called earned-value management metrics, and it's a way to sort of quantify what's going on within a particular project and assign numbers to it so you can assess, are we doing well or not, versus the very simplistic: are we ahead of schedule, on schedule, behind schedule, etc. And so that's where the spreadsheet stuff came in. And with a complex set of tasks like that, covering multiple months, and all sorts of dollar parameters and day parameters that the user can provide to present a unique view on those earned-value management metrics, it's fundamentally a spreadsheet. So all that stuff, in a database, is pretty easy to keep track of. But looking at the ADL spec, the only thing I found for what I would call any sort of rich data persistence was, I forget the name, but you could basically say, "Here's a chunk of data, and I can put whatever I want in there." Obviously, I could put in chunks of XML and then parse that as required.
This is in the fourth edition. So when I looked at the spec, that’s what I was looking at, I didn’t even take a look at the prior versions, I just said, “How can I persist data?” And in the 4th edition, the only thing I found that seemed to be relevant was this chunk of data thing.
I got it to work. In my test cases, I just put together a little chunk of XML and said, "Listen, I want to stick this thing in and be able to get it out and be able to parse it." So once I could do that reliably, at least I had a crude tool for saving off rich data. Of course, you can organize your XML very similarly to how you would your data tables: an instance of the data based upon table type, and of course multiple entries within it, or you could nest it as much as you want if you really want to get complex. So with that in place, I did know how to persist stuff, even if I had to do a lot of wrapping to make it a little more straightforward to manipulate, like with the database, where it was very simple to do that type of stuff.
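A minimal sketch of that workaround, assuming a located SCORM 2004 runtime API object (commonly exposed as window.API_1484_11) and using the general-purpose cmi.suspend_data element as the "chunk of data"; the state shape is invented for illustration, and JSON is used here instead of XML purely for brevity:

```javascript
// Illustrative only: serialize rich simulation state into the generic
// suspend_data bucket and parse it back out on resume.
// Assumes the SCORM 2004 runtime API object has already been located
// (commonly exposed by the LMS frameset as window.API_1484_11).
var api = window.API_1484_11;

function saveSimulationState(state) {
  // The approach described above used XML; JSON is used here for brevity.
  api.SetValue("cmi.suspend_data", JSON.stringify(state));
  api.Commit("");
}

function loadSimulationState() {
  var raw = api.GetValue("cmi.suspend_data");
  return raw ? JSON.parse(raw) : null;
}

// Hypothetical state: a couple of schedule tasks plus EVM-style figures.
saveSimulationState({
  virtualDate: "2011-02-01",
  tasks: [{ id: 17, start: "2011-01-01", finish: "2011-01-10", cost: 1000 }],
  metrics: { plannedValue: 12000, earnedValue: 9500, actualCost: 11000 }
});
```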
I found that restrictive. It seems like it should be easier to deal with, and then of course I found out that the LMSs didn't support the 4th edition, so I had to go back to prior versions, and they didn't even have that. And I talked to several people, trying to find a way: Hey, I've got stuff to store, and it's just data that users manipulate, it's not an interaction per se, like for exercises and whatnot; I didn't find a lot of flexibility there. My number one complaint, even with the 4th edition, is the lack of an enhanced persistence model that allows people to do richer things than straightforward multiple choice and whatnot. The interaction model, when I got into that, was pretty straightforward for basic stuff and wasn't very difficult to deal with. But anything richer, like I had built on my own, I just didn't see an easy way to do it. And that's as a reasonably sophisticated user; somebody trying to come in and do that on their own with a lower level of sophistication couldn't do it at all. I don't have a magic answer for what I would like to see, but I do want to say the persistence model is horrible.
Yeah, so, knowing that you don't have an answer for exactly what you'd like to see, let me give you a binary choice: which would you say is better? To try to add more to the persistence model, make it more database-like? Or to be able to use your own persistence model in your content, essentially running your content on a content server, connecting it in, and having your content track back to the LMS without worrying about running within the constraints of the LMS?
Either one, or both, would be acceptable. If there was an easy way to get to a database, and there was an easy way for a content provider to say, "I've got a chunk of stuff, I've got a zip file, and I can put something in there so that when it gets inserted into the LMS, that data model gets included as well." In the case of just making it easier to do basic things, the data model seemed like it would be okay; sort of an extension of the data model right now, just make it easier for people to deal with. Because a raw chunk of XML is somewhat cumbersome for your average person to deal with. I mean, there are libraries, etc., that make it easier; but fundamentally, you say, I've got data type one, and data type two, and data type three, and you've got a bunch of instances of each of these things, and there might be relationships, you know, the standard things that a relational database does. You could probably wrap the data stuff in such a way that it would be easy for people to interact with. To say, I can define a persistence object within my manifest, however that's done, and I can create instances of it, and I can retrieve them, and I can maybe do some interesting queries. Some of the basic capabilities that a database has: insert, delete, update, poke around. It would just make it easier for people, and I think that would be powerful.
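As a purely illustrative sketch of the kind of wrapping described here (not a feature of any SCORM edition), a small record-store layer over the same generic data bucket could give insert/update/query semantics; the function names, record types, and storage choice below are all invented for the sketch:

```javascript
// Illustrative only: a tiny "record store" layered on the generic data
// bucket, giving insert/update/query semantics without a real database.
// Assumes a located SCORM 2004 API object (window.API_1484_11).
var api = window.API_1484_11;

function loadStore() {
  var raw = api.GetValue("cmi.suspend_data");
  return raw ? JSON.parse(raw) : {};
}

function saveStore(store) {
  api.SetValue("cmi.suspend_data", JSON.stringify(store));
  api.Commit("");
}

function insertRecord(type, record) {
  var store = loadStore();
  (store[type] = store[type] || []).push(record);
  saveStore(store);
}

function updateRecords(type, predicate, changes) {
  var store = loadStore();
  (store[type] || []).forEach(function (rec) {
    if (predicate(rec)) {
      Object.keys(changes).forEach(function (k) { rec[k] = changes[k]; });
    }
  });
  saveStore(store);
}

function queryRecords(type, predicate) {
  return (loadStore()[type] || []).filter(predicate);
}

// Example "table types" a schedule simulation might define.
insertRecord("task", { id: 17, start: "2011-01-01", cost: 1000 });
var expensiveTasks = queryRecords("task", function (t) { return t.cost > 500; });
```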
The database connection, I can see how that might be done as well; it's just that the LMS would have to have some way of segmenting it, just like it does when you stick in, you know, web content, so that stuff doesn't bump into each other. You have a way of saying, stuff is coming in, I'm going to put it somewhere in my directory, so if someone comes to get this particular entity, I know exactly what they're talking about and there's no conflict. You could do something similar in database land too, depending on what type of back-end database they might provide support for, or wrap. So to sum that up, one or both would be way better than what they have right now.
The more you describe accessing a database through the API, the more I'm liking the idea that, if you really want to do true database access, moving the content out of the LMS environment and then saying the content can connect back to the LMS is probably a better way for some of those things.
Yeah, I think that it's probably easier to just provide a conduit to a database, in an abstract way, so they don't know what's physically behind there, but it's got the same basic capabilities and it deals with the namespace issues. So that whatever is defined directly in or referenced by the manifest stays out of the way of anything else and provides some decent level of performance.
The SQL can be whatever makes the most sense. Right now, you can do that if you control your deployment environment. In my case, I'm a third party going into PMI and they are telling me: give us a zip file and that's all you get. That's very restrictive.
Now, would you have had the option, because I'm thinking about keeping the content off the LMS: would it have been an acceptable solution to say, Hey, the content is going to run on my server and it'll track back to your LMS?
That would solve the problem, but it wasn't an option. They wanted it to be self-contained. My constraints are atypical. They told me it was stand-alone, that's it. There was one exception to that: we used this avatar engine, so that you would get these talking heads within the simulations, and there's a company called Oddcast. They do tools for marketing, if you want to have a talking head on your site, but it can be used in several different contexts, including in simulations like I'm using it. So within my HTML pages, it will load this avatar, but it's a completely independent service, not required to be managed by the PMI IT staff. It's almost like if I was loading any sort of resource off the net, whether it was a chunk of data, or images, or videos, whatever. They don't really care about it. There's no visibility inside the manifest of that entity. It's directly in the HTML, and it runs fine as long as the user has a live internet connection, which of course is required; then everything works magically. But the idea of having a rich, customized back-end on my own, where the LMS could just point to things there, that wasn't an option for me. They didn't want me to host anything. Oddcast is sort of a public hosting thing, a hosting company.
Google Maps is a good example. Let's say you embedded Google Maps into an application. Somewhere along the way, your HTML pages do whatever is required, you've got an account set up and you authenticate, and the thing magically appears; it's very comparable to that.
So what I ended up doing is I dumbed down the user interactions; unlike the initial simulation that we built on our own platform, it had to be much more simplistic. The way we set it up, there was a right answer, you have a chance to provide it, and then based upon whatever you put in there, you get an assessment of how well you matched up against the right answer. But once you got past that interaction, in terms of using that data in future interactions, it was all static. We just used the right answer. The data was, in essence, hard-coded within the SCO. Which is much different than what we did in the other environment, where the user pumped in a bunch of stuff, there was no perfect right answer, and whatever they pumped in they would see later on. So you make a mistake, it has a ripple effect. You do something well, that has a positive ripple effect.
So that's definitely a big hole in the data model. Well, not entirely for 2004; there are these buckets in 2004. But at the very least, it would be nice if there were a bucket labeled "State," so you knew what you were supposed to be doing with it, and it was there to keep track of the state of the environment the learner was experiencing, something that was not an interaction, was not a correct answer, or anything like that.
The interactions data structure was inadequate; it was too simplistic. I dumbed down my interactions so they fit into it, and it worked fine. But for what I would consider much richer interactions, it just wasn't going to happen. The data capability would work, but it would be a lot of hard work.
So, with the state stored, were the interactions still inadequate? Assuming you had been able to store at each step, or am I making up this separation? It sounds like you have this data about the current state of the documents that the user is working with, but then you might want, separately for your interactions, every time they modify that state, to record the modification that they make and whether it was right or wrong.
Right. The state: to throw curve balls at the user, you want to manipulate their state behind the scenes. For example, with a schedule task: you say that this thing starts January first, ends January tenth, and costs $1000. The user was given a set of options for what they want to do with that particular task. So virtual time is going by in the simulation and you say, "Okay, well, we're on January third virtual time and let's say a critical resource gets hit by a bus. Well, you're going to have to replace it and there's going to be a schedule delay, and you're going to have to pay more money." So you take that task that the user specified and, without any direct interaction on their part, update it so the state has now changed. So by default, this thing is not going to finish on the tenth, it's going to finish on January twenty-fourth, and it's going to cost $5000, not $1000. And the user will go back in and make some other adjustments that reference back to that piece of data. With state data, I just want to get to it; I don't want to have it tied directly to interactions per se. Again, with a database, it's very straightforward to do things like that.
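As a small illustration of that kind of behind-the-scenes state change (the task shape and the event function below are invented for this sketch, not taken from the actual simulation):

```javascript
// Illustrative only: a simulation event mutates task state behind the
// scenes, and later user adjustments work against the updated values.
var task = { id: 17, start: "2011-01-01", finish: "2011-01-10", cost: 1000 };

// "Critical resource gets hit by a bus" on virtual January 3rd:
// the finish date slips and the cost jumps.
function applyResourceLossEvent(t) {
  t.finish = "2011-01-24";
  t.cost = 5000;
}

applyResourceLossEvent(task);
// Any later interaction referencing this task now sees the slipped schedule
// and the higher cost, so mistakes (or good choices) ripple forward.
```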
But what I want to get at is: assuming you had plenty of other storage for your state data and you didn't have to try to shoehorn that into interactions, are the interactions themselves still inadequate for the sort of interactions you actually want to record, and if so, what items that aren't there would you want to go in there?
If I have the rich state data model, then the interactions are less relevant. For some basic things, I might take advantage of them. But if what I really care about is the state data, that's where I would really look; that's the record of interactions, whatever goal you might have the user working towards. In our simulation, the user's managing the schedule on a budget. They get a first shot at it, they get some initialized parameters and various constraints, they say, "Okay, here's what I want to do," and then time starts ticking. And as time ticks, they encounter new problems or things happen behind the scenes they have to react to. And so that state data changes over time. And it's time-sensitive, so there's state data on the virtual date of February first versus state data on the virtual date of March first, and in a database I can keep track of those states very easily. If there was a richer state model inside SCORM, as long as I could do the same thing, the interactions are almost irrelevant. I don't really care about those per se; I just go into my state data and that's what I'm going to reflect to the user about how they're doing within the content.
To sum that up, the interaction thing is okay. It’s simple, it handles simple cases and that’s fine. But with this richer state data thing, there are cases I wouldn’t use the interactions.
Yeah, I guess I was thinking of it in terms of being able to report out to the LMS that the user did some interaction. That reporting really comes more from the objectives — did they do this successfully, did they not?
Right, so if you want to have some sort of standardized reporting back on how the user has done, then there may be some challenges there. I never quite got into that. In our own simulation, we don't have anything directly comparable. As I said, there is no perfect answer; you go through the whole thing, you sort of look at the state data at the end of the virtual date cycle and you can assess it then. If your schedule came in ten days late or less, then that's good; if your schedule came in ten to twenty days late, that's mediocre, and so on.
And that assessment is done automatically?
Yes, in our stand-alone thing, we do that automatically. In the LMS thing, we use interactions, but we have our own exposure of those interactions. The user doesn't get any score, or we don't worry about it; they're either completed or not as far as the LMS is concerned. Within the interactions, we're pumping in information as they go through, and we developed our own interface to those interactions where we pluck out that data and show it to the user in an interesting way. So they can say, "Hey, I had 15 interactions," and then they can go back and see what each interaction was, and how they did on it, and the commentary or feedback they get isn't score-oriented, it's more subjective. Like, you did a good job here, not so much there, etc. Still using the interaction as a persistence mechanism, though. If you looked from a generic perspective to see how they did on the interactions, they wouldn't be as interesting; or at least, that's not the way we pump the data into it.
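A hedged sketch of that pattern, using the standard SCORM 2004 cmi.interactions elements; the interaction ids, the use of the description field for subjective feedback, and the report shape are assumptions about how one might do it, not the project's actual code:

```javascript
// Illustrative only: record each simulation decision as an interaction,
// putting subjective feedback in the description, then read the whole
// collection back to drive a custom in-content report.
var api = window.API_1484_11;

function recordDecision(index, id, learnerResponse, feedback) {
  var prefix = "cmi.interactions." + index + ".";
  api.SetValue(prefix + "id", id);
  api.SetValue(prefix + "type", "fill-in");
  api.SetValue(prefix + "learner_response", learnerResponse);
  api.SetValue(prefix + "description", feedback); // subjective text, not a score
  api.Commit("");
}

function buildReport() {
  var count = parseInt(api.GetValue("cmi.interactions._count"), 10) || 0;
  var rows = [];
  for (var i = 0; i < count; i++) {
    var prefix = "cmi.interactions." + i + ".";
    rows.push({
      id: api.GetValue(prefix + "id"),
      response: api.GetValue(prefix + "learner_response"),
      feedback: api.GetValue(prefix + "description")
    });
  }
  return rows; // rendered by the content's own report view
}
```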
Do you have more on your list?
Content update: during the development and test cycles, let's say I've got 55 resources and I have to make a change to resource 17, and I want to just update that and have it automatically show up; that didn't seem very trivial. In the LMSs (and I don't know if those I used are typical or atypical), it is like pulling teeth to do that type of thing. Getting things into the LMS is very cumbersome, and once they're in there you're in trouble. You don't have an easy way to say, get rid of the old one and put in a new one. And if you are deployed, you don't want to be changing things under the covers, that's fine; but in a staging environment (I worked with staging environments in both cases), I talked to representatives of the companies saying, "Listen, all I want to do is, I've got this new thing, I want to get rid of the old thing, put my new thing in, and have people use that for testing." They offered no solution. I don't know if they didn't support it, or if it was a lot of work on their part, or if they just didn't expose it to the outside world, at least through the interfaces I've had, the administrative interfaces. I'm sure you have more experience with LMSs than I do; it's horrible. I just want to do one little update, and be able to test with that new thing, or test a bug, or whatever it might be. I went through your suggestions, so in the Tin Can thing, where you said go look at the ideas, there's at least one person who mentioned the same thing: how it's a pain to get some new content in there for whatever reason. It's evolving, it's in test, or you're fixing a problem.
It's not exclusively this content update thing. It's the fact that you're running inside this black box and you can't get into the pieces. With all the LMSs I have looked at, you're building up content and throwing it in there as a black box. The way they invoke your content, given that client-side JavaScript is the only thing that works, and the way the JavaScript hierarchy works, it's got to run under the context of some master thing. So the LMSs all had this master thing that would fire up, in my case, the single SCO, and you could not exist outside of that. So if that master thing goes away, you're in trouble. If the user accidentally subverts the master hierarchy, same thing. So management of state within the LMS, given that it runs under that master JavaScript thing, is a real pain in the butt. It ends up with an ugly user interface too. All these things I look at and think, here it is in 2011; anyone with half a brain, with a sense of aesthetics, would question whether this is understandable for the average user. They fire this up to get into my SCO, you say, "Here's the thing I want to run." Great. Now up pops this master LMS window and you see the thing running inside it, and it's got this stuff going on around the outside that uses frames and whatnot, I think so they can maintain that JavaScript hierarchy. And it's ugly, all this peripheral stuff that you don't want to see, and you're not looking at the content itself, just at these artifacts of the environment, because, I believe, of this JavaScript hierarchy requirement.
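For context, the dependency being described comes from the common way SCORM 2004 content locates the runtime API object by walking up the window hierarchy the LMS creates. A minimal sketch of that discovery routine (a widely used pattern, not code from this project):

```javascript
// Illustrative only: the usual SCORM 2004 API discovery walk. The SCO can
// only find the API object through its parent/opener chain, which is why
// the content has to live inside the LMS's "master" frameset or window.
function findAPI(win) {
  var attempts = 0;
  while (!win.API_1484_11 && win.parent && win.parent !== win && attempts < 500) {
    win = win.parent;
    attempts++;
  }
  return win.API_1484_11 || null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
if (!api) {
  // If the master window is gone (closed, navigated away, subverted),
  // the SCO has no way to persist anything: the failure mode described above.
  console.warn("SCORM API not found; nothing can be tracked or persisted.");
}
```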
That JavaScript API being replaced by a web service API should solve these problems. And to give you an idea of the extent to which some people want to not have dependencies on the LMS, one of the ideas out there is that you should be able to launch the content, run the content, and when you are done, report the tracking from that content back to the LMS, and not have to have communicated with the LMS before that point about what you're doing.
So I can just call the web service from JavaScript, okay, that's fine. And if I have a server-side thing, I can call it there too. I happen to use PHP in the platform that we built, but making it easier for people to do server-side processing seems to be desirable. There was one reference I saw; I think the person wanted server-side JavaScript, or whatever the scripting language is. Or, if you could plug and play with the scripting language, that's fine, but the concept is: hey, I'm on the server, I'm preparing stuff to go back to the user, the client is an HTML client, and I want to manipulate that HTML based upon whatever state I have determined the user is in, or my environment, whatever it might be. Rather than the model now, where everything is JavaScript related; you have to assess things on the client side and try to get it to dynamically react as best you can. Which is a bit more restrictive and not nearly as efficient in most cases.
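As a purely hypothetical sketch of that first point, calling a tracking web service directly from client-side JavaScript: the endpoint, payload fields, and credentials below are invented, since no such web service API exists in SCORM 2004.

```javascript
// Hypothetical only: report a result to an LMS tracking web service over
// HTTP. The URL, payload shape, and auth header are invented for
// illustration; nothing like this is defined in SCORM 2004.
function reportResult(result) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://lms.example.com/tracking/results"); // hypothetical endpoint
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.setRequestHeader("Authorization", "Basic " + btoa("user:secret")); // placeholder credentials
  xhr.send(JSON.stringify(result));
}

reportResult({
  learner: "example-learner-id",
  activity: "pm-simulation-sco-1",
  completion: "completed",
  scheduleSlipDays: 8
});
```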
If you are able to run your content from a content-server as opposed to on the LMS server, either — or even, which could be the same server as the LMS, but essentially if you are no longer saying the content has to be imported into the LMS, if —
Not necessarily, you could do that. That would be fine. But what I was thinking, in as restrictive a manner as possible: just like I can put an HTML page inside the LMS, and somewhere along the way someone is able to pump in a URL to get that thing out, why couldn't that be a PHP page? A PHP page on the server; and there are other scripting environments, obviously, that would work as well, but PHP is just a generic thing. And if I pump a PHP page into the LMS, and there's some processing there, all I know is that this is being processed within the LMS server, it's not being processed on the client's side. And that would be a rich feature, and more efficient, as I said. Because you can obviously scale the server side a lot easier than you can scale the client side. Right now, if you need to do something relatively complex, then for some clients it might be perfectly fine, because they have a sufficiently beefy machine. But if somebody has a lighter-weight machine, it's going to be "dog-slow."
Putting on my LMS developer hat for a moment, I don't see LMSs ever really going for that. Because of what would have to happen: I guess you could just say PHP, but really there would probably be a whole set of languages people would want to argue for by the time we were done, and you'd be saying that now, in order to be a SCORM-certified LMS, you have to enable running X, Y, and Z scripts on your servers. Which means the vendors installing the LMS have to go to the customers and tell them that in order for this LMS to be SCORM-certified, you have to enable X, Y, and Z scripting languages on your servers, and then their security people are going to hate that. That's why I keep saying, with this sort of thing, that if the content can now have its own requirements, and the person creating the content has to separately have that discussion with whatever environment it's running in, we lose a lot of interoperability.
I understand your concern, but to me, at least the way I use it, the LMS is my web server. And I want more of the standard web server capabilities out there; that's like the persistence thing, as I said, with the back-end database: being able to get to that easily. It seems pretty straightforward to me to provide some sort of sandboxed, server-side script environment, whether it's PHP or a couple of languages, whatever, even server-side JavaScript, which is something that runs on the server, so that a content provider can build something very simplistic: I just want to have a zip file, I hand it off to people, they put it in their environment, and the thing just runs. That seems desirable to me. And you can close it off if there are any concerns: security concerns, or performance concerns, or whatever. Put a time limit on the execution of these things, and say, "Okay, if the page is being executed and it takes more than ten seconds, kill the thing." You can sandbox it pretty easily from the LMS, but you're giving the content designer the option of saying, "Hey, I've got a server, a generic thing I don't know all the gory details of, but I do know it's a centralized server, and I've got the client thing, and I can put my processing wherever it's best suited to be."
Which — is this a case where the LMS or LMS installation would need to have that scripting enabled to be a SCORM-certified LMS, or is it kind of a negotiation where the —
It's something that's configurable, that the LMS installation could decide: we will not allow any SCORM content that requires PHP. Or, just like they do now with the edition: I was forced, being a content provider, to work within the constraints of the LMSs that my customer was using. They only supported the 2nd edition, so I worked within those constraints. If there was a feature like server-side scripting, you could theoretically build a stand-alone content object that could be inserted into the LMS if the target environment you're running in allowed server-side scripting.
How would that be different from the current situation, because isn’t that really kind of a constraint of the LMSs right now, or would it just be kind of an advisory thing where we’d say, “Hey, by the way, in the standard documents, the LMSs should consider if they might want to allow server-side scripting.”
I may be misinterpreting, but I understand the environment that I have to work in right now: I put together a zip file and that is completely self-contained. That gets inserted naively by my customer into their LMS; there's no magic going on. There's nothing that the IT department for my customer has to worry about. They have to maintain the LMS, but they take the zip file, put it in there, and the LMS takes care of everything. So it's not like the customer is managing a web server that my HTML stuff is running on, per se; as long as their LMS is installed properly, obviously it can act as a web server and people can get to that kind of content. So assuming they had an LMS that was properly installed and the server-side script capability was available, I could give them this black-box piece of content and they could throw it in there, and the IT group doesn't have to do anything else, except say, "Oh, it's an LMS feature. Just like anything else. PHP, yeah, that works fine. An HTML reference, that doesn't get processed by the server, it just gets returned, and everything's fine."
So, it seems like they could just turn on, if they just had PHP turned on for the directories that your LMS content was in, that —
That's correct. But that would require, at least in my case, the IT group from my customer to do that. It wasn't publicized, certainly not as a SCORM feature.
So — they would not rely on any PHP interpreter as such on the web —
It would be in the LMS. An LMS knows how to do this: if you pump a URL into it that references an HTML resource, it's able to get that back to the client. Well, same thing, except with PHP there's a little bit of stuff that happens on the server first.
So, PHP is explicitly spelled out as something that potentially would function in SCORM, with some configuration option where the LMS can declare whether they actually allow you to run it or not.
That's right. And there's obviously a whole bunch of server-side scripting environments that fundamentally work the same way. I worked at the company that built Cold Fusion, and we had the CFM extension we plugged into the web server, and if someone served up a CFM page, some stuff happened on the server side before the resulting page got sent back to the client. ASP works the same way; ASP, PHP, all these things fundamentally work the same way. Same basic thing, but if that's built into the LMS, for all practical purposes it's just a switch that's turned on: if I see one of these things, does it get processed the right way, by the appropriate engine? So it's not like the LMS vendor needs to go manufacture their own PHP implementation, or JSP or ASP, etc.; they just have to say, "Hey, if you install it, install the appropriate libraries or point me to an instance that gets these pages processed." Everything works magically for the content developer, and that gives you more options. You've got that whole rich server-side content environment.
And like I said, there was one I saw in your listings of ideas, where somebody else is basically asking the same thing. I think they were talking about server-side JavaScript. That would be another perspective.
I just have one more thing, and that's API compatibility, which was mentioned in your listing, too. With the existing API, there's something you want to accomplish, like in my case the trouble I've had with simple shutdown. I want to be able to suspend my content object, or I want to exit it: suspend being to maintain wherever I currently am in my content object, so somebody comes back in and they pick up where they left off; or exit, where I'm saying, I'm just done with this completely, get rid of all my saved data, and if the user comes in again they start from the beginning. I can't get that to work consistently from one LMS to another. They all have slightly different interpretations of, well, you can set up these three variables but not those two, and vice versa. Sometimes it works, sometimes it doesn't. Some sets of API calls, or I should say value settings, work fine, and then in some cases they don't. I don't know if there's anything you can do about that, to say to an LMS, "Thou shalt support this in a particular way." But there's definitely a compatibility problem with how those things are interpreted.
That probably works better in 4th edition.
I’ve heard that 4th edition fixed it too.
What we can do with any new API is to make sure we really look at the ways you shut down and whether they are clear.
In any operation, with that crude value setting (set the value of this, set the value of that), you've got to wrap things appropriately so it's just one thing. If you have a particular action you want to occur, don't require four steps to pull it off. Make a single call and it's done.
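A hedged sketch of that kind of wrapping, bundling the SCORM 2004 value settings commonly involved in suspend versus exit into single calls; as noted above, exact LMS interpretations of these settings vary, so this is one plausible arrangement rather than a guaranteed-portable recipe:

```javascript
// Illustrative only: bundle the multi-step suspend and exit sequences into
// single calls so content code never has to remember the individual
// value settings (whose interpretation varies between LMSs).
var api = window.API_1484_11;

function suspendSession() {
  // Keep saved data so the learner resumes where they left off.
  api.SetValue("cmi.exit", "suspend");
  api.SetValue("adl.nav.request", "suspendAll");
  api.Commit("");
  api.Terminate("");
}

function exitSession() {
  // Finish for good; saved data is not expected to be restored.
  api.SetValue("cmi.exit", "normal");
  api.SetValue("adl.nav.request", "exitAll");
  api.Commit("");
  api.Terminate("");
}
```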
Looking forward, is there anything you haven’t done that you would like to do, that you think that a standard for learning communication could make easier or get in your way if it’s not done well?
I don't need a whole lot more than what HTML provides me. It's just that what the LMS provides you is a deployment environment, and so I've mentioned the things related to that, particularly the server-side scripting thing, and of course just the interaction with the LMS. As long as that's better than what it is now, and it sounds like you are saying a more web-service-oriented approach versus the JavaScript one, that would give me much richer tools to work with, and I think it would be easier to produce content that was completely stand-alone. However you do it, you could just give it to somebody, they could just drop the thing in, and it would just work.