Interview with Nina Pasini Deibler

Key Points:

  • need to track team performance and individual performance separately in team-based learning
  • need to link incorrect simulator input with appropriate remediation
  • need more scoring options than just pass/fail and a single numeric measure
  • instructor needs to be able to grade an assessment; the assessment should sit in a “pending” status while waiting for the grade. Could involve an “Instructor API”
  • need a trigger mechanism to notify instructors of poor or unexpected learner performance
  • informal learning should be supported, but formal learning is still important

(Nina, referring to “Hazmat Hot Zone”): The issue that we were really looking at was that it was an instructor-facilitated session, but it was a team activity, and the team could fail if one person’s knowledge wasn’t up to par. So, how can we account for the tracking of multiple learners in that kind of environment, whether it’s instructor-facilitated or not? How do we get a tracking model for the case where the team itself performs well, their interactions are good, their communications are good, but one person’s failure to recognize a hazardous material causes the entire team to fail, when in fact it was really one person who needed some kind of remedial instruction? Do you hold the rest of the team back because of one person’s knowledge gap, knowledge they should have had going in, or do you allow the rest of the team to move forward and then remediate that person? In that case, the person would need to be remediated with appropriate instruction, not another module in the game, because if they don’t recognize some fundamental concepts, no matter how many times they play the game, they’re still going to fail.
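
As a minimal sketch of the kind of tracking model this implies, the team result and each member’s result could be kept side by side. The record names and fields below (`TeamAttempt`, `LearnerResult`, and the rest) are hypothetical illustrations, not part of any existing specification:

```typescript
// Hypothetical records for tracking a team activity alongside each member.
// All names and fields are illustrative only.

type Outcome = "passed" | "failed" | "pending";

interface LearnerResult {
  learnerId: string;
  outcome: Outcome;
  // Objectives the individual missed, e.g. "recognize-hazardous-material",
  // so remediation can target the actual knowledge gap.
  failedObjectives: string[];
}

interface TeamAttempt {
  teamId: string;
  activityId: string;          // e.g. "hazmat-hot-zone-scenario-3"
  teamOutcome: Outcome;        // how the team performed as a unit
  interactionQuality?: number; // 0..1 rating of interactions/communications
  members: LearnerResult[];    // individual results, tracked separately
}
```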

You were suggesting that whether you allow the team to pass when only one team member fails depends on the individual situation or scenario, right?

You would have to. In the real case where Hazmat Hot Zone has been used, the instructor decides whether the team gets to move forward or has to go through another scenario to prove they can do this kind of thing. If somebody’s basic knowledge fails, then the instructor has to take them out of the class, and they have to take another course. So it’s the instructor that does it. But I think if you had the right algorithm, you could make those decisions based on whatever inputs the team and the individual have made, and that would determine who moved forward, who didn’t, when they did, and so on.
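
Building on the records sketched above, the gating algorithm Nina alludes to might look something like the following. The policy itself, where an individual knowledge gap routes that learner to remediation while a whole-team failure repeats a scenario, is an illustrative assumption:

```typescript
// Uses the hypothetical TeamAttempt / LearnerResult records sketched above.
type Decision = "advance" | "repeatScenario" | "remediate";

function gateProgress(attempt: TeamAttempt): Map<string, Decision> {
  const decisions = new Map<string, Decision>();
  for (const member of attempt.members) {
    if (member.failedObjectives.length > 0) {
      // A fundamental knowledge gap: route this one learner to appropriate
      // instruction outside the game, keyed to the objectives actually missed.
      decisions.set(member.learnerId, "remediate");
    } else if (attempt.teamOutcome === "failed") {
      // The team fell short as a unit: run another scenario together.
      decisions.set(member.learnerId, "repeatScenario");
    } else {
      decisions.set(member.learnerId, "advance");
    }
  }
  return decisions;
}
```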

So I have that case, and then I have the simulation case, from the simulator, because the same thing happens. My background is actually in aviation; I started in this industry doing pilot training. We ran into the same thing. If someone got into the actual simulator and didn’t perform a procedure properly, or flipped the wrong switch in flight, or whatever, there was no way to track that and remediate it. Again, it was incumbent on a human to say, “Wow, you really messed this up, you don’t understand how the fuel system actually works, I’m going to reassign you to a fuel system module.” So the instructor would have to manually go into the learning management system and fail the person on the simulator, or reassign the simulator module, and manually reassign any coursework they needed for remediation. If there were a way to link that up, then when something like that happens, depending again on the severity or the situation, they would be reassigned automatically to whatever instructional material they needed before they could be allowed to progress.
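
A sketch of what that linkage could look like, assuming a hypothetical simulator event shape and a hand-maintained mapping from procedures to remedial modules (none of these names come from a real system):

```typescript
// Hypothetical link between a simulator error and remedial content.
interface SimulatorEvent {
  learnerId: string;
  procedure: string;   // e.g. "fuel-crossfeed"
  error: string;       // e.g. "wrong-switch"
  severity: "minor" | "major" | "critical";
}

// Illustrative mapping from procedure to the module that reteaches it.
const remediationMap: Record<string, string> = {
  "fuel-crossfeed": "fuel-system-module",
  "engine-start": "engine-systems-module",
};

// On a severe error, assign remediation automatically instead of requiring
// an instructor to fail the learner and reassign coursework by hand.
function onSimulatorError(
  event: SimulatorEvent,
  assign: (learnerId: string, moduleId: string) => void
): void {
  if (event.severity === "minor") return; // instructor discretion for minor slips
  const moduleId = remediationMap[event.procedure];
  if (moduleId) assign(event.learnerId, moduleId);
}
```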

Do you think that adding the team-based components, the collaborative components, to the data model is enough to support that sort of scenario at this point, or do you think we need more data in general, and if so, what sort?

I think adding the team-based piece would go a long way. There are a lot of data-model elements that nobody uses anymore, and with the current technology landscape, I think adding a team-based model and a model with multiple scoring types would help, because it’s not just having the team-based score, it’s having the ability to track both the individual’s progress and the progress of the team.

And I’ve been finding lately, I just did this giant content migration, that one thing that would have really helped us was a more robust scoring model in general. So I guess I’m saying we do need more elements that would account for scoring and different types of scoring models, if that makes sense.

Do you recall which data-model element you would have wanted for that in particular?

From a scoring perspective, we do need some model extensions that would enable more scoring options than just a numeric score or pass/fail. Right now we’re stuck with complete, incomplete, or unknown; pass/fail; and a numeric score.
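
As a sketch of the kind of extension being asked for, today’s fixed vocabulary could sit alongside a set of typed scores. The extra score kinds below are illustrative assumptions, not proposed standard values:

```typescript
// Roughly today's model: one status pair and a single numeric score.
type CompletionStatus = "completed" | "incomplete" | "unknown";
type SuccessStatus = "passed" | "failed" | "unknown";

// Hypothetical extension: several typed scores can coexist on one result.
type Score =
  | { kind: "scaled"; value: number }                    // normalized, e.g. -1..1
  | { kind: "raw"; value: number; min: number; max: number }
  | { kind: "rubric"; criterion: string; level: string } // e.g. "communication": "proficient"
  | { kind: "letter"; value: string };                   // e.g. "B+"

interface Result {
  completion: CompletionStatus;
  success: SuccessStatus;
  scores: Score[];      // the individual's scores...
  teamScores?: Score[]; // ...and the team's, tracked separately
}
```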

One other piece of feedback I’ve seen: people would like a model where scoring doesn’t have to happen instantaneously, where there’s a way to track the response to an essay question without giving the learner a score, and later on the instructor can go in and score it.

That’s really important; our Defense Ammunition Center client is having that exact situation right now. After learners complete a series of activities, we’re going to have to basically mark them as incomplete. In that system, while they’re going through the instruction, they’re going to create a plan for an explosive storage site, and that plan has to be looked at by a human. So we’re going to have to have their content sit there, marked incomplete, until the human looks at their plan and goes back in and passes or fails them.

In the Army this is a problem, because that incomplete score will get passed to the Army Training Requirements and Resources System (ATRRS). So they can’t just leave that status in the Army learning management system and go on to the next piece. We want it all to be one course, but we’re going to have to leave it as incomplete in their record, and then a human administrator is going to have to go back into ATRRS and override their grade to mark it complete.

So there’s also a need for someone to be able to see the difference between a course that’s just not completed, and a course that’s complete pending approval?

Exactly.
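
One way to sketch that distinction, assuming hypothetical state names, is a separate “pending approval” state between incomplete and complete:

```typescript
// Hypothetical course-record states. "pendingApproval" is distinct from
// "incomplete": the learner has finished everything on their side and is
// only waiting on a human grader.
type CourseState =
  | { state: "notStarted" }
  | { state: "incomplete" }                          // learner still has work to do
  | { state: "pendingApproval"; submittedAt: Date }  // e.g. storage-site plan awaiting review
  | { state: "complete"; gradedBy?: string };        // gradedBy set when a human passed it

// A downstream reporting system could then distinguish the two cases
// instead of receiving a bare "incomplete" that a human must later override.
function reportableStatus(record: CourseState): string {
  return record.state === "pendingApproval"
    ? "complete (pending instructor approval)"
    : record.state;
}
```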

Is there anything else you haven’t talked about yet, that you would like to?

I have been wanting this for a very long time, because the old training management system I had years and years ago, at what was then McDonnell Douglas, now Boeing, did this. It would trip a flag to the instructor after something like that happened. In a formal schoolhouse setting, there are instructors assigned to groups of students, and even though the students are doing web-based training, there is still a lead instructor of sorts who oversees what they’re doing. It would be great to have a way to flag a human after someone’s performance has been poor for a certain amount of time. So if you’re doing training and you have, let’s say, 10 courses to complete, and you pass the first two, and then one you barely pass, the next one you barely pass, the next one you barely pass, something’s wrong; you’re passing, but you’re barely passing. It would be great to have some kind of automated trigger to notify a human about these problems, because the human instructors don’t go into the system to check on you. As long as you’re passing, you’re passing.

But there are needs. One case would be with our Defense Ammunition Center customer: there are times when they want to know that something is going on with a student, and unless they physically go into the system and check every record for every student they have, they don’t know. But if there were a way to set up flags, and this would be more at a curriculum level, then after so many scores in a given area, the system could send a notice to a human being and let them know that this student is struggling.
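
A sketch of such a trigger, with the thresholds (within five points of the cutoff, three times in a row) chosen purely for illustration:

```typescript
// Hypothetical trigger: flag a human when a learner keeps barely passing.
interface CourseScore {
  courseId: string;
  score: number;
  passingScore: number;
}

const BARELY_MARGIN = 5; // "barely passing": within 5 points of the cutoff (assumed)
const STREAK_LIMIT = 3;  // how many in a row before flagging (assumed)

function shouldFlagInstructor(history: CourseScore[]): boolean {
  let streak = 0;
  for (const s of history) {
    const barely =
      s.score >= s.passingScore && s.score - s.passingScore <= BARELY_MARGIN;
    streak = barely ? streak + 1 : 0;
    if (streak >= STREAK_LIMIT) return true; // passing, but something's wrong
  }
  return false;
}
```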

I think keeping the human in the loop, even in this distributed-learning world, is really important in many domains. For reporting, most reports are pulled monthly; nobody pulls reports daily. You might get some organizations that pull weekly. So if somebody gets into one of those situations that requires manual intervention, nobody might look at their records for days or weeks, or even longer.

I guess what I’m thinking of is a much better integrated system, a way to integrate things more. Say you talk to me and realize I do know what I’m talking about; I accidentally missed the question that caused me to fail. When you get that flag notice, you just want to hit a quick button where you assign me to new content or, what we always called a “certified pass,” you certify that you’re going to pass me. This is a different kind of passing: instead of just a passing score, it shows that it was a manual pass. So you certify-pass me and I move on. And for you as the instructor, it’s all in one little encapsulated communication protocol.
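
A sketch of that encapsulated exchange as a single hypothetical API call; the endpoint path and payload fields are assumptions, not an existing standard:

```typescript
// Hypothetical instructor action, sent when the flag notice is acted on.
interface CertifiedPass {
  action: "certifiedPass"; // recorded as a manual pass, not a score
  learnerId: string;
  activityId: string;
  instructorId: string;    // who certified it, for the audit trail
  reason?: string;         // e.g. "missed one question; knowledge verified"
}

// One encapsulated exchange against a hypothetical standard endpoint that
// any LMS could build a UI on top of.
async function certifyPass(lmsBase: string, cert: CertifiedPass): Promise<void> {
  const res = await fetch(`${lmsBase}/instructor/grades`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(cert),
  });
  if (!res.ok) throw new Error(`certified pass rejected: ${res.status}`);
}
```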

I think we’ve tried so much just to take the human out of the loop on this, that we’re shooting ourselves in the foot.

So there could be, essentially, an instructor API, which an LMS could build a UI on top of. Or if the LMS doesn’t provide a good UI, then if the API is standard, then …

You could choose to do your own thing.

So it would be important to have not just the API between the content and the LMS; there should be other APIs.

Yeah, I’m all over the multiple API thing.

How should learning happen today and in the future?

Any way it needs to happen. From an instructional design perspective, I love the whole concept of informal learning, collaborative learning, and so on, but I want to be sure we don’t forget about the formal learning experience, the formally designed learning experiences. Especially in an environment like the DOD, where you’re teaching processes, procedures, equipment, and so on, it’s very important to make sure you still have a robust formal learning model.

But I do think it would be great to find some ways to account for the informal learning that I do on my own. If there were a button that could appear anywhere, then after I go and read something or do some activity somewhere online, I could click it and it would store that to my performance record somewhere, showing that I have done it. It may be that the button only appears in certain contexts, or maybe after I click it, it asks me three questions about the article I just read, and if I get them correct, I get some kind of credit for that.

A big thing we’re seeing a lot of right now is communities of practice, knowledge-sharing. In the whole domain of knowledge management, they talk about knowledge-sharing being a key competency: your willingness to share information and how frequently you share it. So if I’m on the community of practice for whatever topic, say the SCORM instructional design community of practice, and I spend half my day on there answering questions and posting resources, that kind of thing, there should be some reward for that somewhere, somehow. So I guess we need the ability to integrate those kinds of things back into the learning realm. Not only am I learning by being on there and seeing what other people are doing, I’m helping others learn. So we need a way to interface those different systems, so that that type of informal activity could also be tracked.
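
A sketch of how both the “I read this” button and community-of-practice contributions could feed one performance record, using a hypothetical actor/verb/object statement shape (the field names and verb vocabulary are assumptions for illustration):

```typescript
// Hypothetical informal-learning statement: who did what, where.
interface InformalStatement {
  actor: string;                        // learner identifier
  verb: "read" | "answered" | "shared"; // extendable vocabulary
  object: string;                       // article URL, forum thread, resource
  timestamp: Date;
  credit?: { questionsAsked: number; questionsCorrect: number }; // optional quiz check
}

// The reading button and a community-of-practice contribution can emit the
// same kind of statement into the same performance record.
const examples: InformalStatement[] = [
  {
    actor: "learner-123",
    verb: "read",
    object: "https://example.com/some-article",
    timestamp: new Date(),
    credit: { questionsAsked: 3, questionsCorrect: 3 },
  },
  {
    actor: "learner-123",
    verb: "answered",
    object: "scorm-id-community/thread/42",
    timestamp: new Date(),
  },
];
```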