Ses. 2-1H, 2-3H, and 2-5H Simulation video


Description: This video covers aspects of the Lean healthcare clinic simulation sessions. It describes the student experience and covers some of the lessons they learned. For educators, this segment also covers aspects of the learning simulation.

Instructor: Hugh McManus, Earll Murman, and Bo Madsen

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

HUGH MCMANUS: This video is intended to describe some of the key features of the Lean Enterprise Value Lean Healthcare Clinic Simulation. A video can't substitute for the experience of attending an intense learning simulation such as this one, and we won't attempt to do that. Instead, what we're going to do is describe what the students experience during the day, cover at least some of the lessons of what they learn, and for those of you that are interested in simulation-based education, describe some of the features of the learning simulation.

The simulation itself and this video are broken into three pieces, three modules. The first is the learning of the simulation itself, accompanied by some background information on how Lean applies to health care and some simple tools for getting local processes under control.

The second is a structured improvement using Value Stream Mapping, Capacity Calculation, and a Planned Improvement Method to improve the local processes at each clinic to make them at least functional.

And the third attacks problems that cannot be handled at the individual clinics. In this exercise, the entire group of students participates in a simulated Rapid Process Improvement Workshop. They break down into functional teams and do a complete reengineering of the clinic process across the whole room, with cooperation between clinics, in order to solve some problems that are unsolvable at the local level and also to create an overall very high-performing clinic environment.

Basically, what you are in our simulated world is a group of outpatient clinics. We have an old process that is not very good. There's high variation in workload in the processes. The overall performance is pretty poor.

We're going to actually let you experience that. You're going to play with it. Maybe for a little while you'll be learning how to do it. But it's not that complicated. You should be able to learn your own jobs pretty quickly. But then you will find that it still doesn't work very well, because the system itself is poorly set up.

At that point, the simulation, once you've learned it, becomes a practice field for applying Lean techniques. What can we do to make this process better?

This is what we're going to do. We're going to have basically three segments. The segments will have one or two rounds of play each. We're going to first learn the simulation. That's what we're doing right now.

We're going to apply Lean locally at each of your tables. You will strategize and apply some Lean interventions to make your individual clinic work better.

And then we're going to do another round where we start thinking about, well, there's only so much one can do locally. What are some of the things that we can do by coordinating across our whole health care enterprise by expanding our vision of the value stream beyond just the one clinic so that the whole room can work together to make a more efficient system that, for example, shares resources or redirects patients to places that are better able to take care of them.

What we're gonna do in this module is first of all, just learn how to play the simulation because the idea that we would like you to get is to think about this as a process that mirrors-- perhaps does not exactly simulate-- but mirrors the kind of process that you'd see in a clinic or ER where a patient is flowing through the system. And we're trying to get that patient value stream to be lean.

So we're gonna learn the simulation. We're going to experience the frustrations of a not very good process and apply a couple simple Lean tools to try to get it to at least perform at a minimum level. Key thing right now, though, is to kind of wrap your brain around the Sim so that we can stop thinking about it, because later in the day, what we would like to do is think about this as a process and apply Lean tools and analyses to it so that the game, as it were, sort of fades into the background, and we worry more about the realities of the tools and using them to improve the process that's embodied in the simulation.

The rules are very straightforward. And in fact, they are pretty much in front of you. If everyone has a mat in front of them that says Process, and if you just read that and execute those actions faithfully, you'll be fine.

It's important that you do that because that's the basis for future improvements. If you don't understand the process, it's going to be difficult to improve it.

So Round 1. The clock shows the time; you will now be referring to it rather than making those times up. You have 12 minutes to get things done. If you are not done in 12 minutes, you can no longer process, you can no longer register any new patients. But you have 3 minutes of overtime to finish the existing patients. And I will reset the clock at that time, and then we'll count up to 3 minutes.

So ready and go.

This montage will illustrate some of the features of the simulation.

AUDIENCE: Oh, it's a long one.

HUGH MCMANUS: The simulation involves the processing of little LEGO people, LEGO patients, through a simulated clinic with various functions such as registration, triage, examination, diagnosis, and discharge. The simulation has a fair amount of paperwork. A lot of it is a little frustrating, a little bit redundant. Some of it is unnecessarily complicated, although the paperwork does have important information that affects the flow of the patients and their treatment.

Timers are used to represent the capacity of the system so it doesn't turn into a race to get things done, although physical dexterity is sometimes required. There are a lot of manual steps. But the basic processes are represented by the timers.

And those are ultimately the constraints on the system, which allows the students to focus on the process itself rather than the details.

The variation in the system is represented in a couple of different ways. One is by the patients themselves. The little LEGO patient has different colors of head, body, and legs that affect the diagnosis. This is the "each patient is different" phenomenon.

The actual process variation is represented by dice that create an unpredictable source of variation at each station. And these can affect both the time it takes to process the individual patient and, in some cases, in the case of diagnosis, the dice can indicate whether a test is failed or passed, or whether the machine fails to perform and the test has to be done over.
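The effect of dice-driven variation and rework on cycle time can be sketched in a few lines of Python. The die values, times, and rework rule below are invented for illustration; they are not the simulation's actual rules.

```python
import random

# Illustrative sketch only: die values, times, and the rework rule
# here are invented, not the simulation's actual rules.
def process_patient(base_time, rng):
    """Time to process one patient at a station with a variation die."""
    total = 0.0
    while True:
        roll = rng.randint(1, 6)           # the station's variation die
        total += base_time + (roll - 3.5)  # roll shifts time around the base
        if roll != 1:                      # a 1 means the test failed,
            return total                   # so the step must be repeated

def average_cycle_time(base_time, n_patients, seed=0):
    """Monte Carlo estimate of the effective cycle time per patient."""
    rng = random.Random(seed)
    return sum(process_patient(base_time, rng)
               for _ in range(n_patients)) / n_patients

# Rework pushes the effective cycle time above the nominal base time.
print(average_cycle_time(10, 20000))
```

Even though the die's time adjustment averages out to zero, the occasional failed test forces repeat work, so the effective cycle time lands above the nominal 10 minutes.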

AUDIENCE: There's a patient number at the top--

HUGH MCMANUS: The patients sometimes have to leave the table and go to a little LEGO hospital.

AUDIENCE: I'll be back.

HUGH MCMANUS: So there are some externalities to the system. Sometimes the clinic cannot handle a given patient. There are other times when patients don't have to go to the hospital, but nevertheless due to lack of either capacity or the correct equipment, they can't be treated at the clinic, and therefore, the clinic effectively fails to treat the patient. These are things that are eventually dealt with in the last round of the simulation.

As the first round or two go on, the students do learn to master their own processes. But it still remains a little bit confusing; the process is not a very good one. Soon, though, the students get past the simulation rules and start thinking about what is wrong with this process.

What's the overall flow? And what is the purpose of the paperwork? And what are the sources of variation? And how can we make this process better?

OK. Regular time is done. We are starting overtime, 3 minutes. No new patients may be registered.

The simulation uses overtime basically to clear patients off the table so that it's not a continuously running simulation.

AUDIENCE: It failed test. Yellow timer.

AUDIENCE: This is like the blindfold where everybody [INAUDIBLE].

HUGH MCMANUS: Basically, the clinic closes down for the night. This allows a completion of the metrics for each day--

OK. So if there's any patients still out on the floor, they're going to be recorded as untreated.

--and also makes it simpler to execute the simulation. You don't have patients that are mysteriously trapped in the clinic overnight.

Unfinished patients and any unfinished paperwork go to that person to just put away. That would be good. We're going to clear the table.

And what we'd like you to do is put patients and errors. That's untreated patients, not-- hospital's OK. But untreated patients or patients that were treated incorrectly.

We can use visual control. Anybody who's been in a hospital has seen these kind of scheduling boards.

Students now use visual controls and small intuitive improvements and standardization in their process to make it perform at least a little bit better, to make it perform not quite so chaotically.

All righty. Round 2. Go.

And try the process again. Typically, improvements are noticeable but small. And at the end of this process, students are encouraged to share their various ideas with the larger group.

Let's total up our statistics, put them up.

So that they can take some ownership of it. So that the larger group can see the variety of different improvement ideas that people can come up with just sort of creatively in the first improvement attempt.

AUDIENCE: Pretend it was width to width.

HUGH MCMANUS: OK.

AUDIENCE: Yeah.

AUDIENCE: Yeah. We did the same thing. And we actually used sticky labels.

AUDIENCE: We did the similar kind of thing.

AUDIENCE: MV, lab, triage.

HUGH MCMANUS: Yeah. So great.

In the second module of the simulation experience, the students receive some lecture material and then embark on a major exercise. The process is analyzed using value stream mapping and some gemba visits, where people look at how the process works in detail on the table. They also do some analysis, using capacity and other calculations to try to understand the features of the simulation quantitatively.

They then go through a structured improvement process where ideas are brainstormed, costs are figured out, and an improvement approach is agreed upon.
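The capacity calculation behind this step can be sketched with a small, hypothetical example: compare each station's effective cycle time (its nominal time inflated by rework) against the takt time implied by patient demand; any station whose cycle time exceeds takt is a bottleneck. All station names and numbers below are invented for illustration, not taken from the simulation.

```python
# Hypothetical numbers for illustration only; the real simulation
# uses its own station times and demand.
def takt_time(available_minutes, patient_demand):
    """Takt time: available work time divided by required output."""
    return available_minutes / patient_demand

def effective_cycle_time(nominal_minutes, rework_fraction):
    """Cycle time inflated by the fraction of work that must be redone."""
    return nominal_minutes / (1.0 - rework_fraction)

def bottlenecks(stations, available_minutes, patient_demand):
    """Return stations whose effective cycle time exceeds takt."""
    takt = takt_time(available_minutes, patient_demand)
    return [name for name, (cycle, rework) in stations.items()
            if effective_cycle_time(cycle, rework) > takt]

stations = {              # name: (nominal minutes/patient, rework fraction)
    "registration": (0.8, 0.00),
    "triage":       (1.0, 0.00),
    "exam":         (1.2, 0.00),
    "diagnosis":    (1.0, 0.25),   # 1 in 4 tests fails and is redone
    "discharge":    (0.9, 0.00),
}

# A 12-minute round with a target of 10 patients gives a takt of
# 1.2 minutes per patient; diagnosis exceeds it once rework is counted.
print(bottlenecks(stations, 12, 10))
```

This is why a station that looks adequate on paper can still be the constraint: rework silently inflates its true cycle time past takt.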

OK. It is Round 3. Let's go.

These improvements are done at the single table level, the single clinic level.

AUDIENCE: So Tom, the next patient--

AUDIENCE: Registration. We have a new patient.

AUDIENCE: Can you take that patient to registration, Tom?

HUGH MCMANUS: No interaction between the clinics is done at this time. So the attempt is to locally optimize the process.

With the new process, the tables are quieter, calmer. The work seems to be proceeding in a little bit more of an organized way.

AUDIENCE: Lab, don't forget your end time. You're gonna put your end time--

HUGH MCMANUS: There's even time for the students to improve their process and organize their workspace while the work is going on.

AUDIENCE: Who's doing discharge for me? Nobody? Clogged on discharge.

HUGH MCMANUS: The result is almost always better performance.

AUDIENCE: This is so fun.

HUGH MCMANUS: All right. That's it.

There is significant variation by table, though; not every clinic shows improvement in their process based on the ideas that they implemented.

Successfully treated? So six here, three there. Let's go, guys.

[SIDE CONVERSATIONS]

So we had the problem at the doctor getting worse. And that was an external condition, right? You just got more gray patients. Right?

You guys kind of had bad luck, more bad rolls on the diagnostics.

You folks, things clicked. Right? The process change worked. Your luck was reasonable. Things clicked.

All right. Everybody ready? Seems very quiet. I think we're ready. Let's go.

To stabilize the new process and collect a little bit more information on its performance, the simulation is run again using the new process. Typically, performance is as good as or better than the previous round. However, the performance still varies by table. And the students observe that there is high variation, and factors beyond their control, still limiting their performance to a level lower than they would like.

Didn't ultimately have success in this round. They have identified a shifted constraint. These guys are definitely getting better, and same thing. Right? Shift. They've identified the same problem.

You guys, everything was going great. Is there anything-- if I asked you to do 10, what would be the problem?

Oh, OK. So it was a bottleneck at the discharge at the end there. OK.

AUDIENCE: We had a lot of bad luck in our failed tests.

HUGH MCMANUS: In failed tests. OK. Everybody had bad luck. What does that tell you? Yep. It's the new normal. Right.

So we're going to do a mock RPW to make this work. Here is our electronic records. We don't have to do the paperwork anymore. That's good.

Let's set up some cross-functional teams. First of all, let's figure out what our plan is. OK. And there's your team. So go to it.

The challenge in the final round of the simulation is to succeed in only one round of play. There isn't a second round to stabilize the new process. This requires the students to focus on the process.

The new process is designed to do a good job, but it must be executed carefully and successfully in order for the students to have good performance in the first round after implementing a lot of changes. Visual controls and constant communication ensure success.

AUDIENCE: This patient is our team. Black.

AUDIENCE: Yeah.

AUDIENCE: Our team.

[SIDE CONVERSATIONS]

AUDIENCE: This goes to-- to that side. Thank you.

[SIDE CONVERSATIONS]

AUDIENCE: 7 is back.

AUDIENCE: Yes. Thank you. 7 is back. 7 is good. 7 is discharged.

AUDIENCE: 7 discharging.

[SIDE CONVERSATIONS]

AUDIENCE: This one goes to you.

[SIDE CONVERSATIONS]

AUDIENCE: Here we go.

HUGH MCMANUS: Six minutes. Halfway through.

[SIDE CONVERSATIONS]

HUGH MCMANUS: Clock here. We're going overtime. Yeah. The last patient is winding their way through the otherwise empty clinic. Right? It's gone from being a mob scene to being like, it's one person.

The thing that worked really well was the fact that you had this simple system. There was a lot of yelling, but everyone could see what was going on, and everyone was communicating. Like, it's they're finished! They're finished! Hello!

But the information got transferred. I thought that was key.

And the other thing is the diagnostician team did a super job of both figuring out how to optimize that system-- good job, diagnosticians.

Our final discussion highlights what works about the new process and also, in the spirit of continuous improvement, mentions what could be improved if the simulation were to keep going.

AUDIENCE: The context of a [INAUDIBLE] RPIW. This is what would happen in a week. You would get to the end of the week with your new system tested. It's not yet ready for full scale implementation. You're going to have to refine it, get the training ready--

HUGH MCMANUS: It's probably going to be pretty ragged. Yes.

AUDIENCE: But at the end of the week, you would have gotten a whole new system developed and tested to this stage. And that's what you did. So that's what we're doing at the moment.

HUGH MCMANUS: It's about this frantic for a week.

AUDIENCE: Yeah.

GUEST SPEAKER: Yeah. That is why you need to test your implementation. Otherwise you're spending half a year planning the perfect solution, and it does not work because you can't anticipate those things.

So you launch, you look at what doesn't work, and you fix it. You relaunch. And that's how you just run through the cycles.

AUDIENCE: Yeah, [INAUDIBLE] cycles.

GUEST SPEAKER: Forget your health care upbringing with, let's study this for half a year or a full year. Let's spend another half a year designing the perfect solution and then evaluate after another year. That's not a good way of doing it. Do it this way.
