
Resource Scheduling


Over the years, I have written a number of schedule risk analysis programs, the latest being Full Monte for Microsoft Project. From time to time, I have been asked why they never take resource scheduling into account.

There are actually two problems. The more fundamental one is that there is a mismatch between the assumptions made in resource leveling and those made in risk analysis. Most, if not all, resource-scheduling algorithms do their scheduling in some priority order, which generally favors tasks with less total float. This means that they make decisions on tasks which may be well in the future before they make decisions on earlier tasks, and they do this on the assumption that they know when these tasks will be available for scheduling (i.e. when their predecessors are complete). Risk analysis, on the other hand, is all about the fact that we cannot know this with any certainty.
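To make that assumption concrete, here is a toy sketch in Python (the network, durations, and tie-breaking rule are all hypothetical, not taken from any particular scheduling tool). A conventional leveler computes total float from single-point durations and then commits to a priority order for every task up front, including tasks that will not be ready to start for a long time:

```python
TASKS = {  # name: (predecessors, single-point duration) -- hypothetical data
    "A": ((), 3),
    "B": (("A",), 2),
    "C": (("A",), 5),
    "D": (("B", "C"), 2),
}

def total_floats(tasks):
    """Classic CPM forward/backward pass on deterministic durations."""
    order = list(tasks)                        # already in topological order here
    early_finish = {}
    for t in order:                            # forward pass
        preds, dur = tasks[t]
        early_finish[t] = max((early_finish[p] for p in preds), default=0) + dur
    project_end = max(early_finish.values())
    late_start = {}
    for t in reversed(order):                  # backward pass
        succs = [s for s in tasks if t in tasks[s][0]]
        late_finish = min((late_start[s] for s in succs), default=project_end)
        late_start[t] = late_finish - tasks[t][1]
    # total float = late start minus early start
    return {t: late_start[t] - (early_finish[t] - tasks[t][1]) for t in tasks}

floats = total_floats(TASKS)
priority = sorted(TASKS, key=lambda t: floats[t])  # least total float first
print(priority)  # the leveler's order, fixed before any task has actually run
```

The point is that this priority list is baked in before execution begins, which is exactly the kind of foreknowledge a risk analysis says we cannot have.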

(Risk+ used to take resources into account, but this was more or less by accident, since it used the Project engine to do its analysis. For example, Risk+ would sample all the durations and pass them to Project for processing. Project would then simulate resource-leveling decisions in the future based upon these sampled durations, which, in reality, could not be known at the time… which, frankly, makes no sense.)

For doing things like taking a shower, planning seems superfluous. (Image courtesy JaseMan @Flickr)

The second problem is more practical. The uncertainty over when tasks become available for scheduling means that the order of execution can vary considerably between iterations (nonsensically so, given the first problem). As a result, the dates often have multi-modal distributions, which are very hard to interpret meaningfully.

I have been contemplating the first of these problems for a while, and have come to the conclusion that the only sensible way to deal with it would be to simulate the decisions that a project manager would make, based only upon the information he would have at the time. The simplest approach would be similar to a job-shop scheduling algorithm, in which the next job for a particular resource would be selected only from those jobs already available, as sketched below. A more sophisticated approach, which might be applied to one or two very special resources (e.g. drilling rigs, which are expensive and need to be booked well in advance), would be to make decisions based upon appropriate percentiles of when the job would be ready to start.
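To illustrate the job-shop idea, here is a minimal sketch in Python (again with a hypothetical network; this is not Full Monte's actual algorithm, just one way the decision rule could work). Within each iteration, a single contended resource picks its next task only from those whose predecessors have already finished, so every scheduling decision uses only information that would genuinely be known at the time:

```python
import random

TASKS = {
    # name: (predecessors, (low, mode, high) duration estimate) -- hypothetical
    "A": ((), (2, 3, 6)),
    "B": (("A",), (1, 2, 4)),
    "C": (("A",), (3, 5, 9)),
    "D": (("B", "C"), (2, 2, 5)),
}

def simulate_once(rng):
    """One iteration: greedily schedule every task on one serial resource."""
    finished = {}              # task -> finish time, learned only as we go
    clock = 0.0
    while len(finished) < len(TASKS):
        # The decision uses only current information: which tasks are ready now?
        ready = [t for t, (preds, _) in TASKS.items()
                 if t not in finished and all(p in finished for p in preds)]
        task = min(ready)      # dispatch rule: simple deterministic tie-break
        low, mode, high = TASKS[task][1]
        clock += rng.triangular(low, high, mode)  # sample the uncertain duration
        finished[task] = clock
    return clock               # project finish for this iteration

rng = random.Random(1)
finishes = sorted(simulate_once(rng) for _ in range(10_000))
print("P50 finish:", round(finishes[5_000], 1))
print("P80 finish:", round(finishes[8_000], 1))
```

The dispatch rule here is deliberately crude; the point is only that it never looks at information (such as sampled future durations) that a real project manager could not have had.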

While all this would perhaps lead to a more realistic simulation, it would not fulfill the normal “optimization” objective of resource scheduling. But if you believe that task durations are subject to uncertainty, does this optimization really make sense?

I would be very interested to hear readers’ views on this. Comments below are very welcome.


If you enjoyed this post, make sure you subscribe to the Camel feed here! You can also follow me on Twitter here.

About Tony Welsh

Tony Welsh is the CEO/president of Barbecana, which is a member of the Arras People Project Management Software Directory. Tony has been in the project management software business for over 30 years and developed four different schedule risk analysis systems, including Full Monte Risk Analysis for Microsoft Project.

2 comments

  1. Are you suggesting that you attempt a Monte Carlo analysis, randomly varying resource availability as well as the forecast durations? I guess the very high number of iterations could be handled via HPC services from the likes of Amazon.

    • Tony Welsh

      Hi, Stephen. No, I am not suggesting that. I was assuming that the resource levels were known but that the circumstances surrounding the scheduling decisions — i.e. which tasks are ready to go — would not be.

      What you suggest might also be possible, but it is at least one step beyond where we are currently. In any case, I do not see why the processing would be prohibitive: our Full Monte has handled networks of over 50,000 tasks and can do a million iterations on smaller networks in a reasonable time on a regular laptop. (It is between 100 and 1,000 times faster than older systems like Risk+.)

      Btw, the recent grounding of Shell’s Arctic drilling rig got me thinking about another issue related to the one at hand. We can simulate the vagaries of the weather, but what we also need to know is the state of knowledge at the time decisions need to be made. In the Shell case the weather forecast was OK, but it turned out to be wrong. Even more of a problem, the rig was probably booked a year in advance and is very expensive to keep idle. So, how much do we know when we make the decision to book? Again, simulating this is another step beyond current capabilities.

