Introducing agility into traditional systems development processes is never easy. First, you have to want to change. Second, you need a vision of what to change to. Finally, you need the tenacity to forge ahead in the face of stiff resistance. The third is usually the most difficult part of the journey. The hardest stretch is the transition, where you show how to bring agility into executing projects while walking the fine line between traditional methodology and incrementally introducing change.
One of the challenges during the transition is the question, “How do I know if the project is on track?” Despite all the conversations around introducing agility, when the rubber hits the road it always comes back to “Are the tasks on the critical path late?” or “What’s the project CPI and SPI?” or a variation thereof. Critical path and earned value concepts are deeply ingrained in our psyche. It is not easy to let go.
I prefer to lead change by piloting. It is not enough to just talk about agility. You need to prove over and over again to the different groups that agility and flexibility can be introduced into any systems development project, not just software development ones. In fact, it can be used on any project, subject to the constraints of the deliverables and the cost of changing those deliverables.
Process and measurement system
I kicked off a systems development project last year and wanted to introduce agility into the process. Scrum was out of the question. Our project team was spread across multiple locations to begin with. Secondly, none of the team members had any exposure to Scrum. And finally, none of the project team members, including myself, were 100% dedicated to the project.
But I had a plan. I knew I was going to use a visual board to begin with. I also knew I was going to limit project work in progress (WIP) within my team. (It is not a Kanban system until you limit WIP.) However, because none of the team members were dedicated to the project, I knew it would probably get the short end of the stick when work was prioritized. I also did not want to wait weeks or months to start getting feedback on deliverables. Given the constraints, how could I bring more agility into this waterfall project? This nagging question coincided with my vacation plans. Needless to say, I traveled with the question as I embarked on my sojourn.
GPS to the rescue
Like any contemporary long-distance road warrior, I never leave home without my trusted GPS. The device faithfully prompted us at every turn. Here is some of the information my GPS provides:
- Current speed
- Distance to destination
- Time to destination (a leading indicator) – it uses simple math: Time = Distance / Speed
- Current location
- Points of interest, etc.
The estimated time of arrival (ETA) was our drumbeat. My speed, the frequency of stops and the length of the stops were all driven by the ETA. If we were stuck in traffic or otherwise delayed, we could call the folks we were meeting and either change the location or adjust the time of the meeting. As I was mulling over my project one evening, it hit me:
“Just like the GPS in my car, I need a simple indicator that will tell the team how late or early we will be based on our current travel speed.”
I decided to explore the options available to me and see which would fit my needs. I wanted to keep it simple. Critical chain can accomplish this. Adopting critical chain, however, requires a significant change in mindset. Moreover, the team and the sponsor need to accept that every task has contingency built in and that, once identified, it won’t be cut. I have had sponsors try to take away the risk contingency on my projects. I decided critical chain would not work here.
I could use velocity. Velocity is the speed at which work is completed, i.e. distance/time. The distance is measured as the story points left to be completed. The speed is the number of story points completed in the last iteration, or an average over the last few iterations. But using velocity requires a reasonably good estimate of the effort. Moreover, my goal was to achieve continuous flow – I did not have sprints. And adopting a Scrum approach would mean leading a significant change, something I did not have time to do.
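To illustrate with hypothetical numbers: if 120 story points remain and the team completed 20 points in the last two-week sprint, the remaining distance divided by the speed gives 120 / 20 = 6 sprints, or roughly 12 weeks to completion. That arithmetic only works if you have reasonably reliable story point estimates and a regular sprint cadence – neither of which I had.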
All I had was the waterfall process, over which I hoped to lay a Kanban board to help change incrementally. But the concept of velocity had merit. I wondered, “What if I could use another variable instead of story point estimates? What if I substituted story points with the count of project tasks?” Something worth exploring!
Adding the Kanban
I scheduled the project using traditional waterfall practices. I decided to track each task through the following stages: Requirements, Design, Build, Test and Deploy. Since the WBS was built in the traditional way, not all tasks flowed evenly through all five stages. But this was a good start, and I planned to introduce the team to the concept of user stories as we progressed.
Task sizes on waterfall projects are typically large. The PMI guidance is that a WBS task should be about 40 hours of effort – but in my experience they are anything but. Moreover, there is significant variation in task size. Some tasks can be as small as two days of effort while others can be over three weeks. This creates significant variation in the completion times of project tasks. I decided to make the impact of this variation visible.
In comparison, task sizes on agile projects are typically small and of similar size. If the team estimates large tasks, it follows that the number of tasks on the project will be low; if the team estimates small tasks, the number of tasks will be large. So if I were measuring the task completion rate, the agile project with small tasks (and hence more of them) would show a higher value than the project with larger tasks (and hence fewer of them).
If I consider the number of project tasks as the distance to be covered (as an alternative to the effort or story point), then the distance to be covered (number of tasks remaining) divided by the current speed (tasks per day actually completed) would give us the time (number of days) it would take to complete all tasks. Adding this time to today’s date would give us the projected end date at current speed of completion. By comparing this projected end date to the planned completion date, we would know if we are going to be early or late and by how many days.
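A hypothetical example of the arithmetic: with 30 tasks remaining and a current completion rate of 0.5 tasks per day, the time to complete them all is 30 / 0.5 = 60 days. If today is June 1, the projected end date is around July 31; if the plan says July 15, we are tracking roughly 16 days late.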
In search of a leading indicator
If I only calculate this measure when tasks close, I have a lagging indicator. So how do we get a leading indicator? As each day passes without tasks completing, the in-progress tasks keep getting older, so the rate of task completion decreases. Dividing the number of tasks remaining by this lower rate increases the time it will take to complete all project tasks, which pushes out the projected completion date. On the other hand, when I complete tasks, the rate of completion increases, and this in turn pulls my projected completion date back in.
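A hypothetical illustration, using the completion rate defined in the list below (WIP tasks divided by their average days open): if 8 tasks have been in progress for an average of 5 days, the rate is 8 / 5 = 1.6 tasks per day. Two more days with nothing closed pushes the average to 7 days, dropping the rate to 8 / 7 ≈ 1.14 tasks per day, and the projected end date slips accordingly – before a single task is formally reported late.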
With each passing day, if I could tell the team where the project stands, I would be able to get them to focus on this project. Here is how I came up with predicting the project end date range based on the number of tasks (a small worked sketch follows the list):
- Each day, record the number of tasks not yet started (backlog), the number of WIP tasks, and the number of completed tasks
- Record the number of WIP tasks in each stage that you track on your visual board or Kanban board (there is a difference between the two: a visual board becomes a Kanban board only when you add work-in-progress limits)
- Calculate the average and standard deviation of the days open for all WIP tasks.
- Rate of completion = count of WIP tasks / average days open for WIP tasks. The longer you take to complete tasks, the higher the denominator and the lower your rate of completion. The only way to artificially prop up the rate of completion, short of actually finishing tasks, is to increase WIP. This is not a good idea – we tried it, with disastrous results. There is only so far you can increase WIP, and then your average days open catches up with you very soon.
- Days to complete all tasks = (Backlog + WIP) / rate of completion. This tells me that at my current speed (as determined by the rate of completion), I will need that many days to complete all remaining tasks.
- Add the days to complete all tasks to today's date to come up with a projected end date.
- Add and subtract the standard deviation to this date to get an estimated range. Remember, we said estimates are probabilistic in nature, but the moment we put them in the Gantt chart they become deterministic. The same applies here: this projected end date is probabilistic. By factoring in the standard deviation we establish a range of completion dates, and by adding and subtracting it once we are in effect saying the range is one standard deviation from the mean. Given the current rate of task completion, that makes us roughly 68% confident we will complete the project within the calculated range (the 68% figure assumes the variation is approximately normally distributed).
- You would perform this calculation every day to get the graph.
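To make the steps concrete, here is a minimal sketch of the daily calculation in Python. It is not the tool I used on the project; the function and parameter names are my own, and the numbers in the usage example are hypothetical.

```python
from datetime import date, timedelta
from statistics import mean, stdev

def project_gps(backlog, wip_days_open, planned_end, today=None):
    """Project an end-date range from task counts, following the steps above.

    backlog       -- number of tasks not yet started
    wip_days_open -- days each in-progress task has been open
    planned_end   -- the baseline completion date, for comparison
    """
    today = today or date.today()
    wip = len(wip_days_open)
    avg_open = mean(wip_days_open)                      # average age of WIP tasks
    spread = stdev(wip_days_open) if wip > 1 else 0.0   # standard deviation of days open
    rate = wip / avg_open                               # tasks completed per day
    days_to_finish = (backlog + wip) / rate             # days needed at the current rate
    projected = today + timedelta(days=round(days_to_finish))
    return {
        "projected_end": projected,
        "range": (projected - timedelta(days=round(spread)),
                  projected + timedelta(days=round(spread))),
        "days_late": (projected - planned_end).days,    # positive means late, negative early
    }

# Hypothetical snapshot: 20 tasks in the backlog, 8 tasks in progress for
# varying numbers of days, and a planned finish date of 15 October.
print(project_gps(
    backlog=20,
    wip_days_open=[3, 5, 7, 9, 10, 12, 4, 6],
    planned_end=date(2013, 10, 15),
    today=date(2013, 9, 1),
))
```

Run once a day, in the same spirit as the GPS readout, this gives you the data points for the prediction line on the graph.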
Putting theory into practice
The adjacent graph is a sample from the project timeline. You are all familiar with the cumulative flow diagram; I have only shown the backlog, the requirements/design queue and the completed queue, plotted on the primary Y-axis. On the secondary Y-axis, I plotted the number of days the project is expected to be behind schedule or early.
The other information I showed on the graph was the count of tasks completed, the total number of tasks, the backlog, the count of WIP tasks, how long those tasks have been open, and the standard deviation. Based on the logic above, I also show the range of dates within which the project is expected to close, along with the originally planned end date for comparison. (The clock on my calculation has not stopped ticking, and hence it shows an average of 117 days open for the WIP with a high standard deviation. Going back to change all the formulas was tedious, so I left it as is. This should not affect the understanding of the concept.)
WIP in July ranged from 6 to 13, as you can see, and it took us an average of about 7 to 14 days to complete those tasks. You can see the effect of not closing tasks in the dashed line: we kept trending down until we closed a bunch of tasks towards the end of July. As we got better at closing tasks we got into a bit of a rhythm in August, as can be seen in the sawtooth pattern. But we went off the rails by the end of August and the early part of September. Other, non-project priorities took precedence at that stage and we were not able to complete project tasks. The daily completion rate dropped from 1.54 on July 31 to 0.13 in September, and you can see its impact on the projected project end date.
Making this visible to everyone on the team three or four times a week increased the sense of urgency. Some observations from this exercise:
- Showing the team how their time on the project impacts the end date was very powerful.
- People are so swamped with work that they cannot help but work to deadlines. We wait until the last possible moment to get things done.
- Some people immediately see value in this method. Others cannot reconcile their project management experience with it and will vehemently oppose it. I found it useful: task ETCs (estimates to complete) are implicit, since they are absorbed into the project task completion rate.
- Task completion delays do not have a linear relationship to project completion; i.e. a one-day delay in task completion can mean more than a one-day delay in project completion, because the aging WIP also drags down the completion rate.
- There appears to be a relationship between the slope of the cumulative flow diagram and the project prediction line. I'll have to do some more analysis to determine the nature and strength of that relationship.
- There is nothing stopping you from increasing or decreasing the total number of tasks on the project. Like anything else, the predicted closure graph is only as good as the most recent information.
- You will probably get a saw-tooth waveform on the predicted completion graph. That is what you want, since it denotes a regular cadence of task closure. It means your team is one step closer to achieving continuous flow.
- The saw-tooth waveform also implies that the release of work into the processing system was controlled by a constraint. We did this on purpose. In a more mature pull process you probably won't need a constraint to release work; work will flow as people become available to do it. This will smooth out the saw-tooth – you may even get to see true flow.
One reason given to me by one of the change resisters was, “Every task has a different estimate to complete, and hence duration will be variable. This method will only work if every task has the same amount of effort and people are dedicated to the project. For a project with varying task estimates and durations, this will never work.”
This reason misses the point entirely. It is precisely because of varying estimates that I think we need a method like this. There is very little we can do about the variation. Systems development is a discovery process in which the team uncovers new information as they progress. The tool simply shows the impact of these variations. As a project team, you then have to decide how to act on the projected information.
We were able to keep the project on track despite the lack of dedicated people. We were also able to get the team to take ownership of their respective tasks with very little follow-up. This, in my opinion, was the most important benefit of using this measurement system. The team was able to adopt the principles outlined in the Agile manifesto using this tool: we collaborated as a team and adapted. Every time the projected end date started to creep up to the planned end date, the team rallied together to help each other out.
What do you think? Does this make sense? Agree? Disagree? Feel free to drop me a note to give your opinion.
Thanks for the article. I like your point on how this exposes and creates awareness of the amount of variation being encountered so that the team may take action based on those findings. I do have a question regarding the GPS approach – if a project were to have many low-effort tasks at the onset, would you encounter a risk of projecting / overselling a finish date that is not feasible? Is the main focus of the tool to expose variation as it is encountered and trigger action accordingly, as opposed to illustrating the expected end date?
“If a project were to have many low effort tasks at the onset would you encounter a risk of projecting / overselling a finish date that is not feasible?” – Yes. If shared with the customer and the broader stakeholder team, it is possible to oversell a non-feasible finish date. This is why, as a project manager, you need to decide how you use this. If you work in a low-trust culture, you would restrict it to yourself and some or all of the immediate project team members. (This assumes you have done the initial change management around the process and how you plan to use the data.) Either way, you would not re-baseline the project on this metric alone. It is just a guide to help drive team behaviour.
“Is the main focus of the tool to expose variation as it is encountered and trigger action accordingly as opposed to illustrating the expected end date?” – (This is my view only.) Business cases are generated based on certain time assumptions, and if there is a delay there may be instances where you should not throw good money after bad. In my experience, change only happens if the pain of change is less than the pain of the status quo. Telling the project team that we could be five days late is a powerful motivator to change behaviour. You see, there are generally no rewards for getting things done early, but there are penalties for being late. Hence, in the interest of all stakeholders, I would choose to use the expected end date. Effort is, again, just an estimate (guesstimate?). If I put my systems thinking hat on, then completing the project to achieve business goals is my overall objective, not reducing variation.
This is one of the best explanations I have seen about measurement and predictability. I am going to go through the rest of your blog now and see what else I can absorb ;) Thanks for sharing!
Thank you, Amy.