This Focused Performance Weblog started life as a "business management blog" containing links and commentary related primarily to organizational effectiveness with a "Theory of Constraints" perspective, but is in the process of evolving towards primary content on interactive and mobile marketing. Think of it as about Focusing marketing messages for enhanced Performance. If you are on an archive page, current postings are found here.
Wednesday, January 28, 2009
Rob Newbold on Agile and Critical Chain -- I'm not going to quote it. You should read the whole piece on Agile and Critical Chain from a guy who was there at the beginning of Critical Chain.
"'The Shingo award establishes that the B1-B PDM is a world-class operation,' said Kim Roe, 76th Aircraft Maintenance Group Bomber Transformation chief. 'This award recognizes the mechanics and managers who have shown that critical chain theories are applicable at the depot maintenance level, repair and overhaul process.'"
Evidence of Critical Chain in the Wild -- One of the heartening things I've discovered during my job search is that of the two firms with which I've had conversations successful enough that they might be in my future, one manages their multi-project organization "the TOC way" and the other is seriously considering doing so. I knew about the first going in; the second was a pleasant surprise sprung on me in an interview.
Making Parkinson's Law Work for You -- More from the book I mentioned yesterday, The 4-Hour Workweek...
If I give you a week to complete the same task, it's six days of making a mountain out of a molehill. If I give you two months, God forbid, it becomes a mental monster. The end product of the shorter deadline is almost inevitably of equal or higher quality due to greater focus.
This presents a very curious phenomenon. There are two synergistic approaches for increasing productivity that are inversions of one another:
1. Limit tasks to the important to shorten work time. (80/20)
2. Shorten work time to limit tasks to the important. (Parkinson's Law)
The best solution is to use both together: Identify the few critical tasks that contribute most to income and schedule them with very short and clear deadlines.
Of course this assumes you are capable of fooling yourself into honoring self-imposed short deadlines - kind of like setting your clocks fast to avoid being late.
Critical Chain and Agile Development -- I've recently had reason to review some of the things I've written about Critical Chain Project Management and Agile Development over the past few years of this blog. Highlights include...
Critical Chain Makes BusinessWeek - Almost -- Yeah, almost BusinessWeek, via its blog, and almost Critical Chain...
"The press covers news, stocks, companies and personalities. But try pitching a cover story on operations. People think it's ... boring. Trouble is, if we want to know where things are going, we have to understand how they work. And when the process is transformative, as it often is in OR, there's nothing boring about it. The winner of the annual Informs Franz Edelman award, by the way, was the Warner Robins Air Logistics Center. They overhauled the maintenance of jumbo C-5 transport aircraft, reducing repair time by 33%. This means that these monsters, which cost taxpayers $2.3 billion each, spend more time in the air and less time in the shop."
Note the link in the quote above...It goes to a summary that says...
"WR-ALC used an O.R. technique called Critical Chain to reduce the number of C-5 aircraft undergoing repair and overhaul in the depot from twelve to seven in just eight months."
Anyhow, good point about the business press, which is apparently like all the other flavors of the press, going for the headlines, numbers, and horseraces on the easy stuff but shying away from the guts of issues.
It's an excellent set of posts, covering topics not unlike those I did in a series a couple years ago from a critical chain perspective, where I talked about making project promises using range estimates and buffered schedules, and keeping those promises via risk analysis using buffer management.
I think we're talking about the same things, just using different tools to accomplish similar objectives in a shared view of the reality of uncertainty inherent in projects.
(By the way, when I look in my Bloglines aggregator, the difference in subscription number that I've got (380) compared to Glen (27) is totally disproportionate to the recent value of our respective content. If you're reading my blog for project management "insight" in Bloglines, or any feed reader, I strongly recommend you go grab Glen's feed as well. Now.)
Implementing the new approach has boosted performance of clinical supply operations:
Lead times were reduced from 8-12 weeks to typically 3 weeks, a reduction of about 70%. This is substantially lower than the industry average of around 6 weeks.
Due-date delivery was over 90% (for five consecutive months).
Without additional resources, about 50 studies were now packaged every month, a throughput increase of 150%.
Besides the quantitative improvements, there are also qualitative benefits felt by project participants. Managers feel in control of operations and now make proactive decisions to stop issues before they become problems. Another benefit is that since the rank and file do not need to multitask (because they get clear task-level priorities), they can focus on delivering the highest quality output. This is extremely important in clinical trials, as any quality problem can lead to the whole clinical trial's results being rejected by the FDA.
(I need to say thanks to Heath Row (I smile every time I say, think, or write that name) over at Fast Company for sending a lot of traffic to the series.)
OK. That was September.
A lot of October's going to be a bit slow for blogging here while I head off for a bit of a walkabout next week, although I might blog my travels over on my more personal Unfocused weblog. In the meantime, allow me to direct your attention to Tony Rizzo's soap box, where he seems to be fleshing out a nascent book on Robust Project Design (tm) with a couple of posts on Robust Project Planning (tm). As I've gushed entirely too much recently, Tony's one of my compadres and influences in the world of TOC, Critical Chain, and Multi-Project Management. I leave you in his capable hands while I refresh and rejuvenate on the other side of the world.
Multi-Project Management and Organizational Effectiveness VIII -- Managing the Present and the Future -- The present...
Today - “What should I be working on?” - Clarity of priority at the task level. If projects are not overloading the system, the question of which task to work on is simplified by the mere reduction of active tasks in play. In-boxes are less loaded. However, due to the vagaries of project plans, and of variation in task performance, occasionally a resource may face the need to choose one task to pick up and finish before addressing another one that is waiting. There are several options for providing such guidance.
Assuming that the individual projects are being actively managed via Critical Path or Critical Chain processes, one consideration is whether any of the waiting tasks are on the critical path or critical chain of the project in question. If so, that task would most likely be the appropriate first choice over a competing “non-critical” task.
If there is a choice of two or more “critical” tasks from different projects, the relative health of the projects in question can be easily assessed by modeling the two orderings: working one task first and then the other, and vice versa. The scenario that leaves the projects’ promises in the best combined condition (or maximizes the benefits associated with both projects) would be preferred. In an environment based on Critical Chain Scheduling and Buffer Management, project buffers provide not only the ability of projects to absorb such decisions, but also make the assessment process straightforward. Critical Path-based projects, usually relying on smaller, if any, schedule reserves, might have to add some additional recovery activities. (Note that this constraint-based approach to multi-project management comes from the same source as Critical Chain Scheduling and Buffer Management, the Theory of Constraints, and the two processes work together by design.)
If all queued tasks are “non-critical,” it’s less of an issue, and while usually a first-come-first-served process will suffice, a consideration of the general health of the project promise, or in the case of a Critical Chain project, buffer consumption, could also provide useful guidance.
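The selection rules described above can be sketched as a small priority function. This is purely my own illustration; the task and buffer-consumption data shapes are invented and not drawn from any CCPM package:

```python
# Hypothetical sketch of the task-selection rules discussed above.
def pick_next_task(waiting_tasks, buffer_consumed_pct):
    """Choose a task using the rules above:
    1. Critical (chain/path) tasks beat non-critical ones.
    2. Among critical tasks, the project with the most-consumed
       buffer (the least healthy promise) goes first.
    3. Among non-critical tasks, fall back to first-come-first-served
       (list order), again breaking ties by project buffer health.
    """
    def priority(task):
        # Higher buffer consumption = less healthy promise = more urgent.
        health = buffer_consumed_pct.get(task["project"], 0.0)
        return (0 if task["critical"] else 1, -health)
    return min(waiting_tasks, key=priority)

tasks = [
    {"name": "A1", "project": "A", "critical": False},
    {"name": "B3", "project": "B", "critical": True},
    {"name": "C2", "project": "C", "critical": True},
]
consumption = {"A": 0.70, "B": 0.20, "C": 0.55}
# Picks the critical task from the project whose buffer is most consumed.
print(pick_next_task(tasks, consumption)["name"])  # → C2
```

Note that `min` is stable, so among equal-priority non-critical tasks the first-queued one wins, which gives the first-come-first-served fallback for free.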
"What should be of primary importance is the impact of those tasks on the project, not whether an individual task was completed 'on time.' One way that Critical Chain Project Management helps with this is to ask for 'how much more time do you need' to complete a task, rather than 'are you going to be done on time.' This lets you have conversations like, 'if you are able to finish a day earlier, we can get started on the subsequent activities and bring in the project sooner.' Or, 'that's fine, there is another set of tasks that we are focussed on completing in the next two weeks to bring the project in early.'"
After a couple months of getting comfortable understanding the operations at my new company (Oh what a life consultants have, not having to worry about the day-to-day while setting up for the future.), I've started having just such a series of conversations pointing out the exact same thing.
Software for Critical Chain Project Management -- In a recent exchange of blog postings on spiral project lifecycles, Brian at Projectified mentions looking into the various options for making MS Project work in a Critical Chain environment. They all use MS Project as an underlying database, and as an entry vehicle for data (tasks, task resources, resource availability, dependencies, duration estimates). In general, they all use a relatively similar multi-step process of using the entered information to 1) resolve resource contentions, 2) propose a recommended critical chain, and 3) size and insert buffers to develop a rational plan and project lead-time for the project in question. They also all provide a method for easily updating the estimates to completion of active tasks during project execution, as well as sets of tools for analyzing project networks both in the planning and execution stages.
Due to the simplicity of the critical chain planning process, and the resulting similarity of the planning and analysis tools and processes provided, the major distinctions tend to fall in the means for assessing and reporting project health and in how the project networks are presented in their Gantt views.
The single-project options include the following...
ProChain -- From ProChain Solutions, this is the grand-daddy of CCPM software, introduced shortly after the publication of Eli Goldratt's introductory book, Critical Chain, in 1997. I particularly like their ProChain Gantt View, which offers a nice parallel view of the baseline plan and the current projections, cleanly showing buffer consumption. The picture they provide is a nice view of the health of the promise at the end of the project buffer, which tends to emphasize the protection of that promise. It also offers a nice clean interface for macros and filters used to analyze why a project is where it's at. (Although I must admit that the cleanliness of an interface is in the eye of the beholder; my appreciation for it may just be rooted in the fact that I cut my CC teeth with ProChain.) ProChain works with two other offerings from ProChain Solutions for multi-project management: the simpler ProChain Pipeline and the more sophisticated web-based ProChain Enterprise.
cc-Pulse -- From Spherical Angle, launched in the fall of 2003, cc-Pulse's view of the world seems to be less about protecting promises and more about projecting the possibilities of finishing at some point in the future. The emphasis seems to be less about worrying about buffers and more about assuring an easily updated model of the current status of completed work and of the expectations for the work that remains until the end of the project. Speed of completion is seen as the way to keeping promises. (This is not to say that it doesn't do a credible job of buffer management. It just does it from a slightly different perspective than ProChain.) Version 1.1 of cc-Pulse has just been launched, introducing a new "Looking Glass" reporting interface, which, while I haven't had a chance to look into it in practice yet, looks very interesting. The only thing holding cc-Pulse back from being a major player in the CCPM space right now is the fact that their multi-project solution -- cc-MPulse -- is still in development. If, however, you want to explore critical chain-based PM on a single project, it's well worth looking into. It might even be acceptable to "bet on the come" with it for small multi-project environments, since the parent company is offering a multi-project scheduling service while their product is in development.
CCPM+ -- Just introduced recently, CCPM+, from Advanced Projects, Inc., is the latest entry into this space. All I know about it is what I see on their website, but it looks like a simple CCPM implementation for MS Project 2000 and later. There is no apparent multi-project application associated with it yet.
In addition to these single-project "plug-ins" for MS Project, Realization Technologies (formerly Speed-to-Market) offers their Project Flow software (formerly Concerto) for purely multi-project applications, and Sciforma, with its PS8 product, offers the only approach that combines single- and multi-project management capabilities in one package that does not rely on MS Project.
Promises and Prescriptions Part 8 - Combination Therapy -- Each of the individual prescriptions in this article is worth considering. Driving out pressures to multi-task, striving for clarity of dependencies between pieces of the project, and continuous refinement of that clarity throughout the life of the project are all common sense aspects of effective project management. Taken separately, they have the ability to provide improvement in project speed and reliability. Combined in a formal methodology, they can form a coherent therapy for troubled organizations, whose old habits and superstitions are at the root of their problems and sometimes conflict with these recommended practices.
One such methodology is the approach known as Critical Chain Project Management (CCPM). The basic premise of a Critical Chain schedule is that task due dates are avoided and uncertainty is managed separately from the rest of the schedule. The safety that used to be wasted protecting task promises from task uncertainty is now aggregated and concentrated where it counts. Some of the safety removed from tasks and iterations becomes a buffer protecting not task promises, but project promises. In addition to providing protection, the consumption and replenishment of this buffer as the project progresses provide feedback control used to monitor the health of promises and to manage accordingly.
Similarly, for multi-project systems, the pressure to multi-task in a CCPM environment is minimized through basic constraint (aka bottleneck) management concepts, which boils down to launching projects no faster than one or two heavily used, limiting resources can deal with them.
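The pacing idea in the paragraph above, launching projects no faster than a limiting "drum" resource can absorb them, can be sketched in a few lines. The function name, data shapes, and capacities here are illustrative assumptions, not part of any CCPM tool:

```python
# Hypothetical sketch: stagger project launches by a single "drum" resource.
def stagger_launches(drum_loads, capacity=1):
    """drum_loads: weeks of drum-resource work each project needs,
    in intended launch order. Returns a launch week for each project
    such that at most `capacity` projects occupy the drum at once
    (capacity=1 means pure sequencing through the constraint).
    """
    launches = []
    free_at = [0] * capacity           # when each drum "slot" frees up
    for load in drum_loads:
        slot = min(range(capacity), key=lambda i: free_at[i])
        start = free_at[slot]          # launch when the drum can take it
        launches.append(start)
        free_at[slot] = start + load
    return launches

# Three projects needing 4, 3, and 5 weeks of the drum resource:
print(stagger_launches([4, 3, 5]))  # → [0, 4, 7]
```

The point of the sketch is the inversion it enforces: projects wait for the constraint rather than all launching at once and forcing the constraint (and everyone downstream) to multi-task.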
Much of what I've presented may sound like a basic, common sense (some might say uncommon sense) view of some of the problems faced in software projects. In the end, projects are all about dependent and interdependent efforts, uncertainty every step of the way, and the attention of resources to their component tasks in pursuit of a goal. If those are the things that are important, those are the things that deserve focused attention and common sense management.
(This is the final weblog-based installment of an article published in the January, 2004 issue of Better Software. Even if you don't get the magazine, I recommend you check out their StickyMinds site (free registration required for basic access). They feature a weekly column that often goes beyond the software development domain and is usually well worth a read.)
Promises and Prescriptions Part 1 - Introduction -- Projects--including software projects--are about promises.
Projects are about turning uncertain work efforts into reasonably certain outcomes. Project sponsors, customers, and stakeholders rely on project promises to carry out and coordinate larger strategies in support of organizational needs. Yet, making and keeping those promises are hindered by common problems: people on projects are reluctant to promise the unknown, plans are disrupted by rework, and schedules are thwarted by contention for resources that are involved with more than one effort.
In his classic business novel, The Goal, Eliyahu Goldratt introduced an approach to managing complex systems known as the Theory of Constraints (TOC). While Goldratt's novel revolves around a manufacturing plant, he offered management prescriptions that can help software projects as well. Later, he refined the application of TOC to the domain of project management with Critical Chain Project Management (CCPM). In this article, I'll offer you prescriptions from CCPM to help deal with common problems encountered in software projects. While there's much more to TOC and CCPM, these prescriptions will help improve project performance even if you don't pursue the full solution.
Within software projects, there are three common complaints. One is rework. Work efforts are designed in a highly intricate, interactive, and interdependent domain. Touching one piece of a work product can impact other pieces, frequently requiring rework of what was thought to have been completed. In addition to missed or misunderstood interdependencies, time pressures that compromise quality, uncontrolled changes in requirements, and miscommunication all contribute unexpected work and tend to extend the timeframe upon which the project promise is based.
Second, software efforts often exist in mixed- and multi-project environments. A limited pool of people and resources are assigned to mixed responsibilities, such as development and maintenance--or are shared across multiple concurrent projects. The resulting conflicts of priorities are a major source of difficulty in promising and delivering projects.
The third issue is a culmination of the first two, plus impacts of project work in an uncertain environment. Projects are about promises--necessary promises that help an organization to manage its future. The fear of promising the unknown results in either irrational promises that stress out those tasked with delivery, or unresolved promises that stress out those who must run the business or sell the product the project is intended to support.
(Watch for these three issues (and prescriptions for dealing with them) in future installments of this article, originally published in Better Software magazine.)
Estimates and Buffers in Critical Chain (Part 5 - Using Buffers) -- OK, folks. We're in the home stretch of this series on buffered promises, buffer sizing, estimating, and buffer management in Critical Chain-based project management (CCPM). Putting it together has raised a bunch of other thoughts for yours truly, but enough's "good enough," for now; that is, it'll be enough after this final installment on the mechanics of Buffer Management. Let's start with a bit of history on the evolution of those mechanics...
(1) Way back in the early days (1992-97) of CCPM, Buffer Management started with a simple (many now say simplistic) tripartite "red-yellow-green" process in which the buffer was divided into three equal parts for the life of the project. If consumption of the buffer was less than 1/3 of its original size, project health was considered OK (green). As it crossed the 1/3 line into the middle (yellow) third, appropriate action was deemed to be "watch and plan" possible recovery actions, to be implemented if the project deteriorated to the extent that the 2/3 consumption (red) line was crossed. The idea of delaying implementation of recovery action was to avoid unnecessary "tinkering" with the system and distraction of the project team.
(2) A refinement of this approach, to catch major problems that move too far too quickly, added a watch on the trend of buffer consumption; sort of an SPC-like approach. If the rate of project buffer consumption proved to be consistently faster than the rate of completion of the critical chain, a "yellow-zone" watch-and-plan state would be triggered. If that trend continued for a number of reporting periods, the developed recovery plan would be implemented.
(3) Later, recognizing that as the project approaches completion, less buffer is required to protect against uncertainty, the original straight division of buffer into thirds - (1) - was transformed into a sloping set of thresholds, with larger and larger "green" portions of buffer as more of the project completes. (Note that the slope on a chart can go in either direction -- up or down -- depending on whether you're tracking buffer consumption against chain completion or remaining versus needed buffer.)
(4) A variation on the sloping green-yellow-red concept - (3) - replaces straight-line borders with borders that reflect the varying amounts of buffer needed to protect the remaining tasks.
(5) These "fever charts" have also been adapted for quick views into the relative health of multiple projects.
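As a rough illustration of the sloping thresholds in (3), here is a minimal sketch. The border positions and slopes are invented for the example; real implementations tune them, as discussed below:

```python
# Hypothetical sloping fever-chart classifier; threshold values are
# illustrative assumptions, not a CCPM standard.
def buffer_zone(chain_pct_complete, buffer_pct_consumed,
                green_start=0.10, green_end=0.70,
                red_start=0.30, red_end=0.90):
    """Classify project health on a sloping red-yellow-green chart.
    The green-yellow and yellow-red borders rise linearly with chain
    completion, so early consumption is alarming while the same
    consumption late in the project is tolerable.
    """
    g = green_start + (green_end - green_start) * chain_pct_complete
    r = red_start + (red_end - red_start) * chain_pct_complete
    if buffer_pct_consumed <= g:
        return "green"
    if buffer_pct_consumed <= r:
        return "yellow"
    return "red"

# 40% of the buffer gone at 10% chain completion vs. at 80% completion:
print(buffer_zone(0.10, 0.40))  # → red
print(buffer_zone(0.80, 0.40))  # → green
```

The same classifier run across a portfolio of projects gives the multi-project "quick view" mentioned in (5): one (completion, consumption) point per project on a shared chart.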
One of the things that seems to vary from implementation to implementation is the positioning of the thresholds relative to the buffer size. The tradition of splitting the buffer into three equal components is strong with consultants and authors who tend to respect the theory associated with avoiding unnecessary tinkering with the project. However, in reality, I have yet to implement the process in which there is not a strong pressure to implement "buffer recovery" actions as soon as they are discovered, and to tell the truth, that's tough to really argue in most cases in which projects are a mix of date promises and ASAP efforts. As a result, sometimes the yellow zone is shrunken or even eliminated altogether.
But this still begs the question as to where the border between green and yellow should be. What I like to recommend is to take advantage of the strengths of the two buffer-sizing processes: sizing the buffer with "Half-the-Safety" and defining the yellow zone at the "Square-Root-of-Sum-of-Squares" size. In typical real-life chains, this results in a nice comfortable green zone that allows the project to run while absorbing Murphy's Law before the SRSS 85-90% confidence point gets broken. For due-date driven projects, I also prefer a "sloping" approach, as in (3) or (4) above, to allow a healthy project to run without distraction or pressure to tinker in its later life.
(Unfortunately, the CCPM software packages I've worked with recently don't easily mix "Half-the-Safety" with "SRSS" calculations, and therefore require a bit of simple Excel manipulation to accomplish this. That said, setting the green-yellow threshold at about the traditional 1/3 position works -- dare I say -- more than "good enough.")
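For the curious, the two sizing calculations can be sketched in a few lines. The 2-point estimates are invented, and the helper names are my own; the arithmetic is the standard form of each method:

```python
from math import sqrt

def half_the_safety(estimates):
    """Buffer = half the total safety removed, where each task's
    safety is (safe - aggressive) from its 2-point range estimate."""
    return sum(safe - aggressive for aggressive, safe in estimates) / 2

def srss(estimates):
    """Buffer = square root of the sum of squares of task safeties."""
    return sqrt(sum((safe - aggressive) ** 2 for aggressive, safe in estimates))

# 2-point (aggressive, safe) estimates in days for tasks on one chain:
chain = [(4, 8), (3, 6), (5, 10), (2, 4), (3, 7), (4, 9)]
project_buffer = half_the_safety(chain)   # sizes the full buffer
yellow_line = srss(chain)                 # yellow threshold inside it
print(project_buffer, round(yellow_line, 2))  # → 11.5 9.75
```

Because SRSS grows with the square root of the number of tasks while Half-the-Safety grows linearly, the SRSS value sits inside the larger buffer on any chain of more than a handful of tasks, which is exactly what makes it usable as the green-yellow border.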
For that matter, all of the above-described buffer management methods tend to be based on the "Olympic Stadium" example; the kind of projects that have promise dates that are being protected by the buffer and buffer management, and that accrue little benefit from considerably earlier finishes. However, when one shifts to an ASAP (the sooner we finish, the sooner we can ring a cash register) project environment, the emphasis also needs to shift -- from the health of buffers protecting not-to-exceed dates to projections of anticipated completion and to the encouragement of focused performer attention for maximum speed with quality. Since many of these usually have some associated "not-to-exceed" date attached to them anyhow, the basics described above can still apply, but with a bit more emphasis on "when can this get done?"
The original buffer provides an initial range of possible completions. As the project proceeds, learning and experience refine expectations. By performing rolling recalculations of the "buffer required for remaining work" (for which my preference is the more aggressive SRSS method), we get better refinements (narrower ranges) of when we can expect the project to complete. Also, by focusing on the movement of that range in time or the SPC/trend view of buffer consumption, we can be forewarned that things might be going awry, schedule speed-wise.
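A minimal sketch of that rolling recalculation, assuming SRSS sizing over the (aggressive, safe) estimates of whatever tasks remain open (the numbers are invented):

```python
from math import sqrt

def srss_buffer(pairs):
    """SRSS buffer for the (aggressive, safe) estimates still open."""
    return sqrt(sum((safe - aggr) ** 2 for aggr, safe in pairs))

# The remaining chain at three successive updates; finished tasks
# drop off the front as the project progresses.
updates = [
    [(4, 8), (3, 6), (5, 10), (2, 4)],
    [(3, 6), (5, 10), (2, 4)],
    [(2, 4)],
]
ranges = []
for remaining in updates:
    early = sum(aggr for aggr, _ in remaining)   # aggressive finish
    late = early + srss_buffer(remaining)        # buffered finish
    ranges.append(late - early)                  # width of the projection

# The completion range narrows as work completes and uncertainty burns off.
print([round(r, 1) for r in ranges])  # → [7.3, 6.2, 2.0]
```

Watching how the midpoint of that range moves in calendar time, rather than just how wide it is, gives the early warning on schedule speed described above.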
Projects are exercises in turning uncertain events into reasonably certain promises. The processes common to Critical Chain-based project management, 2-point range estimates, buffered schedules and promises, and buffer management of project execution all explicitly deal with the inevitability of uncertainty, variation, and risk. They can be used to good advantage for protecting promises and realistically projecting project completions.
This concludes this little (???) tutorial on buffers and estimates in Critical Chain environments. A number of auxiliary thoughts and ideas popped into my head while re-reading and writing these installments, and they will probably arise in future weblog postings. (One, for example, is the relationship of "old wine" PERT scheduling to Critical Chain's "new bottle.") Also, you'll notice that this last installment included some graphics. I might go back at some point and put pertinent pictures in preceding posts as well. I'll let you know if/when I do.
If any of these entries have triggered questions or comments, please use the comment links to submit them so we can turn my monologue into a dialogue.
Estimates and Buffers in Critical Chain (Part 4 - Buffer Management Basics) -- In the first three parts of this series on estimates and buffers in the Critical Chain project management methodology (CCPM), the focus was on their development and sizing. In this installment, the subject shifts to using buffers to manage project execution. While, on the surface, buffers appear to be primarily a means to protect a project's schedule promise from inevitable uncertainty, they are also at the center of day-to-day decision-making in a CCPM environment.
Planning and scheduling is about making promises. Managing project execution is about keeping those promises in the face of uncertainty, variation, and risks both identified and unidentified. Once a schedule is developed and commitments are made, we enter the real world of project execution. A plan and schedule are merely models of expectations associated with the project. Reality will create deviations from those expectations as early as day one of the project. These deviations are both the result and the precursors of changing risks and opportunities associated with the project.
As reality deviates from the model of expectations, tasks will take longer or shorter than accounted for in the schedule. As tasks are worked, better understanding of the reality of the project is developed (including potentially significant changes in the details of the project's product). As the project progresses, more becomes known about later tasks as a result of findings in earlier tasks. These variations in performance, new knowledge, and resulting refinements in expectations need to find their way into the understanding of the project and its promises. In a Critical Chain-based project, buffers are consumed or replenished accordingly, acting as shock absorbers designed to protect promises from the unavoidable variation in task performance.
But sometimes those deviations are greater than anticipated. Sometimes, the shock absorbers threaten to "bottom out." Sometimes corrective actions are needed to mitigate the accumulation of anticipated and unanticipated variation. How does one assess the current risk and whether and how to act?
Critical Chain-based risk management and project "control" does not end with building the schedule and making the promises. The full name of the Theory of Constraints solution for single-project management is Critical Chain Scheduling and Buffer Management. Buffer Management is the key CCPM process for monitoring and controlling projects. It provides the basis for ongoing awareness of changing risk and guidance for when that risk suggests a need for action.
The use of Buffer Management is not unlike the use of statistical process control (SPC) in production environments, helping differentiate the impact of common cause variation (related to anticipated, accepted risk in the project world) from special cause variation (unanticipated or unplanned risks). It is based on straightforward methods of assessing both the consumption of buffers relative to project completion and the trending of that consumption, and requires minimal data gathering to facilitate its calculation. As a result, buffer reporting becomes a tool that is usable not only by the project management elite, but also by top management as well as project performers and their managers to assess and appropriately act on risks as they rear their heads.
Like Project Risk Management, Buffer Management is a future-facing process. At its core is an approach to updating tasks that eschews emphasis on what has been done or “percent complete” in favor of what matters in terms of the promise – what remains to be done. Just as risks are potential events yet to be encountered, task updating based on estimates of the duration to the completion of active tasks reflects any remaining risks for those tasks. Similarly, other traditional methods of soliciting concerns about risks for future tasks can also be translated into changes in expected durations of those tasks or into insertions of new tasks into the original chains in the project network.
Combining the cumulative previous buffer consumption with the current task’s remaining duration (or new understanding of future work) provides a new, current view of the state of the buffer. Comparing how much buffer remains to the amount of buffer required to protect the project’s promises from the variation expected in the remaining work allows an assessment of the health of those promises. (Technical details of various means of doing this comparison will be covered in Part 5 of this series.)
Risk assessment during project execution can be assisted by determining if buffer remaining is less than buffer needed, or if buffer consumption exhibits a troubling trend. Risk response planning can be based on thresholds of buffer consumption or comparisons of the rate of buffer consumption to the rate of related task chain completion. These thresholds (sometimes taking on the ubiquitous "green-yellow-red" nomenclature) are used to determine whether it is appropriate to act to mitigate the impact of these risks or accept the remaining risk as within the ability of remaining buffer to deal with it. Sometimes it can be even more important to avoid developing and implementing unnecessary corrective actions, especially when those actions require significant time and attention to develop. In that case, awareness of a healthy buffer allows for a comfortable and confident decision to do nothing.
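The trend-based trigger described above might be sketched as follows. The three-period streak rule and the data shapes are illustrative assumptions on my part, not a CCPM standard:

```python
# Hypothetical SPC-style trend watch on buffer consumption.
def trend_alert(history, periods=3):
    """history: list of (chain_complete, buffer_consumed) fractions,
    one pair per reporting period. Alert if buffer consumption outpaces
    chain completion for `periods` consecutive reporting intervals,
    even if no fixed threshold has been crossed yet."""
    streak = 0
    for (c0, b0), (c1, b1) in zip(history, history[1:]):
        if (b1 - b0) > (c1 - c0):   # burned more buffer than chain completed
            streak += 1
            if streak >= periods:
                return True
        else:
            streak = 0               # one healthy period resets the watch
    return False

healthy = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.15), (0.6, 0.2)]
slipping = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.45), (0.3, 0.7)]
print(trend_alert(healthy), trend_alert(slipping))  # → False True
```

The complementary point in the text is worth restating: when neither the threshold nor the trend fires, the healthy buffer is itself the justification for confidently doing nothing.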
In the development of risk responses, Buffer Management also provides input to how much response is necessary. The implicit understanding of risk inherent in the use of buffers carries through the project by allowing project teams to assess how much buffer is required to protect the due date promise from the remaining work. This allows them to determine how much buffer, if any, has to be recovered when faced with a previously unanticipated risk event, so that the effects on the project promise can be determined, reported, and appropriately addressed.
Before ending this installment of the series, another aspect of Buffer Management "basics" must be introduced. Not only is it applicable to the protection of individual project promises; it is also a key to effective Multi-Project Management. In a shared-resource, multi-project environment, there are often situations in which certain resources are called upon by more than one project. The result is what I believe is the question that project management is all about: "What should I be working on to assure maximum benefit to the organization?" With consistent, portfolio-wide buffer management, the relative state of the individual projects' buffers can be easily compared and "what-if'ed" to provide guidance for determining the right answer to that question.
Look for the next installment in this series, which will dig a bit deeper into the mechanics of Buffer Management, and discuss some implications of the methods of buffer sizing in that context.
Estimates and Buffers in Critical Chain (Part 3 - Estimates are Conversations) -- The first two parts of this "tutorial" on estimates and buffer sizing in Critical Chain schedules addressed the use of "open" buffers as an explicit acceptance of schedule uncertainty and several common methods for deriving buffer size. This installment will introduce the recent commentary by Eli Goldratt that triggered the series, and deal with the implications of Goldratt's proposed "Half the Chain" approach to buffers versus the other two, which rely on 2-point range estimates.
In a recent piece in CC@Work (a webletter of Critical Chain software provider Realization), the author quotes Dr. Goldratt on the use of 2-point range estimates (which, with a nod to the nominal confidence levels that they cover, he calls 50/90)...
"50/90 is useless. In projects, no one will ever know what the distribution curve of task duration looks like for any task. Moreover, with chaos all around them, people cannot distinguish between the real variability of a task and the variability resulting from chaos...When implementing CC for the first time, the way to get safeties out of the estimates is to take the current task estimates and cut them in half. As an implementation matures, people get experience in giving tighter estimates and you want to cut the estimates by a number that is less than half."
Let's get the semantics out of the way first. While he has a point regarding the unknowable distribution of possible task durations calling into question the reality of 50% and 90% confidence levels of these estimates, many of us use those numbers (if we do) merely as additional description of "aggressive/average" and "safe/commitment-level" estimates in introducing Critical Chain concepts to a team or organization. The words that most of us put around the concept clearly admit the "unknowable" nature of the estimating process. With its basis in Goldratt's early work on the subject, "50/90" has, at best, simply become an easy-to-remember shorthand for the 2-point range estimating process that is in common usage in many, if not most, Critical Chain implementations. At worst, it's an unfortunate, possibly distracting carry-over from the early days.
Regarding "real variability from a task" versus "variability resulting from chaos," there is little point in drawing that distinction unless and until the chief causes of "chaos"-driven variability can really be separated out (which partially occurs in successful CCPM implementations). In the early goings of a Critical Chain implementation, they are often both included in the "safe" estimate, at least to the extent that the experience and past pain are not overcome by the promises inherent in the training on the approach. The distinction simply doesn't matter regardless of the process used for estimation and buffer sizing. (And anyhow, in execution, the work is going to take as long as the work takes, or is allowed to take, or is forced to take. If the performer behaviors that are the raison d'etre of CCPM are effectively effected, then the organization's learning will allow the recognition of the two sources of variability, but in all likelihood, not before then.)
All that said, 50/90 [2-point range] estimates are far from "useless!" They are the basis for important conversations in the planning process about the risks and opportunities related to the tasks being estimated. The difference between the two estimates highlights opportunities for, and the potential value of, mitigating or even avoiding foreseeable risks associated with the tasks in question. Yes, at that level, they may constitute "noise" in the larger scheme of things if the appropriate work practices are not fully embraced, but there is, in the process, the opportunity to discover meaningful dependencies and to highlight potentially wasteful practices that can be eschewed.
But even more important than the numbers, the conversation that surrounds the range estimating process shows respect for the team...respect for their concerns about the work...respect for their experience...and respect for them as members of the team. Goldratt's recommended practice of "cutting estimates in half" can be perceived as an affront to the team and its experience, and can become an obstacle to a smooth implementation of the non-trivial "cultural" changes that accompany the Critical Chain methodology.
Finally, in the cited comments, he suggests that as "people get experience in giving tighter estimates...you want to cut the estimates by a number that is less than half." How much less? Who determines the necessary experience? When is it appropriate to do this? How much tighter? There is nothing in this "Half the Chain" approach that addresses these questions. At least with the methods that use range estimating, there is a smooth path for tightening or loosening estimates as appropriate and as performers and estimators become accustomed to working in the new environment that CCPM allows.
The citation of Goldratt's opinion continues...
"Along the same lines, advanced statistical analysis to optimize safeties is an exercise in futility. Moreover, since delays propagate across chains in execution (through resource dependencies), buffer sizes should reflect system-level variability, not task-level variability...Buffers should be 1/3 of the total lead-time or half of the length of a chain. This should be consistent across projects. So, next time you are tempted to use 50/90 or some statistical analyses, beware. Simplicity, not complexity, is the answer."
The mention of "advanced statistical analysis" is a swipe at the "Square Root of Sum of Squares" approach. Again, Eli is attacking the idea that the numbers have real meaning as numbers, rather than as views of possibilities. And again, he is assuming that system-level variability is excluded from the range-estimating process while it is very much embedded in the minds of people offering the "safe" estimates.
But to say, by flat-out edict, that "buffers should be 1/3 of the total lead-time" regardless of the lesser or greater variation that may be encountered in a project smacks of a black-box, top-down, command-and-control mindset -- top management versus project manager versus team -- that is not an appropriate way to guide today's project performers (formerly known as resources). Sure, I get a bit nervous when buffers approach 1/4 of the lead time, and yes, the 1/3 "rule of thumb" is a good guide for assessing the estimating process, but who is to say that a highly uncertain discovery or problem-solving project shouldn't have a buffer that approaches 1/2 of the lead time?
Introduced appropriately, with a sense of realism about whether the estimates have meaning as specific numbers or as "good enough" views of expectations, and with some minimal training regarding the differences between safe/commitment estimates and aggressive/average estimates, the methods that rely on 2-point range estimates provide as good, if not better (whatever that means), project promises without the buy-in and teamwork drawbacks of the "cut-the-estimate-in-half" "Half-the-Chain" approach. The "Half-the-Chain" approach may have proven "good enough" in early tests of CCPM, and probably still does technically work, but there is much more to be gained from the conversation that is at the heart of the range-estimating process. That said, you might have noticed the non-meaningful difference between the "Half the Chain" and "Half the Safety" results in Part 2's examples. If the buffers are usually about the same, and the latter approach is more team-friendly, I'll take the latter every time.
While this discussion might be seen as an interesting "difference of opinion" among Critical Chain practitioners -- actually, I suspect it's a difference of opinion between primarily Dr. Goldratt and most practitioners in the field -- the real concern that it raises for me comes in the message from Realization that closes the piece...
"Realization has introduced "buffering policy" in its software that helps institutionalize consistent buffering. Top management specify how much buffer projects should have, and the software makes sure that projects conform to that policy as they enter execution. In addition, we are seriously considering taking the option of two estimates (50/90) out from our CC Planning module by Q3 of 2004. If you have any feedback or concerns, please write to us at email@example.com."
In the name of "simplicity" such top management policies are troubling for the same "black-box, top-down, top management-driven" concerns I raised above. Yes, there should be a consistent means of planning, promising, and performing projects in an organization. But there doesn't have to (shouldn't) be a consistent absolute buffer proportion across all projects.
While I have no concern that other CCPM-savvy software solutions (cc-Pulse, Sciforma's PS8, and ProChain) will abandon the common sense and teamwork-friendly 2-point estimate processes, the combination of a major player doing so with the "blessing" of Dr. Goldratt can only muddy the waters for the further acceptance and growth of CCPM as it approaches a critical mass of acceptance beyond the early adopters.
I'd be very interested to hear comments from other CCPM practitioners, proponents, or users who might be reading this. Please use the comment link at the bottom of this entry for your thoughts. I would particularly like to hear from users of Realization's Concerto software to see if their possible direction down their proposed path has had any impact on the way projects are planned in your organization. Maybe you can shed a little more light on what they are thinking. If my concerns resonate with other CCPM consultants, educators, and users, you might also want to share your thoughts with them via the email link mentioned above.
Now that I've dealt with the reason for this series with this rant, the next and last part of it will go into more on the interpretation and uses of buffers as well as some thoughts on possible different applications of two range-estimate-based methods.
Estimates and Buffers in Critical Chain (Part 2 - Calculating Buffers) -- In the first part of this series, I started with a bit of Critical Chain "buffer basics." Today, we move back a step to describe the various means of sizing these buffers. There are three common methods, all supported in one form or another by most CCPM-friendly project scheduling software solutions...
The earliest proposed method, found in Eli Goldratt's original introductory book Critical Chain, is to take the estimated duration of the chain in question, cut it in half to account for the assumed task-embedded safety, and put half of what was cut back into the promise in the form of a buffer; project buffer if the chain of tasks in question is the project's critical chain, feeding buffers where non-critical chains feed into or merge with the critical chain. This obviously assumes that the original task estimates contain a non-trivial amount of safety within them to start with. For want of a better name, I'll refer to this method of buffer sizing as the "Half the Chain" approach. (The pros and cons of this approach, as well as the others, will be covered in detail in the third part of this series.)
Example of "Half the Chain" -- In a single chain project of ten tasks, with "safe" estimates totaling 200 days, the chain would be cut to 100 days, and half of what was cut -- 50 days -- would be added back as a buffer, for a maximum duration promise of 150 days.
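The arithmetic of the example can be sketched in a few lines of Python. The equal 20-day tasks are an assumption for illustration; the article's example gives only the 200-day total:

```python
def half_the_chain(safe_estimates):
    """Goldratt's original sizing: cut the safe chain in half,
    then add back half of what was cut as the buffer."""
    safe_total = sum(safe_estimates)
    chain = safe_total / 2             # halved task estimates form the chain
    buffer = (safe_total - chain) / 2  # half of the removed safety
    return chain, buffer, chain + buffer

# Ten tasks with "safe" estimates totaling 200 days (assumed equal splits).
chain, buffer, promise = half_the_chain([20] * 10)
print(chain, buffer, promise)  # 100.0 50.0 150.0
```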
The other two approaches to buffer-sizing start with a 2-point range estimate for project tasks. In both cases, the larger of the two estimates would be a "safe" estimate that the performer would be comfortable committing to, with a confidence level of 80-95% that it will be long enough for the anticipated work. This estimate respects the concerns of the person estimating (and ideally, doing) that anticipated work.
The second estimate is more "aggressive." It is typically described as one with about 50% confidence...half the time it will be beat, half the time it will be missed. I like to ask for a near "best-case" situation when soliciting this smaller estimate, suggesting the performer/estimator approach it with the assumptions that it is the only thing they are working on, that they are protected from interruptions, that all their inputs are ready and of good quality, that their boss is on vacation during the work, and that the task is done with minimal problems. It should be short enough that they're not really comfortable with it as a commitment, but long enough to allow them to consider striving for it (and not just blow it off).
(Note: Some Critical Chain practitioners are uncomfortable characterizing the smaller estimate as "aggressive." Given its typically median position in the range of confidence levels, they prefer to refer to it as an "average" or "expected" time. All things being equal, I would agree, but all things aren't necessarily equal. I like the "aggressive" nomenclature because, for performers whose memory of past performance was hindered by Parkinson's Law and multi-tasking, historical "average" times can be equated too closely with the longer "safe" estimates -- self-fulfilling prophecies of their experience. Also, considering the common and long-standing use of estimates as commitments, the "average" times that we are looking for are indeed "aggressive.")
It is not uncommon that the ratio of these "safe" and "aggressive" estimates is 2:1 or greater. (That said, if there is no real uncertainty associated with the task in question, they could also be very close, or even equal.)
As I said, the other two approaches to buffer sizing start with these "safe" and "aggressive" estimates. The most common approach is the "Half the Safety" approach. The project network is built and leveled using the smaller, "aggressive" estimates, and buffers are sized at half of the difference between the sums of the two estimates for the chain of tasks in question.
Example of "Half the Safety" -- In our previous single chain project of ten tasks, with "safe" estimates totaling 200 days, the solicitation of the "aggressive" estimates results in a total chain length of 90 days.
The safety associated with this chain is 110 days of the 200. With the chain sized based on the "aggressive" 90 days, the buffer is added as half of the safety removed (55 days), for a total maximum duration promise of 145 days.
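The same sketch extends naturally to this method. Again, the uniform per-task splits (20 days safe, 9 days aggressive) are an assumption; the article gives only the chain totals:

```python
def half_the_safety(safe_estimates, aggressive_estimates):
    """Build the chain from aggressive estimates; the buffer is half the
    safety removed (the difference between safe and aggressive totals)."""
    chain = sum(aggressive_estimates)
    safety = sum(safe_estimates) - chain
    buffer = safety / 2
    return chain, buffer, chain + buffer

# Safe total 200 days, aggressive total 90 days (assumed equal splits).
chain, buffer, promise = half_the_safety([20] * 10, [9] * 10)
print(chain, buffer, promise)  # 90 55.0 145.0
```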
The third approach to buffer sizing is rooted in the statistical justification for why the other approaches provide a "good enough" buffer, and is known as the "Root Square Error," or more commonly, the "Square Root of Sum of Squares" approach (SRSS). It suggests that the minimum buffer size for a set of tasks that have been described by a pair of "safe" and "average/aggressive" estimates can be derived by taking the square root of the sum of the squares of the differences in these estimates. If the safe estimates are in the 90-95% confidence area, then the buffer size derived by this method should provide similar 90-95% confidence for the promise of the chain. Admittedly this approach makes some assumptions about the distribution of uncertainty and independence of the tasks in question, but still provides a mathematically discussable and reasonably "good enough" view of the minimum buffer size one should consider using.
Example of "SRSS" -- In our previous single chain project of ten tasks, with "safe" estimates totaling 200 days, the solicitation of the "aggressive" estimates results in a total chain length of 90 days.
With the chain sized based on the "aggressive" 90 days, the buffer is added as the square root of the sum of the squares of the differences (37 days), for a total maximum duration promise of 127 days.
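The SRSS formula is also easy to sketch. Note that with uniform hypothetical per-task splits (20 safe / 9 aggressive), the buffer comes out near 35 days; the article's 37-day figure presumably reflects per-task differences that were not all equal:

```python
from math import sqrt

def srss_buffer(safe_estimates, aggressive_estimates):
    """SRSS: buffer = sqrt(sum of squared (safe - aggressive) differences)."""
    chain = sum(aggressive_estimates)
    buffer = sqrt(sum((s - a) ** 2
                      for s, a in zip(safe_estimates, aggressive_estimates)))
    return chain, buffer, chain + buffer

# Uniform assumed splits: ten tasks of 20 days safe, 9 days aggressive.
chain, buffer, promise = srss_buffer([20] * 10, [9] * 10)
print(chain, round(buffer, 1), round(promise, 1))  # 90 34.8 124.8
```

Notice how sharply the aggregation shrinks the buffer relative to the 110 days of removed safety: the squared differences "pool" rather than simply add.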
The first two methods in the example suggest 50 and 55 days for buffer size (well within the "noise" for total project promises of 150 and 145 days), while the SRSS formula provides a suggested minimum buffer of 37 days for a 127 day schedule. The implications, pros, and cons of these methods will be discussed in the next part of this series.
Estimates and Buffers in Critical Chain (Part 1 - Why Buffers?) -- Over in APICS' Constraint Management SIG (CMSIG) discussion group (sadly, no web archive available), there's been an interesting discussion on estimates and the determination of buffer size for use in Critical Chain Schedules. On one hand, this can be seen as yet another discussion of the minutiae of a process that is meant to free us from worrying about minutiae. On the other, it can also be seen as an honorable exercise in understanding our preferred and proffered processes in an effort to improve their utility. I see it as an important discussion among TOC/Critical Chain practitioners who help to implement the approach in the real world, especially due to some recent comments from Eli Goldratt on the subject and the potential impact of those comments. But more about that later.
Buffers are the means of explicitly stating the uncertainty involved in a project in terms of impact on schedule or budget. The most common use of the terminology in project management is in the Project Buffer and Feeding Buffers found in critical chain schedules. Feeding Buffers are about protecting the critical chain/path from interference or delay from "non-critical" tasks -- to help "keep the critical critical" -- and to protect the project promise from the delays associated with integration of activities. The Project Buffer is meant to describe the anticipated range within which a project is expected to be complete, given the understanding of the project at the time of planning (or re-planning). The outer limit of the project buffer is often interpreted as a reasonable "not-to-exceed" promise. The greater the uncertainty associated with the work laid out in the network of project dependencies, the larger the proportion of schedule duration we would expect the buffer to occupy.
Depending upon how estimates are derived -- in most cases through a bottom-up, 2-point range estimating process -- they provide the primary basis for sizing buffers. In the discussion I refer to above, Larry Leach, author of Critical Chain Project Management and PMI's frequent Critical Chain trainer, points out that by allocating some of one's estimate to a common buffer,...
A portion estimating the mean goes into the network, and the rest goes into the buffer. Since much of the buffer adds [in a statistical manner], the total buffer will be significantly less than the amount removed from the tasks...simple math. The more you move [from the tasks] to the buffer, the larger the buffer, and the shorter the overall plan.
In this way, buffers are not unlike an insurance pool that, in the aggregate, requires cumulatively less contribution from the individual components to protect against risks that would impact the schedule as a whole. The amount of "safety" needed to promise a project at the same level of confidence is less than if the pool did not exist -- less than if the safety were spread among the tasks. Larry continues...
If people get hung up on the duration used in the plan part [the network of tasks], they aren't getting it. No matter what number you use there, the probability of that exact duration is exactly the same as any discrete duration number: exactly zero. [...] The only stupid manager is the one who would present any estimate without a buffer. Because, if they promise to make it without a buffer, then I know they are sandbagging, and their estimate is much higher than it needs to be.
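The insurance-pool effect behind these comments can be illustrated with a small Monte Carlo sketch. All distributions and numbers here are hypothetical: ten right-skewed tasks, each protected individually at its 90th percentile, versus the whole chain protected at the 90th percentile of its total:

```python
import random

random.seed(1)

def percentile(values, p):
    """Simple empirical percentile (nearest-rank)."""
    s = sorted(values)
    return s[int(p * (len(s) - 1))]

N = 20000
# Ten hypothetical tasks, each skewed right: usually ~8 days, sometimes far longer.
samples = [[random.triangular(5, 30, 8) for _ in range(N)] for _ in range(10)]

# Per-task protection: each task padded to its own 90th percentile.
sum_of_task_safeties = sum(percentile(t, 0.9) for t in samples)

# Pooled protection: the chain as a whole promised at its 90th percentile.
chain_totals = [sum(task[i] for task in samples) for i in range(N)]
chain_90 = percentile(chain_totals, 0.9)

print(round(sum_of_task_safeties, 1), round(chain_90, 1))
assert chain_90 < sum_of_task_safeties  # pooling needs less total protection
```

The pooled promise comes out far shorter than the sum of individually padded tasks, which is Larry's "simple math" point: much of the variation cancels when safety is aggregated.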
One of the major benefits of making promises based on such buffered schedules is that even if details of later aspects of the project are not perfectly clear at the time of planning and promising, "good enough" assessments of that lack of clarity can often be made, and explicitly laid out in the project's plan and schedule. Another benefit is the ability to use the buffers as the basis for assessing the health of project promises without the artificial priorities and undesirable effects that come from relying on task due dates for such purposes.
There are three commonly used approaches to sizing buffers in critical chain schedules. They will be the topic of the next part of this series.
Critical Chain - Old Wine, New Bottle? -- In the seven years since the publication of Goldratt's Critical Chain (sheez! Has it been that long?), one of the most common comments I've heard about it from seasoned project managers and project management thinkers is that it is "nothing more than old wine in a new bottle." I just heard it again last week.
I've got some argument with this assertion, but I must say, to some degree, there is something to it, as I see many project managers trying to do in an ad hoc manner many of the things that are institutionalized in critical chain-based project environments. I can even point to parallels of pieces of the critical chain approach in PMI's PMBOK Guide. But then, common sense is just common sense. The problem is, common sense is too often not common practice.
Having recently sub-contracted myself out to teach a non-critical chain, PMBOK Guide oriented class in estimating and scheduling, I came to realize that if one takes PERT to its logical conclusion -- something that is rarely, if ever done -- you've got a historical basis for rationally buffered schedules. But like I said, it's rarely ever taken to that logical end. Like much of the body of knowledge associated with project management, it's good practice, just not common practice.
But back to the oenological metaphor...
Old wine Critical Chain-based Project Management may be, but if it's been largely overlooked in the back dusty shelves of the wine-cellar, better it be put in shiny new bottles with modern, informative labels, so that it will be drunk and appreciated.
Agile Estimating and Planning (More Critical Chain from Outside the TOC Community) -- If you're planning to attend SD West 2004 (Software Development Conference and Expo in Santa Clara, March 15-19), you might want to check out a presentation by Mike Cohn. According to the session description:
Planning is important even for projects using agile processes such as XP, Scrum, or Feature-Driven Development. Unfortunately, we've all seen so many worthless plans that we'd like to throw them away altogether. The good news is that it is possible to create a project plan that looks forward six to nine months that can be accurate and useful.
In this class we will look at why traditional plans fail but why planning is still necessary even on agile projects. We will look at various approaches to estimating including unit-less points and ideal time. The class will describe four techniques for deriving estimates as well as when and how to re-estimate. We will look at how to use Critical Chain techniques to create a plan that dramatically improves the project's chances of on-time completion. Also discussed will be using velocity to track progress against the plan.
This class will be equally suited for managers, programmers, testers, or anyone involved in estimating or planning a project.
I wish I could be there. Between this presentation and the recent new edition of Death March, Critical Chain-based project management is getting a lot of attention from new sources.
Death March - 2nd Edition (Promotes Critical Chain) -- A while ago, I pointed to a "work-in-progress" page for Ed Yourdon's second edition of his classic book, Death March. It got my attention particularly due to his appreciation of Critical Chain Scheduling in Chapter 7. Ed's site no longer has the "work-in-progress" chapter downloads. That's probably because it was published in December and is available at Amazon.com.
Promises and Prescriptions - The Article -- If you happen to subscribe to STQE, a magazine/journal about software, testing, and quality, you might have noticed that with the January/February 2004 issue, they have changed their name to Better Software. And if you peruse that issue's table of contents, you'll find an article on project management entitled Promises and Prescriptions. That same table of contents pulls a quote from the article...
"It's easy to fall into multi-tasking because it sounds like it will make things go faster. It won't. It will only keep people busy -- and unavailable."
Good stuff. I wonder who the author is...
Gee, that name looks familiar. Could it be...?
Yes it could. Thanks to Technical Editor Esther Derby, I've made it on to another set of glossy pages. (My first was with an article that introduced Critical Chain to the pages of PMI's PM Network mag.) For those of you who don't get Better Software, I plan to put this new one out here on the weblog in serial form. Watch for the first installment in February. (After all, I really should give the "paying customers" first dibs on it.)
(I was going to delay this announcement until closer to February, but Johanna Rothman spilled the beans this morning.)
Agile/CCPM - Non-Meaningful Distinctions -- A recent thread in the TOCExperts YahooGroup has touched on the subject of SCRUM, one of the family of Agile approaches to [software development] projects. Being a Theory of Constraints oriented discussion group, it was to be expected that at some point in the conversation, a "comparison" with Critical Chain-based Project Management (CCPM) would pop up. Clarke Ching, a frequent and knowledgeable member of TOCExperts, offered an excellent summary of SCRUM for the non-practitioner, but within it, he offers a few bullets on how SCRUM differs from CCPM that are worth some questioning comment. The first "difference" offered is that...
CC aims to get the project finished as quickly and reliably as possible. Scrum aims to get working functionality delivered as quickly as possible.
This is, in my opinion, a non-meaningful distinction, as the basis of the difference is in the comparison of "project" and "working functionality." In a CCPM-managed project, there is nothing to say that the objective deliverables can't be pieces of "working functionality." There is nothing to say that individual pieces of "working functionality," delivered via SCRUM practices, can't be assessed vis-a-vis expectations of cost and schedule via CCPM's buffer management, either as individual sub-projects, or as deliverables diverging from the mainline critical chain of the overall project. The management of the effort is related to the second offered difference...
CC buffers with time. Scrum buffers with functionality.
Now I've been known to utter similar comments about buffering with time versus buffering with scope, but reviewing some recent descriptions of "burn-down" charts common to agile environments (and perhaps viewing the recent PBS NOVA episodes on string theory), I've come to view scope and time as sufficiently intertwined that the distinction is only one of perspective. Like the idea of space-time, drawing too fine a distinction between the components of scope-time is a distraction at best. Too much scope left (for the time we wanted to deliver it) and not enough time left (to complete the work we would like to) are the same thing. Any decisions about changes in the work content to recover our initial promises/targets or the deliverables that we finally produce in either approach immediately get into the question of the time of work remaining (and vice versa).
At the practical, less metaphysical, level, the content of SCRUM meetings is nothing more than the equivalent of the content of CCPM's daily buffer management reviews of single projects. "Burn-down" charts, used to assess what is left in SCRUM and other Agile project environments, can map directly to CCPM buffer consumption analyses. Both, with their forward-looking view, are superior to the backward focus on supposedly immutable baselines or completed work as the source of "earned value" in other approaches to projects.
The third difference is offered as...
CC says "Don't put the safety in the task; put it in the project." Scrum says "Don't try and figure it all out up front because you can't. Things will change too much as you go. Instead, build working software quickly, inspect and adapt."
Again, I don't see the "difference." I almost saw putting these two together as a non sequitur, but then there might be some connection, and again, more similarity than difference. SCRUM and other Agile approaches use plans for their efforts. They're just not detailed in a way that locks them into a calendar or to artificial dependencies to make promises, mapping them instead to a set of time-boxed sequential iterations or sprints. CCPM also frees the interim activities of a project from the calendar by removing the idea of task-level safety, commitment, and due-dates.
As in most things, when there are two common sense approaches to a particular issue, there is often more in common than there is different.
Critical Chain Case Study - Application of Critical Chain on High Value Petrochemical Projects -- I frequently get asked about examples of the use of Critical Chain Project Management in specific environments. This link points to a case study of its use in two construction contracts completed by Krupp Uhde, a South African engineering contractor in the design and build business. It talks about two projects: one that finished "in 16 months vs. industry norm of 24 months," and one that completed...
"in 4 month including working over the Christmas period vs. an industry norm of 12 months for a plant of this size. This is phenomenal when one considers that the conventional equipment delivery time for a large distillation column, which was part of the plant, is 9-10 months."
The case study also offers a list of lessons learned...
- Always start with a simple plan.
- Do not allow the momentum of a large project to lull you into a false sense of security in terms of progress. A large project is like a large ship with a huge amount of momentum; once you are going in the wrong direction, it is very difficult to steer back onto the right course. The critical chain approach makes for easy steering.
- Identify critical path and manage to keep one critical path.
- Identify "bottleneck" vendors/sub-contractors early on.
- Do not fast track by traditional methods like early orders and start of construction before you know what to order and what to build. These methods give a false sense of security in terms of schedule progress. The true result of such steps is the snowballing of errors that will exist in any project. It is cheaper and quicker to fix something on paper than in concrete and steel.
And if there's anyone else out there interested in bragging about their accomplishments via Critical Chain, let me know, and I'll pass them along here. I get a lot of questions about its use in software and pharmaceutical development environments.
Stage Gates and Critical Chain -- In a recent Sciforma Q&A column by Harvey Levine, the following question and answer about stage-gates, critical chain-based project management, and milestones appeared...
"I work for an Electronic High Tech firm and currently we utilize a Stage-Gate process for our NPD and have been researching the benefits/limitations of the ToC. In your paper you mention both and I would like to know if these can co-exist? From my research to date, it appears to me that the Stage-Gate methodology of NPD process management can co-exist with a ToC based task scheduling model? Can you advise?
...I assume that by "TOC" you mean that you are using critical chain methods. The early concepts for CCPM, as expressed by Goldratt, discouraged using milestones. Yet, milestones are at the very foundation of Stage-Gate. So, in theory, we could believe that TOC and S-G are incompatible. However, in reality, I think that this is not true. In practice, many CCPM implementers have ignored the original taboos that were proposed by TOC proselytizers and have incorporated most of the traditional CPM-type capabilities. While I cannot spell out the specific means of using CCPM with S-G, I can't see why these two processes cannot co-exist.
I'm in general agreement with Harvey's comments in the Sciforma Q&A column (although I'd be curious to hear more from him about the "CPM-type capabilities" he mentioned being incorporated by CCPMers -- I've dropped him an email query; if he responds, I'll pass it along), and offer a couple of clarifications on the core topic from the CCPM perspective...
Early talk about milestones in the CCPM community should have drawn a finer distinction between "milestones" and "milestone schedules." Like the concept of "efficiency," the idea of milestones has had unnecessary abuse heaped on it by too many of my TOC brethren, especially those who, satisfied with the success provided by TOC solutions, fail to look beyond to the core concepts and definitions of preceding bodies of knowledge.
Milestones -- that is, specific events of special importance, such as stage gates -- are facts of life in a project, totally consistent with CCPM, and pose no problem whatsoever in the use of CCPM. The idea of pinning those milestones to target dates via a "milestone schedule," however (or of setting target dates for any task, for that matter), sets up the project for the impact of Parkinson's Law. Date-driven milestone schedules should be avoided if speed of delivery of the overall effort is important. What we want is a relay race, not a train schedule punctuated by stage gates attached to the calendar.
If there is some rare real reason to identify a target date with an intermediate milestone (usually some promise external to the flow of the project not associated with the completion of the project), that date would/could/should be buffered in the same manner as the final project promise date, with a "project buffer." Otherwise, stage-gates should float in time, uninhibited by any bogus "good enough, on-track" time targets, so that they can take advantage of early completions and avoid driving questionable quality. After all, the work is going to take as long as it takes. If the project environment assures efficient and effective behaviors in performing project tasks, unhampered by false priorities of task/milestone due dates, then the stage gates will be arrived at in a timely manner.
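To make the buffering idea concrete, here is a minimal sketch of one common CCPM buffer-sizing rule -- the square-root-of-sum-of-squares method -- applied to a chain of tasks feeding a promised date. The task names, durations, and the choice of sizing rule are illustrative assumptions, not details from the sources discussed above:

```python
import math

def buffer_size(chain):
    """Size a buffer from the safety removed from each task.

    chain: list of (aggressive_estimate, safe_estimate) pairs in days.
    Uses the square-root-of-sum-of-squares (SSQ) rule: the buffer is
    the square root of the summed squares of each task's removed safety.
    """
    return math.sqrt(sum((safe - aggr) ** 2 for aggr, safe in chain))

# A hypothetical 4-task chain feeding a promised (external) milestone.
chain = [(4, 8), (6, 10), (3, 6), (5, 9)]
work = sum(aggr for aggr, _ in chain)   # 18 days of aggressive estimates
buffer = buffer_size(chain)             # sqrt(16 + 16 + 9 + 16) ~ 7.5 days
promise = work + buffer                 # the buffered promise date
print(f"work: {work}d, buffer: {buffer:.1f}d, promise: {promise:.1f}d")
```

The point of the sketch is that the safety lives in one shared buffer protecting the promise, rather than being spread across individual task or milestone due dates where Parkinson's Law can eat it.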
Effective projects are not about the schedule and plan. They're about the behavior and the work, about control mechanisms that don't get in the way of speed or quality, and about how, when faced with reality, performance matches, deviates from, or needs to detour around the expectations that are the schedule and plan.
The Meaning of "Schedule" -- This exploration of the uses and misuses of the word "schedule" by Sheryl Smith from StickyMinds.com (may require free registration) is closely related to what I was going to write as a follow-up to yesterday's piece on late projects and PMOs...the one on the Forrester study that defines project failure in terms of only one factor -- lateness.
This implies lateness of something specific, something planned, something promised -- a product scope, for want of a better term. The agilistas among us are probably going into apoplexy about such a survey, as they tend to do with the hoary old Standish "chaos report" that gets trotted out every time anyone wants to point a finger at project management failure in the IT context.
The idea of a fixed scope with a fixed budget and a fixed schedule and due date seems to be anathema to those of the Agile/XP persuasion. Now don't get me wrong. I like agile methods -- they address very nicely a range of issues common to environments characterized by high uncertainty. Some of my best friends are agile. And for that matter, from where I sit (in the Critical Chain PM world located somewhere between the perceived - by agilistas - rigidity of "traditional" project management and the perceived - by traditionalistas - chaos of agility) the idea of fixed scope, budget, and schedule is something that doesn't exist in my reality either. And apparently, it doesn't exist for Smith either...
"An honest, real schedule won't be gospel, and will still slip...A real schedule is hard to predict, and puts a focus on accuracy that some of us don't want to see ahead of time. In our high-stress community, sometimes we expect rewards for busyness and speed, not for accomplishment and quality. People want to believe that longer hours mean more work gets done, whether this is true or not. Managers want to believe that a pushed project finishes faster, whether this is true or not. Studies have cast doubt on both these notions, but the notions live. We're not quite ready yet to give them up."
There may be, or rather, needs to be an initial target scope, budget, and schedule to define an effort as a project and to determine whether or how much of it to pursue. Part of the scope may involve discovery along the way, some of the schedule may seem rather nebulous up front, and the delivery budget may be based on a range or on a "not to exceed" target, but they constitute a specific scope, schedule, and budget nonetheless. And they need to be treated as promises until and unless there are rational reasons to do otherwise. As such, they are models of expectations, subject to necessary and appropriate change.
When I hear some of the agile persuasion saying things along the line that when something gets done "doesn't matter as long as it meets business needs" and proudly claiming that this viewpoint is more "agile" than otherwise, I get confused. On one hand, I agree that, in many cases, if the effort is worth doing, it is worth doing. But on the other hand, "business needs" include aspects of timeliness and predictability. As Deming has been known to say, "Management is prediction." Projects -- even IT projects -- don't live in an isolated world of their own, and the highly uncertain parts are being pursued with the idea of delivering something reasonably certain in a reasonably certain timeframe to support a range of business needs, from external promises to customers to the internal means to manage resource capacity and the ability to deliver other projects as well. From Smith...
"High-tech projects are criticized for being "slow"—they're never criticized for being unpredictable. Yet aren't accurate predictions what business needs most? When there is no realistic schedule, people in the trenches conspire to invent one. They have no other choice because they need to do so to plan their work. The actual users want to know what the team in the trenches knows."
And the only way for either to know -- as well as they can at any point in time -- is to compare the reality of the current situation to the model of expectations that is the overarching schedule or plan for the effort.
Sheryl's piece is a definite keeper. The only thing I have to add is that, in her "Schedule=Schedule" section, part of an "honest, real schedule" needs to include explicit acknowledgment of the uncertainty that separates the expectations along the way from the ability to make a reasonable promise regarding final completion of the effort and the ringing of the project's cash register. Things will and should slip or pick up along the way from those interim expectations due to uncertainty and variation, but these slips and pickups are most likely within a reasonable -- and unmanageable -- range of "noise" that is not worth obsessing over. As a model of those expectations that includes a factor to recognize an acceptable noise level (and when it is exceeded), the schedule is a tool for assessing the health of what matters -- the final project promise.
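The "acceptable noise level (and when it is exceeded)" idea is exactly what CCPM's buffer-management "fever chart" does: it compares buffer consumption to chain completion and only signals when consumption races ahead of progress. The zone thresholds below are illustrative assumptions; real implementations tune them per organization:

```python
def buffer_status(chain_complete, buffer_consumed):
    """Classify project health on a simple CCPM-style 'fever chart'.

    Both arguments are fractions in [0, 1]. Buffer consumption at or
    below the fraction of chain completed is normal 'noise'; only when
    it runs well ahead of progress does the promise need attention.
    """
    excess = buffer_consumed - chain_complete
    if excess <= 0.0:
        return "green"    # noise -- no action needed
    elif excess <= 0.2:
        return "yellow"   # watch -- plan a recovery action
    else:
        return "red"      # act -- buffer burning faster than progress

print(buffer_status(0.50, 0.30))  # green: consumption trails progress
print(buffer_status(0.50, 0.65))  # yellow: slightly ahead of progress
print(buffer_status(0.30, 0.70))  # red: promise is in jeopardy
```

The schedule, buffered this way, stops being a stick for beating tasks into due dates and becomes what the post argues for: a tool for assessing the health of the final project promise.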
"I occasionally come across the phrase 'project portfolio management', or sometimes just 'project portfolio', but so far it has failed to convey much concrete meaning to me. Perhaps because it is often found in the company of ethereal buzzwords like 'enterprise architecture', or perhaps because it is difficult enough to survive single projects, the agile crowd seems to be likewise unconcerned with the topic."
...and goes on with an insightful discussion of the commonly faced issue of whether to "redesign our product or rewrite it from scratch." He is quite correct that this is a portfolio management issue -- a product portfolio management issue. And it is one that needs to be driven from a strategic viewpoint. The process for upgrading and/or replacing product offerings must be tied -- through a clear strategic roadmap -- to the organizational goals, and communicated clearly to those doing the work.
That said, what got my attention was his comment that "...because it is difficult enough to survive single projects, the agile crowd seems to be likewise unconcerned with the topic." On one hand, this rings true with me, and makes sense in the context of my view of agile processes as most concerned with what I view as task-level practices, less so with the project-level interdependencies typical of more complex projects, and somewhat oblivious of bigger-picture issues of strategic import like portfolios and pipelines. But on the other hand, those very agile processes are also a not-unreasonable response to living in an ineffectively managed multi-project environment. The short-term planning horizon and the closely operating teams focused on moving from point to point in the effort are not unlike what we in the Critical Chain Project Management community refer to as "relay race behavior" -- the minimization of intra-, cross-, or extra-project multitasking in an effort to accelerate completion of handoffs through the projects. The focus that both agile and the "relay race" bring to an organization's culture is a significant contributor to the benefits derived from both agile and CCPM, which, by the way, can work together nicely.
To a large degree, the pressures for agility come from the lack of project portfolio and pipeline management. When there are no clear priorities driven down from strategic plans, through product portfolios, to project portfolios and pipeline management, then individual project managers, resource managers, and resources are left to fend for themselves to answer the question "What should I be working on?" And if project management is the answer to anything, it is the answer to that question.
A picture that should be familiar to readers of this weblog...
Project Portfolio Management is the process of turning a (hopefully related) list of initiatives that come from a strategy into a prioritized collection of projects and programs that are funneled through a pipeline. The result of doing it right is a process that both maximizes benefit for the organization and minimizes undue pressures on the resources expected to deliver them. Too many organizations fail to recognize that the major reason it is "difficult enough to survive single projects" is that those single projects and the people working on them are buffeted by the needs of other projects, planned and otherwise. Unless and until shared-resource, multi-project shops, like R&D, Engineering, IT, and Product Development understand the impacts of living in such a system, they will continue to struggle with their individual projects.
For such organizations, getting particular single projects done quickly and reliably is good, but not enough. What is important is to synchronize the organization for delivery of an accelerating flow of valuable projects through the pipeline and to the bottom line.
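The "funneling through a pipeline" idea above can be sketched in a few lines: in the TOC-style approach, projects in priority order are staggered around a shared constraint (the "drum") resource rather than all being launched at once, so the constraint never multitasks. Project names and durations here are hypothetical:

```python
def stagger_pipeline(projects):
    """Sequence projects across a single 'drum' (constraint) resource.

    projects: list of (name, drum_days) in strategic priority order.
    Returns (name, drum_start, drum_end) tuples: each project's drum
    work begins only when the constraint frees up, instead of every
    project starting at once and forcing the drum to multitask.
    """
    schedule, next_free = [], 0
    for name, drum_days in projects:
        schedule.append((name, next_free, next_free + drum_days))
        next_free += drum_days
    return schedule

for name, start, end in stagger_pipeline([("A", 10), ("B", 15), ("C", 8)]):
    print(f"Project {name}: drum days {start}-{end}")
```

The answer to "What should I be working on?" falls straight out of such a staggering: whatever the priority sequence has placed on the constraint now, not whichever project manager shouted most recently.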