Failure is an option
It's popular to operate as though "failure is NOT an option." But however pleasing that gung-ho sentiment is in a pep talk, it's not a very useful management philosophy.
I recently discussed how projects can improve their likelihood of success. Underlying that discussion is the reality that projects do fail often, at a greater rate than we'd like to admit.
Some failures are spectacular. After spending tens or hundreds of millions of dollars over a period of years, nothing ever really works. The entire investment of time, money, energy, effort, and focus has to be completely written off. Those are the legends. The laughingstocks.
But it's a mistake to conflate failures and catastrophes. Most failures are mundane and much smaller scale. They result from changing market conditions, a lack of timely progress, and/or erosions of expected outcomes. When you reset expectations about how much can be achieved, how quickly, with what outlays, risks, and opportunity costs, it's reasonable to reconsider "do we really want to be doing this, after all?" and "if not, what should we do (or would we be able to do) instead?"
Probably no project--even the most successful--has ever achieved all of our hopes and dreams for it. But when you're going along and decide, "you know, in retrospect, we wouldn't have done this had we known X, Y, and Z," it's time to consider modifying the plan--or even calling it quits. What's that saying? "Don't throw good money after bad"? I've seen many projects that, while eminently reasonable at the outset, simply didn't complete before the world around them changed. In that case, better to purposefully re-focus on what is working.
There's a lot of interesting experience out there on project failures, and how to avoid them--but let's not go there. Let's just take a moment to consider that project failure isn't always regrettable. Put another way: it's a good thing that some projects fail. Failure is not only an option, it's something you should hope for. And no, I'm not crazy.
First, some projects do fail. If you're being pragmatic, you'll realistically take that into account. Otherwise, you won't effectively plan for the possibility. You won't be as motivated to periodically evaluate how things are going, nor to work toward quality alternatives and exit strategies. I wouldn't ask you to start your team pep talks with "we might fail." And there certainly are a few life- and mission-critical projects into which you must bravely commit every last resource, even when the outcome is doubtful and the battle uphill. But in general, planning for failure is an essential skill and task. The alternative--waiting until it's obvious to everyone concerned that things aren't working out, and having someone abruptly "pull the plug"--may be a popular approach, but it's stigmatizing and seldom optimal.
Beyond pragmatism, there's an even more important reason to hope some projects fail: so you can take an appropriate amount of risk. As any investor will tell you, low risk generally equals low reward. Those who wait until all the facts are known and everything is absolutely safe are invariably laggards; they avoid the risks of innovation, yes, but also forgo the early rewards that bolder organizations capture.
Thankfully, there is a happy middle ground between "absolutely safe" and "recklessly bold." The trick is taking an appropriate level of risk for the type of endeavor. If you're brainstorming, you want crazy bold ideas in the mix. Wildness there costs you nothing, and you may come up with something completely novel and really good. For doing basic research and first-of-a-kind explorations, you still want to be bold, looking over the horizon and trying things that may not pan out.
But the more resources and commitment a project takes, the greater the probability of success required before it's reasonable to begin. As a project affects more people, more customers, and more interactions, you naturally accept less risk, and require larger and more certain payoffs before taking those risks. Even "bet the business" projects sometimes fail, but when you're betting big, you want to do everything imaginable to minimize risk. Every organization will have its own take on what constitutes an "appropriate risk curve," but it makes sense for everyone to balance certainty, risk, and reward, not just reflexively seek certainty and minimal risk.
This kind of "match risks to activities" approach is nicely illustrated by McKinsey & Company's "three horizons of growth" model. The Three Horizons are current, emerging, and future business opportunities. The more mature a business, the safer you play it. The beauty of the model is that it lets you be appropriately conservative and incremental on established business, yet appropriately speculative and experimental for emerging and future business.
Figuring out how to take the right risks isn't just a good idea in theory. I've seen it work, brilliantly, on a massive scale. The best example is IBM, which in the early to mid-1990s was hidebound, leaving it challenged and confused whenever it tried to invest in new opportunities. Existing lines of business (e.g. the mainframe) tried to subvert emerging businesses (e.g. RISC/Unix servers) they found threatening; the company as a whole didn't have a clear understanding of why, when, and how to invest in one versus the other. Adopting the three horizons model made an enormous difference. It let IBM expand gracefully into new areas such as e-business and Linux/open source, and helped it see how to rationally manage overlapping product lines. IBM has since folded many of its emerging/future business initiatives back into its mainstream operations, so successful and workaday have they become. Others have been abandoned or minimized; IBM has learned to "feed the winners, starve the losers." No one can doubt that IBM circa 2010 is infinitely more effective at managing its portfolio of activities than IBM circa 1995.
Across IT, you can see many successful models that now admit measures of both uncertainty and failure risk; examples include agile software development, dynamic provisioning, high-availability clustering, and software fault tolerance. Perhaps paradoxically, pragmatically accepting that we live in an uncertain world and that failure is an option makes us more successful.