Many software development and management methods are founded on a basic assumption: that constructing software is rather like building a bridge or a house. Once we've "done the design", actually generating the software ought to be a completely predictable, relatively low-skilled process.
However, four decades of failure to achieve this vision suggest that we should revisit the assumption.
In a paper entitled "The New Methodology" Martin Fowler, the guru of object-oriented development, suggests a couple of reasons why this might be. The first is predictability.
Essentially, Fowler argues that design, as a creative activity, is inherently unpredictable, whereas "construction" should be predictable. However, most of "code and unit test" has a significant creative element and should therefore be regarded as a design activity; only things like compilation or automated testing actually correspond to building construction. (Read Martin's paper for more detail.)
While I agree with Fowler's observation that all development is unpredictable, I think there may also be a more subtle issue to explore. Formal waterfall methods are often claimed to be "more predictable". I think this is because they pad the project with predictable, non-creative work (e.g. documentation): predictability increases precisely because a larger share of the effort goes on non-creative activities, but net productivity falls as a result.
Conversely, agile methods try to remove or automate many of the non-creative activities, and as a direct result they will be less predictable. The outturn cost should be lower and the speed to value higher precisely because of this reduction in non-creative work, but the totals will appear less predictable.
However, it's important to note that since all development is unpredictable, we're not actually losing predictability or control, because we never really had them in the first place. We're just losing the pretence of predictability. The creative elements were always unpredictable; it was only the larger proportion of non-creative work that was ever predictable.
It's also important to remember that only part of the unpredictability comes from the development style. Much of it comes from the environment, the stability of requirements and so on. See my paper "Order and Unorder" for more details.
There's a natural tendency to require and impose more formality on larger or more complex projects. If this isn't appropriate because the requirements are unstable, then you'll just waste more effort, and you should question whether such a development is appropriate at all.
Because software "construction" (detailed design, code and test) is basically a creative effort, it is much more critically dependent on the skills of its practitioners than if it were the mechanical, non-creative effort some managers would like it to be. This is the other reason Fowler identifies for the difference between software and building construction. I wholeheartedly agree. In fact I'd go further, and say that success in software construction is almost entirely related to the skill of individuals, and choices such as method are almost totally irrelevant.
Over the years I (and many others) have written a lot about the building analogy, particularly in respect of software architecture (see, for example, Architecture and Engineering). If the analogy doesn't hold for planning, do we have to throw the rest of it away?
I don't think so. I think that other aspects of the analogy are still valuable. But an agile software architect needs to behave less like a "hands-off" architect, and more like the builder who is also the architect, or the architect of a complex, changing structure who must make continuous decisions.
Make this mental adjustment and the analogy remains useful. But if you imagine you are creating a completed design for mindless execution by plug-compatible resource units, you will fail.