The Standish Report has been a longstanding IT bugaboo, consistently indicating abysmally low project success rates in software engineering (and overshadowing all other research on the topic in the popular IT press). Now Robert Glass, author of Software Runaways, reports in the latest Communications of the ACM on growing evidence in the research community that its methodology is suspect and its conclusions spurious. (This is some of the biggest IT news of the year, I think.) Quote:
"...the Standish Chaos Report could be considered fundamental to most claims of crisis. What do we really know of that study?
"The question is of increasing concern to the field. Several researchers, interested in pursuing the origins of this key data, have contacted Standish and asked for a description of their research process, a summary of their latest findings, and in general a scholarly discussion of the validity of those findings. They raise those issues because most research studies conducted by academic and industry researchers arrive at data largely inconsistent with the Standish findings.
"Let me say that again. Objective research study findings do not, in general, support the Standish conclusions. "
Robert L. Glass, "The Standish Report: Does It Really Describe a Software Crisis?" Communications of the ACM 49:8 (August 2006). (Emphasis added.)
Glass goes on to cite other researchers' evidence that the Standish method was to solicit failure anecdotes from IT executives. It's not surprising that a sample recruited for its failure stories then reported a high incidence of failure! If the growing academic consensus is correct, the Chaos Report is not a study in any rigorous sense of the word.