Capability Maturity Model (CMM) – Watts S. Humphrey (1989)

Watts S. Humphrey’s Capability Maturity Model (CMM), developed at Carnegie Mellon University’s Software Engineering Institute (SEI) for the U.S. Department of Defense, transformed how organizations assess and improve software development capability. Published comprehensively in Managing the Software Process (1989), the CMM introduced the revolutionary idea that software development capability can be assessed through five distinct maturity levels, and that organizations advance through these levels via deliberate investment in process improvement.

Rather than viewing software quality as dependent primarily on individual programmer talent, the CMM reframes software development as a management and organizational problem. Organizations can improve software quality, predictability, and productivity by systematically improving their development processes. This reorientation has profoundly influenced how organizations approach technology adoption and process institutionalization.

Why Was the Model Created?

Humphrey and SEI developed the CMM because the U.S. Department of Defense faced a critical problem: software development was unpredictable and unreliable. DoD software projects frequently exceeded budgets by 50–300%, took years longer than planned, and delivered unreliable software. Meanwhile, DoD personnel noticed that some contractors consistently delivered quality software on schedule and budget. What distinguished successful contractors from unsuccessful ones?

Initial theories suggested it was about talent—successful contractors employed better programmers. But investigations revealed that talent alone was insufficient. Contractors with excellent individual programmers sometimes failed catastrophically if organizational processes were chaotic. Conversely, contractors with well-established processes could manage projects reliably even with more typical programmer talent.

Humphrey’s insight was that software development capability was fundamentally an organizational and management question. Immature organizations operate reactively. When projects encounter problems, they solve them ad hoc, with no systematic process. When experienced people leave, institutional knowledge departs. Estimates prove wildly inaccurate because they are not grounded in data about organizational capability.

The CMM was created to provide a measurement framework that would allow the DoD to assess contractor capability and that would help software organizations understand how to improve. Humphrey and SEI believed that software problems were primarily organizational and management problems—not technical problems that could be solved with better programming languages or tools. Better processes would solve the real problems.

The CMM emerged from and built upon several intellectual traditions:

  • Software Engineering as a Discipline (1960s–1980s): The software crisis of the 1960s–1970s generated calls for software engineering as a discipline. Barry Boehm’s Spiral Model (1986), among other work, demonstrated that large software systems required process discipline, management, and systematic approaches beyond programming skill.
  • Deming’s Quality Management Principles: Humphrey explicitly applied Deming-like thinking to software development—quality is not a matter of individual programmer excellence but of organizational processes and systems.
  • Manufacturing Process Maturity Thinking: The idea that processes could be characterized by maturity levels (initial, repeatable, defined, managed, optimized) paralleled thinking in manufacturing quality management.
  • Defense Department Requirements: The DoD was the largest software purchaser in the world and needed a systematic way to assess contractor capability and guide procurement decisions.

Core Concepts and Definitions

The central premise of the CMM is that software development capability is organizational, not individual. Organizations that depend on heroic individuals to succeed are inherently fragile; when key people leave, capability departs with them. Organizations that embed capability in processes, documentation, and institutional knowledge are resilient and improvable.

The model measures process maturity on a five-level scale, with each level building on the capabilities established at the previous level. The assessment framework enables organizations to understand where they currently stand, what practices must be established to advance, and how to prioritize improvement investments.
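As a rough illustration of this staged structure—not an official assessment instrument—the levels can be modeled as an ordered mapping from each level to the practices that must be institutionalized before advancing. The practice names below are simplified assumptions, not the CMM’s official key process areas:

```python
# Sketch of the CMM's staged structure. Practice names are
# illustrative simplifications, not the official key process areas.
MATURITY_LEVELS = {
    1: ("Initial", []),
    2: ("Repeatable", ["requirements management", "project planning",
                       "project tracking", "configuration management"]),
    3: ("Defined", ["organization-wide standard process", "training program",
                    "peer reviews"]),
    4: ("Managed", ["quantitative process management",
                    "software quality management"]),
    5: ("Optimizing", ["defect prevention", "process change management"]),
}

def gap_to_next_level(current_level: int, practices_in_place: set[str]) -> list[str]:
    """Return the practices still missing before the next maturity level."""
    if current_level >= 5:
        return []  # Level 5 is the top of the staged model
    _, required = MATURITY_LEVELS[current_level + 1]
    return [p for p in required if p not in practices_in_place]

# Example: a Level 1 organization that already tracks projects
# still lacks the other Level 2 practices.
missing = gap_to_next_level(1, {"project tracking"})
```

The point of the staged design is exactly what this sketch shows: improvement effort is directed at the gap to the next level, not at all practices simultaneously.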

Key measurement dimensions across maturity levels include:

  • Level 2 Measurements: Project completion against schedule and budget, requirements completeness and stability, configuration management metrics, software quality metrics (defects, reliability).
  • Level 3 Measurements: Process compliance (are projects following the standard process?), process consistency (how much do projects vary in their tailoring?), organizational performance against process standards.
  • Level 4 Measurements: Process capability (what performance is statistically achievable from this process?), process performance trends, process control metrics, quantitative quality targets and achievement, productivity metrics and trends.
  • Level 5 Measurements: Innovation pipeline (how many improvement ideas are being tested?), improvement implementation rate, quantitative improvement trends (productivity gains, quality gains).
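A minimal sketch of how the Level 2 measurements above might be computed from completed-project records. The field names and figures here are hypothetical, chosen only to make the arithmetic concrete:

```python
# Hypothetical completed-project records, as a Level 2 organization
# might collect them: planned vs. actual schedule and defect counts.
projects = [
    {"name": "A", "planned_days": 100, "actual_days": 140,
     "defects": 12, "kloc": 10},
    {"name": "B", "planned_days": 80, "actual_days": 90,
     "defects": 6, "kloc": 8},
]

def schedule_variance_pct(p: dict) -> float:
    """Percent overrun (or underrun) against the planned schedule."""
    return 100 * (p["actual_days"] - p["planned_days"]) / p["planned_days"]

def defect_density(p: dict) -> float:
    """Defects per thousand lines of code delivered."""
    return p["defects"] / p["kloc"]

for p in projects:
    print(f'{p["name"]}: {schedule_variance_pct(p):.1f}% schedule variance, '
          f'{defect_density(p):.2f} defects/KLOC')
```

Even this simple bookkeeping illustrates the Level 2 shift: performance claims become comparisons of actuals against plans rather than impressions.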

Internal Validity

The CMM’s core framework consists of five maturity levels, each characterized by specific capabilities and practices:

Level 1: Initial (Chaotic). Level 1 organizations have essentially no defined software development process. Projects succeed or fail based primarily on individual effort and heroics. Success is unpredictable; it depends on having the right people or getting lucky. Processes are undefined and frequently change, costs and schedules are unpredictable, problems are addressed reactively, and there are no metrics or data about process performance.

Level 2: Repeatable. Level 2 organizations have established basic project management practices. Successful techniques are documented and repeated on subsequent projects. Organizations track requirements, schedule projects, and manage changes systematically. Data about completed projects (effort, schedule, defects) is collected, creating a foundation for estimating future projects. Processes are repeatable but not yet standardized across the organization.
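The idea that collected project data grounds future estimates can be shown with a toy productivity-based estimator (all figures are invented; real estimation models are considerably richer):

```python
# Invented historical records: size delivered and effort spent.
history = [
    {"kloc": 10, "person_months": 25},
    {"kloc": 6,  "person_months": 14},
    {"kloc": 12, "person_months": 31},
]

# Average productivity observed so far (KLOC per person-month).
productivity = (sum(p["kloc"] for p in history)
                / sum(p["person_months"] for p in history))

def estimate_effort(size_kloc: float) -> float:
    """Estimate effort for a new project from measured productivity."""
    return size_kloc / productivity
```

This is the contrast the CMM draws with Level 1: the estimate derives from the organization’s own measured history rather than from optimism.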

Level 3: Defined. Level 3 organizations have standardized their software development processes. Rather than different projects using different approaches, organizations have defined a standard process that all projects are expected to follow, tailored for specific contexts. Knowledge is institutionalized in documented processes rather than residing in individual heads. New team members can be trained on the organizational standard process.

Level 4: Managed. Level 4 organizations manage software development through detailed measurements and controls. Processes are not only defined but also measured. Organizations track process performance data and understand variation—both common cause variation inherent in the process and special cause variation from unusual circumstances. Estimates are based on measurement data; quality is managed through understanding process capability.
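The distinction between common and special cause variation can be sketched with simple Shewhart-style control limits. The defect-density figures below are invented; the baseline is assumed to come from projects run under a stable, defined process:

```python
import statistics

# Invented baseline: defect density (defects/KLOC) from projects
# run under a stable, defined process.
baseline = [1.1, 0.9, 1.2, 1.0, 1.3, 0.8, 1.1, 1.0]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma               # upper control limit
lcl = max(mean - 3 * sigma, 0.0)     # density cannot be negative

def is_special_cause(x: float) -> bool:
    """A point beyond the 3-sigma limits signals special cause variation."""
    return not (lcl <= x <= ucl)

# A new project at 2.9 defects/KLOC falls outside the limits and
# warrants investigation; 1.2 is ordinary common cause variation.
```

This is the Level 4 habit in miniature: reacting to points outside the process’s demonstrated capability, rather than treating every fluctuation as a problem.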

Level 5: Optimizing. Level 5 organizations are continuously improving their processes. Data is analyzed to identify bottlenecks and improvement opportunities. New technologies and techniques are piloted and, if successful, incorporated into organizational processes. Continuous process improvement is built into organizational culture; organizations see continuous improvement as a competitive necessity.

External Validity

The CMM has been applied successfully across a wide range of contexts, demonstrating strong external validity:

  • Organization Sizes: The CMM has been applied to small software companies, medium organizations, and large enterprises. The principles scale across organization sizes, though large organizations typically implement through divisional structures.
  • Technology Types: The CMM was originally designed for traditional software development but has been successfully adapted to distributed development, web development, embedded systems, and other specializations.
  • Industry Applications: Software organizations across defense, commercial, telecom, finance, healthcare, and other industries have implemented CMM-based improvements.
  • Geographic Scope: The CMM has been applied globally, including in developed economies and developing economies. It has been translated into multiple languages and adapted to different cultural contexts.
  • Sustained Application: Unlike management fads, the CMM has sustained credibility for decades. CMM-based assessments and improvements have continued from the 1990s through today, with evolution from CMM to CMMI (Capability Maturity Model Integration) that integrates multiple discipline perspectives.
  • Quantifiable Results: Organizations implementing CMM-based improvements have demonstrated significant gains in software quality, schedule predictability, and cost management. These measurable results support the model’s validity.

Key Contributions

Empirically Grounded: The CMM emerged from investigation of what distinguished capable from incapable software organizations. It’s not theoretical speculation; it’s based on studying successful and unsuccessful organizations.

Practical and Actionable: Unlike purely academic models, the CMM provides specific guidance about what organizations should do to improve. Organizations understand what practices are needed at each level.

Provides Structure and Roadmap: Rather than overwhelming organizations with numerous improvement options, the CMM provides a structured roadmap. Level 1 organizations focus first on establishing basic project management (Level 2), then standardizing processes (Level 3), then implementing quantitative management (Level 4). This staged approach focuses effort on the most critical improvements first.

Measurable and Assessable: Organizations can formally assess their maturity level, creating clarity about current state and progress. This objectivity is valuable; organizations cannot claim higher maturity without demonstrated evidence.

Connects Process to Business Outcomes: The CMM explicitly connects process maturity to business outcomes—project predictability, cost control, quality. Organizations understand the business case for process improvement.

Organizational Learning: The CMM embeds organizational learning and continuous improvement into the model itself, recognizing that improvement is not a one-time effort but an ongoing capability.

De-emphasizes Individual Heroics: By focusing on organizational processes rather than individual talent, the CMM provides an alternative to organizations that depend on heroic individuals. This creates sustainability and knowledge preservation.

Relevance to Technology Adoption

The CMM directly addresses organizational adoption of software development practices and processes. Software development organizations face distinctive adoption challenges: recruiting and retaining talented individuals, establishing repeatable processes in creative work, managing distributed teams, and ensuring product quality. The CMM provides a roadmap for how organizations systematically improve their capability to adopt and effectively implement software development disciplines and technologies.

For technology adoption broadly, the CMM’s maturity-level concept is highly transferable. Organizations at Level 1 for any technology adoption will struggle with unpredictability and heroic dependency; advancing to Level 2 by establishing basic repeatable practices, and then to Level 3 by standardizing organizational processes, dramatically improves adoption outcomes. The CMM demonstrates that adoption success is not primarily about the technology itself but about organizational process maturity.

The CMM also highlights critical barriers to technology adoption. Organizations that lack documented processes cannot effectively train new users, maintain consistency, or improve systematically—each of these is a barrier to successful technology rollout. The model’s emphasis on measurement at higher levels directly supports evidence-based technology management: organizations that measure technology performance can distinguish genuine improvement from variation and make adoption decisions based on data.

Several limitations should be acknowledged. Critics argue that CMM-based improvements can create bureaucratic, process-heavy organizations where compliance with procedures takes precedence over pragmatism and results. Individual talent still matters; process cannot substitute for incompetent people, and rigid adherence to process can stifle talented individuals’ contributions. The model was designed for traditional software development; applying it to agile, distributed, or open-source development requires adaptation. CMM assessments are also expensive and time-consuming, and organizations sometimes game assessments by documenting processes that do not actually reflect practice.

Despite these limitations, the CMM’s foundational insight—that organizational process maturity is the primary driver of software quality and technology adoption success—has been validated across decades of research and practice. Its evolution into CMMI and related frameworks demonstrates the enduring relevance of maturity-based thinking for organizational capability development.

Note: This article provides an overview based on a comprehensive literature review. Readers are encouraged to consult the original publication for complete details.

References

  1. Humphrey, W. S. (1989). Managing the software process. Addison-Wesley.
  2. Boehm, B. W. (1986). A spiral model of software development and enhancement. ACM SIGSOFT Software Engineering Notes, 11(4), 14–24.
  3. Paulk, M. C., Weber, C. V., Curtis, B., & Chrissis, M. B. (1995). The capability maturity model: Guidelines for improving the software process. Addison-Wesley.
  4. Deming, W. E. (1982). Quality, productivity, and competitive position. Massachusetts Institute of Technology, Center for Advanced Engineering Study.
  5. Software Engineering Institute. (2010). CMMI for development, version 1.3. Carnegie Mellon University.
  6. Krishnan, M. S., & Kellner, M. I. (1999). Measuring process consistency: Implications for reducing software defects. IEEE Transactions on Software Engineering, 25(6), 800–815.