Decomposed TPB – Taylor & Todd (1995)

Model Identification

Model Name: Decomposed Theory of Planned Behavior (DTPB)

Authors: Shirley Taylor, Peter A. Todd

Publication Date: 1995

Citation Information

Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144-176.

Why was the model made?

Taylor and Todd developed their competing models framework to address fundamental ambiguities and theoretical gaps in information technology adoption research. By the mid-1990s, multiple theoretical frameworks had been proposed to explain IT adoption—Davis’s Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), Thompson et al.’s expected consequences model, and others. However, no comprehensive empirical comparison existed to determine which framework provided superior explanatory power or how different theoretical approaches related to one another. The authors note in their introduction that “different theoretical models have been applied to predict IT usage” but “little research has compared alternative models.” This research gap meant that IS researchers and practitioners lacked clear guidance about which theories offered the most robust understanding of technology adoption.

Organizations also lacked a coherent framework for understanding which factors truly drove IT adoption and which were artifacts of particular theories or measurement approaches. Taylor and Todd identified three research questions their work would address: (1) How do different theoretical models compare in explaining IT usage behavior? (2) Can models be integrated to provide greater explanatory power? (3) What underlying mechanisms explain why individuals adopt or resist information technologies? The motivation was also grounded in theoretical development. The Technology Acceptance Model (TAM) had become increasingly influential but also increasingly critiqued. Critics argued that TAM’s two primary constructs, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), might be overly simplistic, and that the theory neglected important factors such as social influences, control beliefs, and consequences.

Other frameworks like the Theory of Planned Behavior incorporated additional constructs, but their applicability to IT adoption was less established. Taylor and Todd sought to test whether richer, more complex models would outperform Davis’s simpler TAM. They hypothesized that “the TPB and an extended TAM model would predict usage better than a simple TAM model” because additional constructs capture neglected influences. They posited that understanding IT adoption required examining not only individual perceptions of technology (PU and PEOU) but also social influences, control factors, consequences, and motivational variables. The research was conducted using two distinct technologies in two organizational contexts: email use among MBA students and new information system adoption among academic faculty. This multi-technology, multi-context approach allowed the authors to examine whether findings generalized across different IT adoption scenarios.

How was the model’s internal validity tested?

Taylor and Todd employed a rigorous quantitative methodology with two separate studies to establish internal validity.

Study 1: Email Adoption Among MBA Students

The researchers surveyed 108 MBA students who were required to use email as part of their program.

  • They measured seven core constructs through multi-item scales: Perceived Usefulness (PU), five items measuring whether email would improve job/academic performance (Cronbach’s alpha = .94); Perceived Ease of Use (PEOU), four items assessing perceived learning difficulty and ease of interaction (alpha = .88); Attitude Toward Use, three items measuring overall evaluations of email (alpha = .93); Subjective Norm, four items measuring perceived social pressure to use email (alpha = .75); Perceived Behavioral Control (PBC), four items assessing ability and resources for email use (alpha = .87); Behavioral Intention, three items measuring intent to use (alpha = .93); and Actual Usage, measured directly through email system logs, providing objective behavioral data. The researchers established internal validity through multiple techniques:
  • Convergent and Discriminant Validity: The authors examined factor loadings and construct correlations. All measurement items loaded significantly on their hypothesized constructs (t-values > 2.0), and the constructs were related enough to indicate a common underlying domain yet distinct enough to represent separate dimensions
  • Scale Reliability: Cronbach’s alpha coefficients ranged from .75 to .94, demonstrating acceptable reliability. The relatively high alphas indicated that measurement items consistently measured each construct
  • Measurement Model Assessment: The authors compared structural equation model specifications using LISREL, testing whether observed variables loaded appropriately on latent constructs and whether the measurement model fit adequately before examining structural relationships

Study 2: New Information System Among Faculty

A second study, with 223 academic faculty members adopting a new information system, allowed replication and generalization testing: Perceived Usefulness, five items (alpha = .89); Perceived Ease of Use, four items (alpha = .81); Subjective Norm, four items (alpha = .78); Perceived Behavioral Control, four items (alpha = .74); Behavioral Intention, three items (alpha = .95); and Actual Usage, three items measuring frequency and intensity of use (alpha = .78).

  • Cross-Study Validity: Consistent measurement approaches across the studies allowed direct comparison of path coefficients and model structures. Similar patterns across different technologies and populations strengthened the internal validity evidence
  • Structural Model Testing: For each study, the authors tested multiple competing structural models using LISREL and examined several goodness-of-fit indices: chi-square (χ²) values and associated p-values, the adjusted goodness-of-fit index (AGFI), the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). The models tested included the original TAM (PU and PEOU → Attitude → Intention → Usage), an extended TAM (PU and PEOU → Attitude; PU and Subjective Norm → Intention → Usage), the Theory of Planned Behavior (Attitude, Subjective Norm, and PBC → Intention → Usage), and an integrated model combining TAM and TPB elements with specific structural pathways
  • Model Comparison: The authors compared the models on multiple criteria: explained variance in Behavioral Intention (R²), explained variance in Actual Usage, path coefficient magnitudes and significance, and overall model fit statistics. All path coefficients were tested for statistical significance using t-statistics from the LISREL estimation. The models generally showed good fit in both studies, with AGFI values above .85 and CFI values above .90, indicating acceptable structural validity
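Two of the statistics used in this validity assessment can be sketched directly. The snippet below is a minimal illustration, not the authors' analysis: it computes Cronbach’s alpha from an item-response matrix and RMSEA from a chi-square statistic. The simulated scale and the χ²/df/n values are invented for illustration, not taken from the paper.

```python
# Hedged sketch of two statistics used in reliability and fit assessment:
# Cronbach's alpha and RMSEA. All data and values here are synthetic.
import math

import numpy as np


def cronbach_alpha(items):
    """items: respondents x items array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)


def rmsea(chi_sq, df, n):
    """Root mean square error of approximation from a chi-square statistic."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))


# Simulate a 5-item scale for 108 respondents sharing one latent factor,
# mirroring the structure (not the data) of the Study 1 PU scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(108, 1))
responses = latent + 0.5 * rng.normal(size=(108, 5))
alpha = cronbach_alpha(responses)

# Illustrative fit computation: chi-square of 120 on 80 df with n = 108.
r = rmsea(120.0, 80, 108)

print(f"alpha = {alpha:.2f}, RMSEA = {r:.3f}")
```

Because the five simulated items share one latent factor, the alpha comes out high, echoing why strongly intercorrelated items yield the .75–.94 alphas reported above.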

How was the model’s external validity tested?

Taylor and Todd employed a two-study design specifically to strengthen external validity claims.

  • Multi-Technology Approach: Study 1 examined email adoption, a relatively optional technology where use provided immediate benefits but was not mandatory. Study 2 examined a new information system, a more mandatory technology implemented as part of organizational operations. This contrast allowed testing whether the models generalized across voluntary versus mandated adoption contexts. The different technological characteristics created two distinct adoption scenarios: email represented communication technology that users could adopt incrementally and benefit from individually even without universal organizational adoption, while the new information system represented integrated business software whose benefits depended on broader organizational adoption and integration.

  • Multi-Context Sampling: Study 1 used MBA students in a university setting, representing professional training contexts where email adoption enhances learning efficiency. Study 2 used academic faculty in a university setting, representing knowledge work contexts where the new system integrated with scholarly and administrative functions. While both university-based, the different user populations (students versus faculty) and different organizational roles (learners versus knowledge workers) created variation
  • Population Diversity Within Studies: Study 1 included 108 MBA students with varying prior computer experience. Study 2 included 223 faculty members across different academic disciplines with varying technical sophistication. This within-study diversity strengthened generalization claims
  • Objective Behavioral Measures: Rather than relying solely on self-reported intentions, both studies measured actual usage. In Study 1, email usage was measured through system logs showing the frequency and duration of email interaction; in Study 2, usage was measured through a three-item self-report scale of frequency and intensity, which was less objective. The authors note that “usage was measured more directly in Study 1 through system logs… [while Study 2 used] perceptual measures of usage frequency and intensity.” This difference is a limitation for Study 2’s external validity but allows comparison of findings across objective and subjective measurement approaches
  • Replication of Patterns: The authors explicitly designed Study 2 to replicate Study 1 findings in a different context. When similar path coefficients and model structures emerged across both studies despite different technologies and populations, this strengthened confidence that findings represent generalizable principles rather than context-specific artifacts
  • Temporal Separation: The studies were conducted at different points in time, with Study 1 representing an emerging email adoption phase and Study 2 representing a more mature IT adoption scenario. This temporal variation allowed assessment of whether adoption patterns differ at different innovation diffusion stages
  • Adequacy of Model Specifications: The authors tested whether their theoretical models and path specifications fit both datasets adequately. If models were misspecified or context-dependent, fit would diverge significantly across studies. Similar fit indices and path coefficients across contexts provided evidence that models captured fundamental adoption processes not highly dependent on specific contexts

How is the model intended to be used in practice?

Taylor and Todd provided managerial guidance for using their findings to understand and facilitate IT adoption.

  • Model Selection and Assessment: Organizations can use the competing models framework to select which theoretical perspective best matches their adoption context. The authors note that “the extended TAM model performed comparably to the TPB model in predicting usage” and suggest that organizations should assess which factors dominate in their specific context. For mandatory technologies or contexts with strong organizational control, extended TAM elements may prove sufficient. For voluntary or discretionary technologies, additional consideration of subjective norms and perceived behavioral control offers greater insight.

  • Diagnostics for Adoption Barriers: The competing models identify different leverage points for intervention. If an organization finds that Perceived Usefulness is the limiting factor, training and communication should emphasize functional benefits. If Perceived Ease of Use is low, technical support and system redesign become priorities. If social factors (subjective norms) are inhibitory, organizational communication and champion strategies become essential. The framework guides practitioners to diagnose which specific factors constrain adoption in their organization. “Understanding which factors most strongly predict usage in particular contexts allows targeted intervention” on the most consequential barriers
  • Expectation Setting About Adoption Patterns: The models provide insight into typical adoption trajectories. The strong relationship between Perceived Usefulness and Behavioral Intention suggests that users develop intentions based on utility perceptions. The positive effect of Perceived Ease of Use on both Attitude and Usefulness indicates that simple systems that require less learning effort are perceived as more useful. Organizations can use these relationships to forecast adoption patterns. If rollout plans generate low Perceived Usefulness perceptions despite objectively useful functionality, adoption will likely be poor. The model indicates that perception management is as important as objective system quality
  • Intervention Sequencing: The models suggest an intervention sequence. First, reduce Perceived Ease of Use barriers through training, simplified interfaces, and accessible support. As users gain capability, Perceived Usefulness becomes increasingly salient—systems that are easier to use are perceived as more useful. Then, as individuals develop intentions to use systems, social norms and organizational support become important for sustaining usage
  • Normative Strategy Differentiation: For voluntary technology adoption, Taylor and Todd suggest that “subjective norms significantly predict behavioral intention” indicating that “opinion leaders, peer champions, and visible organizational support matter.” Managers should systematically cultivate these social influences. For mandatory technologies, the findings suggest that usefulness and ease of use dominate, though social factors still contribute. Organizations implementing mandatory systems can rely less on social persuasion and more on ensuring the systems genuinely improve work processes and are easy to use
  • Comparative Model Assessment: Organizations can apply each competing model as a diagnostic framework:
  • TAM: Focus on Perceived Usefulness and Perceived Ease of Use. If usage is low despite high scores on these, other factors omitted by TAM may be limiting adoption
  • Extended TAM: Include social norm assessment. If organizational norms are negative despite useful systems, norm-building interventions become necessary
  • TPB: Assess Perceived Behavioral Control. If users doubt their capacity to use systems effectively despite perceiving usefulness, training and support become critical
  • Integrated Model: Assess all factors comprehensively to ensure no major barrier is overlooked
  • Implementation Planning: For new IT implementations, organizations can use the competing models framework to design multi-faceted adoption strategies. In the system design phase, prioritize both usefulness (functionality) and ease of use (interface design), reflecting the models’ emphasis on these factors. Design training to build both Perceived Ease of Use and an understanding of Perceived Usefulness. Build subjective norms supporting adoption through visible champion endorsement and management support. Ensure Perceived Behavioral Control is high by providing adequate support, system access, training resources, and technical assistance
  • Cross-Technology Application: The research demonstrates that the models apply across different technologies (email and integrated information systems). Organizations implementing different IT solutions can use the same competing models framework, adapting emphasis based on technology characteristics and organizational context
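The diagnostic logic described in these bullets can be sketched as a simple screening routine. Everything in the sketch is an assumption for illustration: the construct names, the 1–5 Likert scoring, the 3.5 threshold, and the mapping from low-scoring constructs to interventions are not part of the original paper.

```python
# Illustrative sketch of the barrier-screening logic described above.
# Construct scores are assumed to be 1-5 Likert means; the 3.5 cutoff
# and the intervention mapping are invented for this example.
LOW_THRESHOLD = 3.5

INTERVENTIONS = {
    "perceived_usefulness": "emphasize functional benefits in training and communication",
    "perceived_ease_of_use": "prioritize technical support and system redesign",
    "subjective_norm": "cultivate champions and visible management support",
    "perceived_behavioral_control": "provide resources, access, and training",
}


def diagnose(scores: dict[str, float]) -> list[tuple[str, str]]:
    """Return (construct, suggested intervention) for each low-scoring factor."""
    return [(c, INTERVENTIONS[c]) for c, s in scores.items() if s < LOW_THRESHOLD]


# Hypothetical survey means for one organization.
survey_means = {
    "perceived_usefulness": 4.2,
    "perceived_ease_of_use": 2.9,
    "subjective_norm": 3.8,
    "perceived_behavioral_control": 3.1,
}

for construct, action in diagnose(survey_means):
    print(f"{construct}: {action}")
```

With these hypothetical means, ease of use and behavioral control are flagged, which under the framework would point toward support, redesign, and resource provision rather than usefulness messaging.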

What does the model measure?

Taylor and Todd’s framework operationalizes multiple constructs across different theoretical perspectives.

Core TAM Constructs:

1. Perceived Usefulness (PU): Operationalized through five items measuring whether the technology improves job performance, increases productivity, and enhances effectiveness.

  • Example items: “Using [system] would improve my performance” and “I would find [system] useful in my job.” Measurement uses 5-point scales. Cronbach’s alpha = .94 (Study 1) and .89 (Study 2)

2. Perceived Ease of Use (PEOU): Measured through four items assessing difficulty of learning, interaction ease, and skill requirements
  • Example items: “Learning to operate [system] is easy for me” and “I would find [system] easy to use.” Cronbach’s alpha = .88 (Study 1) and .81 (Study 2)

3. Attitude Toward Use (ATT): Three items measuring overall evaluative response to the technology
  • Example items: “I like working with [system]” and “Using [system] is pleasant.” Cronbach’s alpha = .93 (Study 1)

TPB-Specific Constructs:

4. Subjective Norm (SN): Four items measuring perceived social pressure and the opinions of relevant others
  • Example items: “People who are important to me would think I should use [system]” and “My supervisor thinks I should use [system].” Cronbach’s alpha = .75 (Study 1) and .78 (Study 2)

5. Perceived Behavioral Control (PBC): Four items measuring perceived ability to use the system and availability of resources
  • Example items: “I would have the resources necessary to use [system]” and “I would have the knowledge necessary to use [system].” Cronbach’s alpha = .87 (Study 1) and .74 (Study 2)

Dependent Variables:

6. Behavioral Intention (BI): Three items measuring likelihood and willingness to use the technology
  • Example items: “I intend to use [system]” and “I will use [system].” Cronbach’s alpha = .93 (Study 1) and .95 (Study 2)

7. Actual Usage: In Study 1, objective system logs measuring frequency and duration of email system access. In Study 2, three self-report items measuring usage frequency and intensity (alpha = .78)
  • Integration Mechanisms: The framework measures how these constructs interrelate through multiple pathways. Direct effects: PEOU → Attitude; PU → Attitude; PU → Intention; Attitude → Intention; SN → Intention; PBC → Intention. Indirect effects: PEOU influences Intention indirectly through its effects on Attitude and Perceived Usefulness. Intention → Usage captures the intention-behavior linkage. This measurement approach operationalizes competing theoretical mechanisms, allowing empirical comparison of how different factors influence adoption across theoretical perspectives
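The indirect-effect logic above can be made concrete with path-tracing arithmetic: in a standardized path model, an indirect effect is the product of the coefficients along the path, and a total effect is the direct path plus the indirect paths. The coefficients in this sketch are hypothetical illustrative values, not Taylor and Todd’s published estimates.

```python
# Path-tracing sketch: total effect of PEOU on Attitude when PEOU acts both
# directly and indirectly through Perceived Usefulness. The coefficients are
# hypothetical illustrative values, not the published estimates.
peou_to_att = 0.45   # direct path: PEOU -> Attitude
peou_to_pu = 0.49    # path: PEOU -> Perceived Usefulness
pu_to_att = 0.71     # path: PU -> Attitude

indirect = peou_to_pu * pu_to_att   # product of coefficients along the path
total = peou_to_att + indirect      # direct effect plus indirect effect
print(f"indirect = {indirect:.3f}, total = {total:.3f}")
```

The same arithmetic extends to PEOU’s indirect influence on Intention: each additional mediating link multiplies in another coefficient, which is why distant indirect effects tend to be smaller than direct ones.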

What are the main strengths of the model?

Taylor and Todd’s competing models framework possesses several important strengths:

1. Rigorous Comparative Analysis: The research systematically compares multiple theoretical frameworks using consistent methodology across models and studies. This comparative approach is more sophisticated than testing single models in isolation, providing evidence-based guidance about which theories offer superior explanations.

2. Multi-Study Validation: The two-study design with different technologies (email versus an integrated information system) and populations (students versus faculty) provides stronger evidence than single-study research. When similar findings emerge across contexts, confidence in generalization increases substantially.

3. Objective and Subjective Behavior Measurement: Study 1’s measurement of actual email usage through system logs is a methodological strength. Rather than relying solely on self-reported intentions or usage, the authors measured actual behavior directly, addressing a persistent limitation in IT adoption research.

4. Clear Practical Guidance: The paper translates theoretical findings into managerial implications. Organizations can use the competing models framework to assess which factors most constrain adoption in their context and select appropriate intervention strategies.

5. Sophisticated Statistical Methods: The use of LISREL structural equation modeling and the comparison of multiple goodness-of-fit indices (AGFI, CFI, RMSEA, SRMR) reflect methodological sophistication. The authors tested complete structural models simultaneously rather than using piecemeal regression analysis, reducing concerns about multicollinearity and specification error.

6. Theory Integration: Rather than arguing that one theory is correct and the others wrong, the authors demonstrate how the TAM and TPB perspectives complement one another. This integrative approach is more theoretically mature than competitive positioning between frameworks.

7. Specification of Mediating Mechanisms: The research clarifies how constructs relate. For example, the finding that PEOU influences Intention partly through Attitude and partly through Perceived Usefulness reveals the mechanisms through which ease of use affects adoption decisions. This mechanistic clarity is more useful than correlational evidence alone.

8. Replication Within Studies: Both studies show consistent patterns, with similar path coefficients and model structures despite different technologies and populations. This consistency strengthens confidence in the findings.

9. Attention to Voluntary Versus Mandatory Contexts: The authors note differences in adoption dynamics between voluntary email use and mandatory information system adoption, recognizing that adoption context affects which factors dominate.

10. Theoretical Foundation: Rather than discovering relationships purely empirically, the work is grounded in established theories (Theory of Reasoned Action, Theory of Planned Behavior, Technology Acceptance Model), providing theoretical justification for the proposed relationships.

What are the main weaknesses of the model?

Despite its strengths, Taylor and Todd’s framework has notable limitations:

1. Study 1 Sample Size and Population: The email study used only 108 MBA students, a relatively small sample from a highly educated, motivated population. MBA students likely have greater computer skills and education than typical IT users, limiting generalizability to broader populations and reducing external validity.

2. Study 2 Subjective Usage Measurement: While Study 1 used objective system logs, Study 2 relied on self-reported usage frequency and intensity. Self-report measures are subject to social desirability bias, memory errors, and honest disagreement about usage patterns. The authors acknowledge this difference but do not fully address its validity implications.

3. Cross-Sectional Design: Both studies captured single time points rather than tracking adoption longitudinally. The models show associations between constructs but cannot definitively establish causality. Reverse causality is possible: frequent users might develop higher Perceived Usefulness perceptions rather than usefulness perceptions driving usage.

4. Limited Context Variety: While two studies provide more context than one, both occur in university settings. Generalization to corporate, government, or other organizational contexts is unclear. Different organizational cultures, technology maturity levels, and adoption pressures might create different adoption dynamics.

5. Incomplete Model Specification: The paper does not address how the models might be moderated by individual differences (e.g., computer anxiety, technology experience, age, education). Different individuals might weight the constructs differently, so a single model specification may not fit all users equally well.

6. Limited Examination of the Intention-Behavior Gap: While the models predict behavioral intention well, the transition from intention to actual usage remains incompletely explained. The authors acknowledge that “behavioral intention is a more proximal predictor of usage than attitude” but note variance in usage even among those with high intentions.

7. Technology Characteristics Underspecified: The model does not examine how technology characteristics (complexity, compatibility with existing systems, relative advantage) might moderate relationships between constructs. Simple technologies might show different adoption dynamics than complex ones.

8. Social Influence Operationalization: Subjective norm measurement focuses on perceived social pressure but may not fully capture the richer social influence processes around technology adoption. Peer learning, informal training, and observational learning may operate differently than normative social influence suggests.

9. Perceived Behavioral Control Limitations: PBC measurement focuses on resource availability and capability beliefs but may not fully capture actual behavioral control or objective constraints. Organizational policies, system access, and technical infrastructure represent objective controls not fully captured by perceptual measures.

10. Limited Examination of System Quality: The models treat Perceived Usefulness as a perception-based construct influenced only by other perceptions. They do not directly measure objective system quality, functionality, or performance, even though a system’s actual benefits might diverge substantially from perceptions of them.

11. Insufficient Attention to Experience Effects: The models do not examine how findings might differ across adoption lifecycle stages. Early adoption, when systems are new, might show different patterns than sustained adoption as users gain experience.

How does this model differ from older models?

Taylor and Todd’s work represents significant theoretical evolution from earlier technology adoption frameworks:

1. Comparative Rather Than Single-Theory Approach: Earlier research typically proposed and tested single theories in isolation. Taylor and Todd explicitly compared TAM, TPB, and integrated approaches, providing evidence-based theory prioritization rather than assumption-based positioning.

2. Integration of TAM and TPB: While Davis’s TAM and Ajzen’s TPB were developed in different contexts, Taylor and Todd demonstrated that both frameworks apply to IT adoption and can be integrated. This showed that theories from different research traditions can complement one another, advancing theoretical sophistication.

3. Explicit Testing of an Extended TAM: Earlier TAM research paid limited attention to social influences (subjective norms). Taylor and Todd formally tested whether adding subjective norms to TAM improved explanatory power, addressing a recognized theoretical limitation.

4. Multi-Technology Validation: Earlier technology adoption research often tested single technologies (email adoption, database adoption, etc.). Testing whether frameworks generalize across email and information systems strengthened evidence for universal adoption principles.

5. Attention to the Intention-Behavior Linkage: While older models often stopped at predicting behavioral intention, Taylor and Todd measured actual usage behavior, addressing one criticism of intention-based models and bringing the models closer to practical relevance.

6. Specification of Indirect Effects: Earlier models often examined only direct effects between constructs. Taylor and Todd specified multiple pathways through which variables influence outcomes (e.g., PEOU affects intention both directly and indirectly through attitude and usefulness), revealing mechanistic complexity.

7. Systematic Model Comparison: The paper’s structured comparison of fit indices across model specifications provided evidence-based guidance about theory selection, advancing beyond authors championing their preferred theories.

8. Population-Specific Validation: By testing models across different user populations and technologies, Taylor and Todd acknowledged that adoption might vary by context while also seeking universal principles that apply across contexts.

What Barriers to Technology Adoption does the model identify?

Taylor and Todd’s competing models framework identifies multiple categories of barriers affecting IT adoption, organized across individual perceptions, social influences, and control factors.

1. Perceived Usefulness Barriers (Individual Cognitive Barriers)

The model identifies low Perceived Usefulness as a fundamental barrier. When users do not believe that IT systems will improve their job performance or productivity, adoption remains low regardless of other factors.

  • This barrier manifests through several mechanisms:
  • Unclear Value Proposition: Users do not understand how systems apply to their work or what benefits they will provide. The model shows strong positive effects of Perceived Usefulness on both Attitude (path = .71 in Study 1, .75 in Study 2) and Intention (path = .40 in Study 1, .42 in Study 2), meaning that failure to understand system benefits fundamentally limits adoption
  • Mismatch Between System Functionality and Job Requirements: Systems may be designed for generic use cases rather than tailored to specific job functions. If users perceive that the system does not address their actual work priorities, usefulness perceptions remain low
  • Perceived Risk or Uncertainty About Benefits: Users may be skeptical that promised benefits will materialize, particularly for complex organizational systems where benefits depend on broader adoption and integration. This skepticism reduces perceived usefulness and thereby intention to use
  • Comparison to Existing Workflows: When established workflows are familiar and functional, new systems must demonstrate substantial improvement to be perceived as useful. If the improvement is marginal, users perceive low usefulness despite objective functionality

2. Perceived Ease of Use Barriers (Cognitive and Technical Barriers)

The model identifies perceived difficulty as a significant barrier. Perceived Ease of Use affects adoption through multiple pathways:

  • High Learning Curves: Systems perceived as difficult to learn create adoption resistance. The model shows that PEOU influences Attitude (path = .45 in Study 1, .40 in Study 2) and indirectly influences usefulness perceptions. Users who perceive steep learning curves may avoid engaging enough to discover usefulness
  • Complex Interfaces: Non-intuitive system design that does not align with users’ mental models creates PEOU barriers, forcing users to expend effort learning unnecessarily complicated system logic. The positive path from PEOU to Usefulness (path = .49 in Study 1, .51 in Study 2) indicates that systems perceived as harder to use are also perceived as less useful, so complexity undermines both adoption paths
  • Inadequate Training and Support: Without accessible training and technical support, users face high barriers to developing competence. The model’s PEOU findings suggest that barriers to learning significantly constrain adoption
  • System Instability or Technical Problems: If systems are unstable, unreliable, or frequently unavailable, users perceive low ease of use despite objectively straightforward interfaces. Technical issues compound learning burdens
  • Insufficient Documentation and Help Resources: When reference materials are unclear or help systems are unhelpful, users perceive system operation as more difficult

3. Social Influence Barriers (Normative and Social Barriers)

The model identifies negative subjective norms as significant adoption barriers:

  • Peer Resistance or Skepticism: When respected colleagues resist or question system adoption, social norms inhibit individual adoption. The model shows that subjective norm significantly predicts behavioral intention (path = .13 in Study 1, .16 in Study 2), suggesting that peer opposition constitutes a barrier

Note: This article provides an overview based on a review of the literature. Readers are encouraged to consult the original publication for complete details.
