Personal Computing Acceptance – Thompson et al. (1991)
Model Identification
Model Name: Toward a Conceptual Model of Personal Computing Utilization
Authors: Ronald L. Thompson, Christopher A. Higgins, Jane M. Howell
Publication Date: 1991
Citation Information
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Toward a conceptual model of personal computing utilization. MIS Quarterly, 15(1), 125-143.
Why was the model made?
Thompson et al. developed their PC utilization model to address a significant gap in understanding personal computer adoption and usage within organizational settings. While prior models existed to explain technology acceptance, there was insufficient theoretical grounding specifically examining the factors that influence personal computing utilization among end users. The authors built upon Triandis' (1980) theory of expected consequences as a foundational framework, which posits that an individual's behavior is shaped by the expected consequences of performing it, with different components of those consequences carrying different weight. The motivation emerged from recognizing that personal computing was becoming increasingly prevalent in organizational environments, yet organizations struggled to understand why some employees readily adopted PCs while others resisted or underutilized them. The model sought to identify the key determinants of actual PC usage patterns, moving beyond simple acceptance measures to understand what drives sustained utilization behavior.
Specifically, Thompson et al. hypothesized that six primary constructs would predict PC utilization: social factors, affect (attitudes toward PCs), facilitating conditions, complexity, job fit, and long-term consequences. They recognized that these factors interact differently to influence how frequently and intensively individuals use personal computers in their work. The research was grounded in the organizational context, drawing 286 survey respondents from a single organization, making it highly relevant to managers seeking to understand employee PC adoption patterns. The model represented an important evolution in IS research by proposing that technology adoption is not binary (use versus non-use) but rather exists on a continuum of utilization intensity. Prior research had focused heavily on intention to use or acceptance, but Thompson's work directly examined actual usage behavior and the factors predicting variation in usage levels among users.
How was the model’s internal validity tested?
The researchers employed rigorous quantitative methodology to establish internal validity. They surveyed 286 participants from a single organization and used Partial Least Squares (PLS) analysis as their primary statistical technique. PLS was selected because it is well suited to testing structural models without the stringent normality assumptions associated with regression models. To establish internal validity, the researchers first developed comprehensive measurement scales for each construct. Exploratory factor analysis using traditional techniques initially extracted eight factors; refinement using PLS produced a more focused solution of seven factors rather than the six originally hypothesized. This finding itself underscored the multidimensional nature of the constructs being measured.
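As a rough sketch of what the traditional exploratory step looks like in code (using synthetic stand-in data, not the authors' instrument or analysis), factors can be extracted from an item-response matrix and each item assigned to its dominant factor:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for survey responses: 286 respondents answering
# Likert-style items (the real item pool and loadings are in the paper;
# the item count of 27 here is purely illustrative).
rng = np.random.default_rng(0)
n_respondents, n_items = 286, 27
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)

# Extract a fixed number of factors, as in a traditional EFA step.
fa = FactorAnalysis(n_components=7, random_state=0).fit(responses)

# Loadings: one row per factor, one column per item.  Each item is
# assigned to the factor on which it loads most strongly in absolute value.
loadings = fa.components_
assignment = np.abs(loadings).argmax(axis=0)
print(loadings.shape)
```

In a real analysis the loading pattern, not random data, would determine whether items cluster onto their hypothesized constructs.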
Cronbach's alpha reliability coefficients were calculated for all measurement scales. The alphas ranged from .60 (complexity) to .86 (facilitating conditions), with most scales demonstrating acceptable reliability above .70. The paper notes that the lower reliabilities of some scales indicate that future studies should develop stronger measures. The researchers examined discriminant validity through a principal components analysis of the measurement scales. Each construct loaded more highly on its hypothesized factor than on other factors, with one exception: some facilitating conditions items loaded slightly higher on social factors. Apart from that exception, the pattern indicated appropriate construct separation. Cross-construct correlations, reported in Table 6 of the study, revealed moderate relationships, suggesting that while the constructs were related, they measured distinct dimensions.
For example, complexity correlated with job fit (.28), long-term consequences (.17), affect (.48), social factors (.19), and facilitating conditions (.07), indicating appropriate independence alongside the expected relationships. The model's path coefficients were tested for statistical significance using jackknifing procedures, which do not assume normality. Four of the six hypothesized relationships were statistically significant at the p < .01 level, providing strong evidence for the model's internal structure.
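The reliability statistic reported above is straightforward to compute. A minimal sketch on hypothetical item data (not the study's data) shows how a multi-item scale's alpha is derived from item and total-score variances:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item scale answered by 286 respondents; the items share
# a latent component, so they correlate and yield a reasonably high alpha.
rng = np.random.default_rng(1)
latent = rng.normal(size=(286, 1))
scores = latent + rng.normal(scale=0.8, size=(286, 4))
print(round(cronbach_alpha(scores), 2))
```

Scales like complexity (alpha = .60) would fall below the conventional .70 benchmark this statistic is judged against.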
How was the model’s external validity tested?
External validity testing was limited in this research, which the authors explicitly acknowledged. The study was conducted with participants from a single organization, which represents a significant limitation to generalizability. The researchers noted that future tests of the theory across multiple organizations were needed to establish whether results would generalize to other contexts. However, the authors did take steps to address external validity within their single-organization context. They selected participants across different job levels and positions, recognizing that PC utilization might vary by role and responsibility. They distinguished between job-related factors (how a PC fits one’s job tasks) and individual perceptions, attempting to capture variance across different job contexts within the organization. The study measured actual utilization behavior through self-reported usage patterns rather than relying on intentions or beliefs alone, which strengthened external validity relative to intention-based models.
Usage was operationalized through multiple indicators: number of visits to the computing facility, time spent using personal computers, and frequency of use in job-related tasks. The researchers also collected data on respondents’ previous experience with personal computers. They recognized that experience might influence the relationships between antecedents and utilization behavior. By examining experience as a potential moderating variable, they attempted to understand whether the model’s predictive patterns held across users with different levels of familiarity with technology. The relatively large sample size for a single organization (n=286) provided adequate statistical power for detecting relationships. The researchers explicitly stated that one limitation was that “the respondents were from one organization… [hence] the generalizability of these results to other organizations remains to be determined.” This acknowledgment demonstrates appropriate caution about external validity claims.
How is the model intended to be used in practice?
Thompson et al. provided extensive managerial implications for how organizations could apply the model to enhance PC adoption and utilization. The model is intended to help managers diagnose barriers to PC utilization within their organizations and identify lever points for intervention. First, the model provides a diagnostic framework. Managers can assess the strength of each construct within their organizational context: Do employees perceive good fit between their jobs and PC functionality? Are there sufficient facilitating conditions (technical support, training)? Is affect toward PCs positive? Do social factors support usage? Are complexity perceptions high? Do employees perceive positive long-term consequences? By evaluating performance on each dimension, managers can identify which barriers are most problematic. Second, the model prioritizes intervention areas.
The findings showed that job fit and social factors had the strongest effects on utilization (path coefficients of .26 and .22, respectively), suggesting these should be primary focus areas. Long-term consequences also significantly influenced utilization (.10), but affect and facilitating conditions did not show significant direct effects. This suggests that while technical support is important, it is insufficient alone; managers must also address the social environment and perceptions of task-technology fit. Third, the model suggests specific managerial actions. For social factors, managers can leverage early adopters and organizational champions to promote PC usage; the authors suggest that encouraging visible organizational members to use PCs may be an effective way of championing use throughout the organization. For job fit, managers should assess whether PC applications align with actual job requirements and communicate these connections clearly to employees.
The model also highlights experience as a potentially important moderator. The authors speculate that “experience influences expected consequences of behaviors. The influence of experience on expected consequences could be tested by comparing the paths in the model across samples of experienced and inexperienced PC users.” This suggests longitudinal tracking of utilization patterns as users gain experience. Fourth, the model provides insight into the insufficient nature of purely technical interventions. The non-significant relationship between facilitating conditions and PC utilization suggests that providing abundant technical support, while necessary, may not drive increased usage if other factors are not addressed. This counters conventional wisdom that better support automatically yields better adoption.
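The diagnostic use described above can be caricatured as a weighted score built from the significant standardized path coefficients the study reports (.26 job fit, .22 social factors, .10 long-term consequences, -.14 complexity). This is only an illustration of how the coefficients rank intervention priorities; the construct z-scores and the unit below are hypothetical, and this is not a validated scoring instrument:

```python
# Significant standardized path coefficients as reported by Thompson et al.
PATHS = {
    "job_fit": 0.26,
    "social_factors": 0.22,
    "long_term_consequences": 0.10,
    "complexity": -0.14,  # higher perceived complexity lowers utilization
}

def predicted_utilization(z_scores: dict[str, float]) -> float:
    """Linear combination of standardized construct scores (a sketch)."""
    return sum(PATHS[c] * z_scores.get(c, 0.0) for c in PATHS)

# Hypothetical diagnostic-survey z-scores for one business unit: weak job
# fit and high perceived complexity drag the predicted score down.
unit_a = {"job_fit": -1.0, "social_factors": 0.5,
          "long_term_consequences": 0.0, "complexity": 1.2}
print(round(predicted_utilization(unit_a), 3))  # -0.318
```

The point of the sketch is that the coefficient magnitudes, not intuition, determine where an intervention buys the most predicted utilization.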
What does the model measure?
The Thompson model measures seven primary constructs, each operationalized through multiple items and scales:

1. Complexity: Measures the perceived difficulty of using a PC. Four items (CO1-CO4), with a Cronbach's alpha of .60, measured perceived ease and difficulty in learning and using computers.
2. Job Fit: Measures the perceived alignment between PC functionality and job requirements. Six items (JF1-JF6), with an alpha of .82, assessed whether PCs helped with job performance and whether the technology matched job tasks.
3. Long-Term Consequences: Measures perceived future payoffs from PC use, including career advancement and productivity gains. Six items (LT1-LT6), with an alpha of .76, assessed beliefs about future benefits and career impacts.
4. Affect: Measures emotional attitudes toward PCs, that is, whether individuals like or dislike them. Three items (AF1-AF3), with an alpha of .61, measured affective responses and liking for PCs.
5. Social Factors: Measures the perceived importance of others' opinions regarding PC use and social norms about technology. Four items (SF1-SF4), with an alpha of .65, assessed social influence and normative pressures.
6. Facilitating Conditions: Measures the objective and perceived availability of resources supporting PC use. Four items (FC1-FC4), with an alpha of .86, measured training availability, equipment access, and technical support.
7. Utilization: Measures actual frequency and intensity of PC usage. Three items (UT1-UT3), with an alpha of .64, assessed direct usage behavior, including frequency of use.

The model conceptualizes these seven constructs within a framework where complexity, job fit, and long-term consequences are grouped as "expected consequences of use," while social factors and facilitating conditions are treated separately, and affect is positioned as influencing utilization both directly and through its influence on other constructs.
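The measurement summary above can be encoded as a small data structure, which also makes it easy to flag the scales that fall below the conventional .70 reliability threshold (item counts and alphas are the values reported in this review):

```python
# Scales as summarized above: (number of items, Cronbach's alpha).
SCALES = {
    "complexity":              (4, 0.60),
    "job_fit":                 (6, 0.82),
    "long_term_consequences":  (6, 0.76),
    "affect":                  (3, 0.61),
    "social_factors":          (4, 0.65),
    "facilitating_conditions": (4, 0.86),
    "utilization":             (3, 0.64),
}

# Flag scales below the conventional .70 reliability threshold.
weak = sorted(name for name, (_, alpha) in SCALES.items() if alpha < 0.70)
print(weak)  # ['affect', 'complexity', 'social_factors', 'utilization']
```

Four of the seven scales fall below .70, which is why the authors call for stronger measures in future studies.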
What are the main strengths of the model?
The Thompson model possesses several notable strengths that contributed to its influential position in IS research:

1. Grounding in established theory: The model is anchored in Triandis' theory of expected consequences, providing strong theoretical justification rather than purely empirical discovery. This theoretical foundation gives the model explanatory power beyond mere statistical association.
2. Comprehensive construct coverage: Rather than focusing on single factors, the model integrates multiple dimensions (affective, social, cognitive, and contextual), providing a more holistic view of PC utilization determinants. This comprehensiveness acknowledged that technology adoption is multifaceted.
3. Actual behavior measurement: Unlike many technology acceptance models that measure intentions or beliefs, Thompson's model directly measures utilization behavior. This operationalization bridges the intention-behavior gap, addressing a known limitation of intention-based models.
4. Empirical validation with strong model fit: The model explained 24% of the variance in PC utilization (R² = .24), which was substantial for a behavioral outcome and demonstrated that the identified factors capture meaningful drivers of usage.
5. Clear managerial implications: The model provides actionable insights. By identifying that social factors and job fit are the strongest predictors while technical support alone is insufficient, the model guides managers toward evidence-based intervention strategies.
6. Sophisticated statistical methodology: The use of PLS analysis represented a methodological advance, allowing the researchers to test the complete model simultaneously while accounting for measurement error, a step beyond traditional regression approaches.
7. Recognition of complexity: The finding that complexity perception negatively influences utilization (path = -.14) validates a widely held assumption while also quantifying its magnitude.
8. Integration of personal and organizational factors: The model synthesizes individual attitudes, social context, job characteristics, and organizational support systems, recognizing that utilization is determined by factors at multiple levels.
What are the main weaknesses of the model?
Despite its strengths, the Thompson model has notable limitations:

1. Single-organization design: The most significant limitation acknowledged by the authors is that all data came from one organization. This severely constrains external validity and generalizability. PC adoption patterns, social norms, job characteristics, and organizational culture vary substantially across industries and organizations, making it unclear whether the findings would replicate elsewhere.
2. Limited operationalization of facilitating conditions: The authors acknowledge that facilitating conditions were operationalized narrowly as "technical support," although the theory encompasses broader resource factors. They note that "we only measured one aspect of facilitating conditions" and that "other measures of facilitating conditions should have been used, such as access to a PC or ease of purchasing software or hardware upgrades." This incomplete operationalization may explain the non-significant effect of facilitating conditions.
3. Low reliability for the complexity scale: The complexity construct had the lowest Cronbach's alpha (.60), below the conventional .70 threshold, indicating measurement issues. The authors note the "relatively poor reliabilities" and state that "future studies should develop stronger measures."
4. Non-significant relationships for some major constructs: Two hypothesized relationships were not statistically significant: the direct paths from affect to utilization (.02) and from facilitating conditions to utilization (-.04). This contrasts with prior technology acceptance research and suggests either model misspecification or contextual differences.
5. Underspecified indirect effects for affect and facilitating conditions: The indirect paths through job fit and long-term consequences suggest these variables may matter more through indirect mechanisms than through direct effects, but the theory did not adequately specify these mechanisms.
6. Self-report measurement: All data were self-reported rather than based on objective usage statistics, so respondents may have overestimated or underestimated their usage. The authors note that "a better approach would have been to obtain precise usage statistics through an electronic monitor to confirm or disconfirm the perceptions of the respondents. This approach has been suggested by many IS researchers (e.g., Robey, 1979)."
7. Cross-sectional design: The study captured a single point in time. Longitudinal data would better establish causal relationships and show how factors influence utilization trajectories as users gain experience.
8. Insufficient theoretical treatment of experience: While the authors discuss experience as potentially important, they do not integrate it into the model. Experience may change how all model constructs influence utilization, representing an important unmodeled mechanism.
9. Discriminant validity issue with facilitating conditions: Some facilitating conditions items loaded slightly higher on the social factors factor, suggesting these constructs were not perfectly distinguished in measurement.
How does this model differ from older models?
The Thompson model represents a significant evolution from prior technology adoption frameworks in several ways:

1. Shift from intention to behavior: Earlier models like the Technology Acceptance Model (Davis, 1989) primarily measured behavioral intention as the outcome variable. Thompson's model directly measured actual usage behavior, recognizing that intention-behavior correspondence is imperfect and that actual utilization is the managerially relevant outcome.
2. Integration of multiple theoretical perspectives: While prior models drew from single theoretical traditions, Thompson's work integrated Triandis' theory with concepts from expectancy theory, attitude theory, and social influence theory. This eclecticism provided broader theoretical coverage than single-perspective models.
3. Explicit modeling of social factors: Prior IS models gave limited attention to social influence and social norms. Thompson's inclusion of social factors as a primary construct acknowledged that technology adoption occurs within social contexts where peers, supervisors, and organizational norms shape adoption decisions.
4. Comprehensive expected consequences framework: Thompson operationalized "expected consequences" across near-term factors (job fit, complexity) and long-term outcomes (career advancement). This temporal framing was more sophisticated than models treating all consequences equivalently.
5. Organizational context: While Davis' TAM was framed around general perceptions of usefulness and ease of use, Thompson explicitly situated the model in organizational environments where job fit and organizational factors shape technology adoption. This made the model more directly applicable to workplace technology adoption.
6. Affect as a distinct construct: By including affect as a construct distinct from perceived usefulness or perceived ease of use, Thompson's model acknowledged the emotional dimensions of technology adoption: whether people like the technology independent of rational assessments of usefulness.
7. Path model structure: The model specifies relationships in which some factors (job fit, complexity) operate through effects on long-term consequences, while others (social factors) have direct effects. This structural sophistication exceeded earlier models' simpler linear structures.
8. Empirical evidence on attitude-behavior theory: The finding that affect did not significantly predict utilization (contradicting some implications of Fishbein and Ajzen's theory) provided evidence that technology adoption contexts diverge from traditional attitude-behavior theory, adding nuance to IS theory.
What Barriers to Technology Adoption does the model identify?
The Thompson model identifies multiple barriers to personal computing adoption and utilization that operate across cognitive, affective, social, and contextual dimensions:

1. Perceived Complexity: Complexity, the perceived difficulty of learning and using personal computers, is a significant barrier to adoption. The study found that higher complexity perceptions were associated with lower utilization (path = -.14, p < .01). This barrier manifests when users view PCs as difficult to learn or operate, regardless of the system's objective complexity. The negative relationship between complexity and long-term consequences suggests that perceptions of difficulty also undermine beliefs about future benefits.
2. Poor Task-Technology Fit: Job fit emerged as perhaps the most important factor, with a strong positive path to utilization (.26, p < .01); its inverse, poor job fit, therefore represents a significant barrier. When users perceive that PCs do not align with their actual job tasks or requirements, they have little motivation to use them. This barrier is particularly acute when organizational mandates to use PCs do not match actual work processes. The authors note that communication about PC applications' relevance to job performance is essential for overcoming this barrier.
3. Negative or Weak Affect: While affect did not show a statistically significant direct effect in the model, the research acknowledges that emotional attitudes toward computers represent potential barriers. If users dislike computers or feel negative emotions toward them, they may avoid usage regardless of utility. The authors note that "PCs do not evoke strong emotions, either positive or negative, among managers or professionals. If PCs are seen simply as tools, and not as technology to be liked or disliked, then affect would not have an impact." This suggests that in some contexts, weak positive affect represents a utilization barrier.
4. Unsupportive Social Context: Social factors showed a significant positive effect on utilization (.22, p < .01), meaning that a negative social context represents a barrier. This includes: (a) lack of peer support or adoption by colleagues; (b) organizational norms that do not support PC usage; (c) absence of respected champions or role models using PCs; and (d) lack of technical leadership in the organization promoting adoption. The barrier operates through social pressure and normative influence: if respected others do not use PCs, individuals question whether adoption is appropriate.
5. Insufficient Facilitating Conditions: Although facilitating conditions did not show a significant direct effect on utilization (path = -.04), the authors attribute this to a measurement issue rather than evidence that support is unimportant. The barrier involves inadequate training opportunities, insufficient technical support, poor accessibility of computing resources, and lack of assistance in adopting systems. The authors emphasize that "technical support provided by the organization appears to be only one type of facilitating condition; others include the ease with which software or hardware upgrades can be purchased or the extent to which home computers are an advantage in the job package."
6. Negative Perceptions of Consequences: Long-term consequences showed a significant effect on utilization (.10, p < .01). When employees perceive that PC use will not lead to tangible benefits such as improved job performance, career advancement, productivity gains, or professional development, they have little incentive to invest effort in learning and using systems. The barrier manifests as skepticism about return on investment, doubts about productivity improvements, and uncertainty about career relevance.
7. Organizational and Job Context Misalignment: Beyond job fit, broader organizational factors create barriers. The authors discuss how "certain factors that have a significant influence on EUC success" relate to organizational context. If organizational structures, performance evaluation systems, or job designs do not reward or recognize PC usage, adoption remains low even when systems are technically sound.
8. Experience Gaps: While not explicitly modeled, the authors identify that inexperience with personal computers creates barriers. They hypothesize that "experience influences expected consequences of behaviors. The influence of experience on expected consequences could be tested by comparing the paths in the model across samples of experienced and inexperienced PC users." New PC users may perceive higher complexity, less favorable job fit, and fewer positive consequences until experience accumulates.

The model thus reveals that barriers operate at multiple levels. Individual perceptions of complexity and affect represent cognitive-affective barriers, social norms and peer behavior create social barriers, and organizational support systems and actual job requirements create contextual barriers. Addressing adoption barriers therefore requires multi-level interventions targeting individual cognitions, social influences, and organizational structures simultaneously.
What does the model instruct leaders to do in order to reduce these barriers?
The Thompson et al. paper provides explicit managerial guidance for reducing the identified barriers.

For Complexity Barriers: The model suggests reducing perceived complexity through improved training and communication about ease of use. The authors note that "an important facilitating condition is the ease with which an individual can access a PC." Leaders should consider the context in which PCs must be learned, and technical support should be readily available and responsive. The authors recommend that "education aimed at strengthening the expected consequences of using PCs, such as greater effectiveness and efficiency in performing job functions, could have a positive influence on utilization." This means contextualizing training within job-relevant examples rather than generic computer skills instruction, and helping users understand not just how to use PCs, but why and how they apply to their specific work.
For Job Fit Barriers: A central managerial intervention relates to job fit, which emerged as the strongest predictor of utilization (.26). The authors suggest: "One partially controllable factor may be the degree of correspondence between job tasks and the PC environment (i.e., job fit). Specifically, communication aimed at increasing the awareness of potential applications of PC technology for current job positions may influence the perception of job fit." Leaders should: (1) conduct careful job analysis to identify tasks where PC usage truly enhances performance; (2) communicate job fit explicitly to employees; (3) customize training and applications to demonstrate job relevance; (4) modify job descriptions to incorporate PC-dependent responsibilities when appropriate. In short, leaders must be explicit and proactive in helping employees understand the relevance of applications to their work.
For Social Factor Barriers: Given that social factors showed one of the strongest direct effects on utilization (.22), the authors strongly recommend leveraging organizational champions: encouraging visible organizational members to use PCs may be an effective way of championing use throughout the organization. More specifically, leaders should: (1) identify and empower early adopters and opinion leaders; (2) publicize the successes of enthusiastic PC users; (3) involve respected managers and subject matter experts in promotion efforts; (4) create peer learning communities where experienced users support newcomers; (5) establish organizational norms supporting PC adoption through visible leadership endorsement. The model suggests that "social factors may also be a partially controllable factor; for example, it may be possible to influence norms by publicizing the successes of early adopters of technology." Leaders can shape social norms through deliberate communication and visibility management.
For Facilitating Conditions: Although facilitating conditions did not show strong direct effects, the authors argue this reflects measurement limitations rather than unimportance. They recommend that leaders not rely solely on technical support but ensure comprehensive facilitation: "If the organization has positive norms concerning PC use, it would be disposed to providing technical support." Facilitating conditions improvements include: (1) ensuring convenient physical access to PCs and computing facilities; (2) providing responsive help desk support; (3) offering flexible, accessible training programs; (4) supplying documentation and reference materials; (5) normalizing PC ownership across the organization by providing equipment as needed. Importantly, the authors note that their operationalization was limited and that organizations should address facilitating conditions more comprehensively than their study measured.
For Affect Barriers: While affect showed no significant direct effect in the Thompson model (unlike in some other contexts), the authors acknowledge that emotional acceptance matters. They note that PCs may be "seen simply as tools, and not as technology to be liked or disliked," suggesting that affect may be context-dependent. Leaders addressing affect should: (1) destigmatize technology for those with computer anxiety; (2) provide positive experiences through hands-on learning in low-pressure environments; (3) highlight enjoyable aspects of computing; (4) normalize emotion around technology adoption by acknowledging that comfort takes time.
For Experience-Based Barriers: The authors highlight that "experience influences expected consequences of behaviors" and recommend tracking how expectations and utilization patterns change over time. Leaders should: (1) implement longitudinal tracking of user adoption curves; (2) provide ongoing support, recognizing that barriers and facilitators change as experience accumulates; (3) expect initial underutilization while users gain competence; (4) design career development pathways that build computing skills progressively.

Integrated, Multi-Level Approach: Critically, the authors argue that "future research on computer utilization within the IS context can productively use Triandis' work as a frame of reference," and managers should likewise recognize that multiple barrier types operate simultaneously. No single intervention addresses all barriers. Instead, "organizations seeking to enhance PC adoption should target multiple intervention points." The authors specifically emphasize that "technical support provided by the organization appears to be only one type of facilitating condition" and that "if the organization has positive norms concerning PC use, it would be disposed to providing technical support." Social support and organizational norms must therefore be established alongside technical infrastructure. Successful PC adoption requires coordinated attention to complexity reduction through training, clear communication about job fit and task relevance, cultivation of organizational champions and peer support, provision of technical resources and support, and management of long-term expectations about benefits and career implications.
Following Models or Theories

Following Models: Taylor and Todd (1995), Understanding Information Technology Usage: A Test of Competing Models; Goodhue and Thompson (1995), Task-Technology Fit and Individual Performance; numerous subsequent technology adoption models building on these foundations.

Following Theories: Subsequent refinements of the Technology Acceptance Model (TAM); extensions incorporating social influences and organizational factors; models examining utilization patterns rather than mere adoption.

Series Navigation
This article is part of a Technology Adoption Models Literature Review series:
1. Ram (1987) - A Model of Innovation Resistance
2. Thompson et al. (1991) - Toward a Conceptual Model of Personal Computing Utilization
3. Taylor and Todd (1995) - Understanding Information Technology Usage: A Test of Competing Models
4. Goodhue and Thompson (1995) - Task-Technology Fit and Individual Performance

References
1. Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Toward a conceptual model of personal computing utilization. MIS Quarterly, 15(1), 125-143.
2. Triandis, H. C. (1980). Values, attitudes, and interpersonal behavior. Nebraska Symposium on Motivation, 1979. Lincoln, NE: University of Nebraska Press.
3. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
4. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
5. Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
6. Porter, L. W., & Lawler, E. E. (1968). Managerial attitudes and performance. Homewood, IL: Irwin-Dorsey.
Note: This article provides an overview based on the comprehensive literature review. Readers are encouraged to consult the original publication for complete details.
