Undoubtedly the second oldest concept in the study of innovation, next to that of "innovation" itself, is "adoption". The use of this word is so much a part of the common language that we all feel that we understand its meaning intuitively. Because of this assumption, the term has come to be used with rather less precision than the norms of social or policy analysis should allow. Nor have the various uses of the term been subjected to much systematic analysis. In fact, the problems with this concept have contributed to perpetuating some major misconceptions about the innovation process generally, and the techniques for "transferring technology" in particular. This paper explores some of these problems with the concept of "adoption", and suggests some ways of overcoming them.
Adoption is the original dependent variable in innovation research -- the criterion which is sought. It is the desirable property of innovative systems which change agents seek to enhance. "Innovation" is broadly defined as any change in structure, design, products, or processes in which there is a definable new element introduced into the system; the process is essentially the same for technologies of all degrees of "hardness", although the specifics may vary. By far the largest body of knowledge relating to innovation is the so-called "correlates of innovativeness" research -- the information about which characteristics of people or organizations are associated with higher levels of adoption. The voiced or unvoiced assumption underlying the examination of correlates of innovativeness is causal: If we manipulate the characteristics of organizations or individuals so that they more closely resemble those of the highly innovative, we will make the organizations or individuals themselves more innovative (that is, more prone to "adopt" our innovations).
There is little doubt that innovation research, and the technology-transfer programs based on it, have consistently exhibited what Rogers (1975) called the "pro-innovation bias" -- a tendency to assume that adoption of the innovation should be carried out by all possible adopters. In accounting for this slant, it is useful to remember that innovation research began as market research -- that is, studies aimed at identifying ways to get people to buy more goods and services, particularly in agriculture. Its transition into "diffusion research", studying the spread of new ideas or practices through a population, was a response to the observation that the existence of the innovation alone was not sufficient in most cases to produce adoption. The development of implementation research in recent years is a further reaction to the failure of diffusion or communication alone to promote adoption, leading in turn to an examination of the internal dynamics of innovative individuals or organizations (Giacquinta, 1978). In this transition, the vocabulary and guiding metaphors have also changed. The market research approach is essentially derived from economics; the diffusion approach, from a combination of cognitive psychology and political science. Innovation research has both benefited and suffered from not being the property of a single academic discipline.
The picture is further complicated by the tendency to transfer to the analysis of organizations the models of innovativeness originally developed for individuals, often without careful analysis of the ways in which the two types of "systems" were alike or unlike (Katz, 1962). Most of this paper is concerned with organizational innovation, and we will generally use the term "organization" as a referent. However, most of the same points might be usefully made in other contexts about the analysis of individual adoption.
There is nothing inherently wrong with market research or even with a pro-innovation value system. Many innovations currently on the market are good ideas in terms of almost any value system, and encouraging their spread can be viewed as virtually a public duty. But any organizational innovation process involves many people and many possible sets of values. It is certainly proper for the analyst, or the innovator, to define a set of predominant values, such as "ultimate interests of the client", to take precedence over more local values. But one should not forget that others may not share the same wholehearted commitment, and that they tend to act and make decisions based on their own values, not necessarily on those of the analyst. Evaluation of organizations and innovations requires a single value set; accounting for their behavior, on the other hand, must admit the operation of different value sets.
definition of the term "adoption" is found in
If these assumptions cannot be made, it is very hard to justify the use of "adoption" as a dependent variable that is interesting in any policy-relevant sense, at least. The first two assumptions constitute the basis for generalizing the analysis; the third constitutes the value judgment on which the analysis is based. Generalizability of findings and clarity of values and purposes are essential to useful policy research.
It is the appropriate function and responsibility of the analyst to define the criteria in terms of which he judges the innovation to be a desirable or undesirable result for society. These criteria may describe outcomes (future configurations of desirable relationships) or processes (suitable methods of achieving outcomes). These are approximately equivalent to Rokeach's (1973) elaboration of Dewey's "terminal" and "instrumental" values. When such criteria serve to define the desirability of "adoption", the implicit assumption is that the criteria hold in settings other than the one under immediate consideration. Thus, generalizability of the value setting is as important as that of the circumstances involved.
To say that one must clarify the value context in which an adoption decision is made is not to say that one must accept those values personally. It is to say that one must recognize those values as genuine for those who hold them. For Yin (1976) to speak of "bureaucratic self-interest" or Feller (1977) to describe "conspicuous production" in local governments is not to endorse these value systems as appropriate terminal values for public policy. But adoption modeling is an explicit or implicit decision making approach, and as such must embody values as the basis for such decisions. If the values of the potential adopters are not analyzed explicitly, the values used are likely to be by default those of the analyst. Whether they will provide information useful in helping predict the choices of innovation actors is likely to be a matter of chance.
We described "adoption" earlier as the basic dependent variable in innovativeness research. Its major value is to distinguish between the result-states of innovation and non-innovation. Innovators adopt; non-innovators do not adopt. It is a convenient dependent variable in social-science terms. Not only does it neatly embody a crucial value choice, as we noted earlier; it has the major analytical virtue of being coded dichotomously. Both its interpretation and its analysis are thus considerably simplified. Unfortunately, the counterpart concept of "rejection of innovation" is not so easily dealt with (and is, in fact, not nearly so frequently analyzed). It is used in the literature in two rather distinct senses, usually without clarification of the differences:
· Active rejection: Consideration, perhaps even trial of the innovation, followed by passing it over in favor of some other course of action.
· Passive rejection: Lack of attention to the innovation, leading to its never being really considered for incorporation into the system.
These two situations describe widely different sets of actions. Interpreting them as cases of the same behavior (non-innovation), as most adoption analyses do, says that these behavioral differences are insignificant in terms of the value choices being made. This assumption may be justified. However, it should be made explicitly rather than implicitly.
The issue of whether or not there is a definable "adoption point" in the innovation process should be carefully distinguished from the issue of the degree of imitation, or the similarity of the innovation in different settings. The question of similarity has received useful attention in recent years (e.g., Hall and Loucks, 1978); it has too long been assumed that "adoptions" in different settings were identical without testing this assumption. There is no question that imitation does take place.
If one is convinced that one has a good or "validated" innovation (in the sense of the National Diffusion Network's model for transferring educational technology between local school districts), then one is justified in concern over adaptations or modifications which threaten its essential character or its ability to achieve the purposes which led one to define it as "good" in the first place. "Imitation" is only a descriptive term. It refers to the similarity of the results of the process or the degree of parallel process in installing the innovation. It does not necessarily refer to both aspects together. Thus, two organizations may have what looks like exactly the same innovation without any imitation at all, simply as a result of parallel evolution (Naroll, 1965). By contrast, two organizations may go through much the same innovation-decision process, and end up with innovations which look very different because of the differences in context. Using "adoption" as a dependent variable, then, tends to obscure the process of innovation.
Social science research is accustomed to creating and manipulating "concepts", or "abstractions from observed events...to simplify thinking by subsuming a number of events under one general heading" (Selltiz and others, 1976). "Adoption of innovation" is such a concept. One studies adoption, as we noted earlier, because one believes that one can then usefully generalize to settings other than the original. Thus, studying how farmers adopt hybrid seed corn can presumably help us learn how to introduce new reading programs into the public schools. Whatever value the concept of "adoption" has lies in its generalizability, since, as we will see, it means many different things in practice. It is not a unique and consistent act which simply happens to occur in many different settings.
It is a besetting sin of social science research, and of administrative practice based on that research, that the use and definition of key concepts are not subjected to periodic and rigorous criticism. The value of using any concept lies in its ability to generate useful understanding of human behavior, not in its inherent inner beauty. We have a tendency to use concepts again and again, to "reify" them, without carefully thinking about what their real behavioral referents are, or to what degree their meaning to participants is situation-specific. Meyer (1979) notes that interpretation of most measures of organizational phenomena depends heavily on the history and institutional contexts of the particular organization under study. Increasingly, analysts studying the transfer of technology are coming to the same conclusion: a technology in one setting frequently means and looks very different from the "same" technology in another (Feller, 1979), and implicit equation of the two will be misleading.
Closely related to the problem of clear conceptual definition is the problem of operational measurement of the concept in a form which can be qualitatively or numerically analyzed. The process of creating operational measurements is generally uncomfortable for most social scientists to talk about -- at least, very few studies spend much time discussing just why their measures are in fact valid reflections of the concepts under discussion. Technology-transfer practitioners are less ambiguous than researchers about defining what constitutes "success", but their measures are likely to be quite situation-specific, as we noted earlier, and even less generalizable. No operational definition can be any more satisfactory than the concept which underlies it. But even a clearly defined, bounded, and generally valid concept can be spoiled by a bad measurement, and its generalizability thus called into question.
The more the process of innovation is studied as a whole, the more difficult it becomes to define "adoption" unambiguously. "Adoption" as a concept refers to some definable act of decision (conscious or subconscious) on someone's part. The problem is locating such actions and interpreting them. Organizational innovation research has suffered in this area from its origins in individual market research. Where individual "adoptions" are considered, defining such a key act is usually relatively easy. It is usually the purchase or acquisition act which is centrally valued by the analyst. One can reasonably easily determine if a farmer has bought hybrid seed corn, or if a woman is using birth-control pills. A single act (or a limited set of acts) serves as the criterion for judging the outcomes of the process, and the process itself is usually unexplored.
But analyses of organizational innovation, particularly in relation to technology, are usually concerned (or should be) with more than a single act -- variables relating to use of technology are probably more important, as well as more complex, in getting at the impact of the transfer process. Once we admit more than one dependent variable, we are obligated to examine the processes that interrelate them, and to consider the decision making underlying those processes.
Recent studies of complex organizational processes which have tried to explore the dynamics of innovation have almost uniformly found that organizational innovation is not a matter of one or even a few decisions (Lambright and others, 1977; Eveland and others, 1977; Yin and others, 1978). Rather, it is typically a complex set of interlocking decisions or defaults: some large in scope, some small, some made at the top of the hierarchy, some made lower down. Many of these decisions are not "optimal", in the sense of employing decision criteria relevant to the whole system under consideration; frequently, they tend to be sub-optimizations made in the interests of particular individuals or groups at particular points in time. Reconciling these different suboptimal activities is usually a "political" process. This should have come as no surprise; the general outlines of this analysis go back at least as far as March and Simon (1958), even though innovation analysts are just rediscovering them.
But if the innovation process is, as these studies suggest, a series of complex and contingent decisions, then the logical question is just which one of these decisions is in fact the crucial decision -- the one that marks the point in time at which the organization moved from the category of not having the innovation to the category of having it. Reference to the concept of "adoption" requires conceptually that one be able to make this distinction.
In research practice, "adoption" in organizational cases is usually assessed largely in retrospect. That is, we look at an organization, determine by "the weight of the evidence" that the innovation in question is or is not present in it, and conclude that there must be some point in the past at which the organization "adopted" it. But it is usually almost impossible to define just what that past "critical decision" really was. In the absence of such a clear behavioral referent for the concept, "adoption" frequently becomes more of an analytical construct than an action description. This does not usually prevent analysts from proceeding to generalize to other behavioral situations -- usually without clarifying their assumed basis for that generalization.
Measurement strategies and methods tend to reflect the problems of concept definition. When one is measuring individual "adoption", where the basis for the definition is a definable and retrievable decision-act, one has a choice of looking at "hard data" sources such as purchase records, or using recall data of various sorts. Each strategy has its costs and benefits, but either can be relied on within certain limits to produce reliable information about the concept. But in a complex organizational innovation sequence, how should this problem be addressed?
The first problem is to find a decision at all. As we noted, innovation-process research suggests that the sequence is shaped as a series of decisions and non-decisions -- a set of acts of individuals and groups which affect all stages of the process. The adoption analyst has two choices:
· Identify one decision (or a few) which one believes to be critical in some sense, and use the occurrence of that decision as a test of adoption.
· Use a "commitment" measurement, assessing the degree to which a series of decisions aggregate toward the same conclusion; when the degree of commitment reaches a certain point, either quantitatively or qualitatively, assume adoption.
Either of these approaches can be employed with either a "hard" or "soft" data strategy. Organizational records can be searched for the key decision's occurrence (Gordon and others, 1974). The presumably crucial decision-maker (usually the top executive or line official) can be queried about the organization's adoptions (Becker, 1970). Aggregate indices of commitment can be assembled from organizational records (Palumbo, 1969; Mohr, 1969), or from observation of the innovation as it is practiced (Hall and Loucks, 1978).
Again, each strategy has costs and benefits. Records are more precise, but tend to exclude certain items systematically -- items which may be crucial to the research aims (Garfinkel, 1967). Impressionistic data is easy to collect, but may be limited by the choice of people to supply it; how much a top manager, for example, knows about what his organization is really doing is a systematic function of the nature of the organization itself. No strategy for defining or measuring adoption produces unambiguous results that make it possible to generalize fully about "adoption of innovations".
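The difference between the two approaches can be made concrete with a small illustrative sketch. This is not drawn from any of the studies cited; the decision records, weights, and threshold are purely hypothetical, chosen only to show how a single-decision test and an aggregate commitment index can operationalize "adoption" differently from the same evidence.

```python
# Hypothetical decision log for one organization considering an innovation.
# Each record: (actor, decision, weight toward adoption: +1 favors, -1 opposes).
decision_log = [
    ("director", "approved pilot budget", 1),
    ("line staff", "attended training", 1),
    ("middle management", "deferred rollout", -1),
    ("line staff", "used innovation in daily work", 1),
]

def adopted_by_key_decision(log, key_decision="approved pilot budget"):
    """Strategy 1: adoption == occurrence of one decision deemed critical."""
    return any(decision == key_decision for _, decision, _ in log)

def adopted_by_commitment(log, threshold=2):
    """Strategy 2: adoption == aggregate commitment crossing a threshold."""
    return sum(weight for _, _, weight in log) >= threshold

print(adopted_by_key_decision(decision_log))  # True: the "critical" decision occurred
print(adopted_by_commitment(decision_log))    # True: net commitment 2 meets threshold 2
```

With a stricter threshold (say, 3), the commitment measure would classify the same organization as a non-adopter while the key-decision test still reports adoption -- the point being that the operational choice, not the behavior, determines the "finding".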
When one defines an operational measure of a concept one implicitly assumes that the measure is sufficiently representative as to permit generalization to situations in which other measures might be selected. The concept of "adoption" in organizations has been subject to an extremely wide range of operational measurements, and it is questionable whether all of these are in fact appropriate representations of the concept in question. It is not within our present scope to provide a complete tabulation of possible measures, as Steers (1975) did with the idea of "organizational effectiveness". However, let us catalogue a number of specific actions which have been used in reputable innovation studies as embodiments of the general act of "adoption":
Many others could be cited. It is not our intention to criticize any of these operational definitions (although some seem more intuitively generalizable than others). Each makes sense within the value system of the research project that employs it. But the key problem is that each describes very different actions on the part of the organization and its members.
The ultimate contribution of social science research is not to explore relationships among abstract concepts. It is to help people understand the actions of individuals and organizations in the social context. Concepts are tools; we use them only because they help us to translate our understanding of one set of actions into an understanding of another set of actions. In any research, therefore, the burden must be on the analyst to define just why his chosen set of actions tells us about some other sets of actions. In short, he must define the common component of the different actions. In innovation research, this is seldom done. It is left to the reader to assume that conclusions drawn from, say, analysis of purchasing decisions, can be used to draw conclusions about policy acquiescence issues. This is unfair to both the reader and the field, and makes the development of theory more difficult.
The increasing attention to the questions of innovation process -- the sequence of decision-making and other acts which shape the eventual outcome -- has led to the development of a number of stage models of innovation. It is now common to trace the evolution of an innovation from its inception as a general idea to its "routinization" or full acceptance as an integral part of the organization's behavioral repertoire. While these models vary considerably in sophistication and complexity, they generally share a distinction between two general phases of the process -- what Zaltman and others (1973) called the "initiation" and "implementation" phases. Thus, most innovation process models require some act of "adoption" as a criterion for determining passage from one stage to the other.
This distinction has served effectively to preserve the utility of the familiar concept of "adoption" while at the same time recognizing that the innovation process is highly variable and contingent. The term "implementation" has come to be used as a code word for all the things which can go wrong in the process -- and by implication, "adoption" refers to the original, correct, starting point. This approach is essentially an adaptation of the political or legislative-behavior model, which separates the act of legislating (adoption) from the execution of the law (implementation). This analysis is used explicitly in analyses such as those of Pressman and Wildavsky (1973) and Bardach (1977), and strongly questioned by Elmore (1979). The crucial assumption in this mode of analysis, of course, is that there is some form of correct implementation which, if it were to be achieved, would "truly" carry out the law or adopted policy. Clearly, the more complex the technology which underlies the prescription, the more closely the correct implementation can be specified.
As we noted in the original definition of the term "adoption", both the idea and its uses are elements to be accounted for. The duality of this definition has until recently been less than fully explored. At first, any variation in either element was "noise" in the analysis. Then, implementation analysts highlighted that variation in uses was a fact of life. Recently, attention has been given to the possibility of variation in the idea and its meaning as well. Not only do different people perhaps hold different views about the innovation and its characteristics; they may even change the idea itself in the course of working with it. This concept of "reinvention" (Berman and McLaughlin, 1974; Larsen and Agarwala-Rogers, 1977; Rogers, 1977) calls into question the basis for any real adoption analysis. If people define the innovation differently, and create new elements for it during the innovation process, to what degree do they meaningfully "adopt" the same thing? At a bare minimum, any reasonable definition of "adoption" ought to include distinct measurements of the two aspects which Eveland (1977) called "tool" and "use", and Pelz and Munson (1979) call "technological content" and "embedding content". Given that each of these elements is in itself the product of a number of decisions, made at different times by different people in different ways, finding specific unambiguous operational measures for them may be difficult.
A related question is the problem of accounting for variation of opinion and activity within the organization under study. Most of the studies noted earlier have considered "adoption" to be a variable property of the organization as a whole, analogous to its "size" or "complexity". Thus, they have sought a single measure to categorize the status of the organization as a whole. But intra-organizational process analysis reveals clearly that behavior relating to innovations is not uniformly distributed through the organization in most cases. Some parts are likely to be highly committed to the idea, while others have never heard of it. If one interviews only one person in the system, gathering data on "adoption" is likely to be easy. Interpreting that data is not so easy, since it is extremely risky to project that opinion over the whole system. As Roberts and others (1978) note in their review of recent findings on data aggregation, the use of aggregate measures frequently conceals as much interesting behavior as it reveals.
Given the theoretical and methodological difficulties with using the concept of adoption which have been outlined here, what can we do? At the very least, our observations should suggest that a healthy degree of skepticism should greet any statements about "adoption of innovations". Whenever we encounter findings which refer to "adoption" and "adopters", we should ask certain questions:
In the long run, perhaps the field of innovation research would be better served by an indefinite moratorium on the use of the word "adoption". If researchers were to describe the actions they choose to study not as versions of some abstraction called "adoption" but rather as straightforward behavior, in the context of other organizational behavior, it would probably be easier to develop a body of knowledge about innovation as a process. The problem of generalizability is no greater when speaking about actions than when using concepts which do not have clear behavioral referents.
Coombs (1964) outlined a "theory of data" in which there were three general stages:
Most innovation research has been conducted at Stage 3. In this paper, I have suggested that perhaps some attention to a central Stage 2 topic is needed -- that is, how do we decide that particular observations which we make in an organization constitute a datum called "adoption"?
In a policy sense, the issue is "what constitutes the behavior which our policies ought to encourage". If one is in the business of transferring technology from one group to another, how does one know when one succeeds? The present contribution of innovation process research is largely to call into question many of the casual assumptions of past studies (and programs) that this criterion behavior was easily and clearly defined. Without a simple dependent variable such as "adoption" -- and this paper has tried to show that it is neither simple nor even usable -- program designers and researchers alike will be forced to specify criteria explicitly. The effective utilization of technology cannot help but be served by this development.
Bardach, E. (1977). The Implementation Game. Cambridge, Mass.: MIT Press
Becker, M. H. (1970). "Sociometric Location and Innovativeness: Reformulation and Extension of the Diffusion Model". American Sociological Review. 35:267-282
Berman, P. and McLaughlin, M. W. (1974). Federal Programs Supporting Educational Change: A Model of Educational Change. Santa Monica, Cal.: RAND Corporation
Coe, R. M. and Barnhill, E.A. (1967). "Social Dimensions of Failure in Innovation". Human Organization. 26(3):149-56
Coombs, C. (1964). A Theory of Data. NY: Wiley
Elmore, R. (1979). "Mapping Backward: Using Implementation Analysis to Structure Policy Decisions". Paper presented to the Annual Meeting, American Political Science Association
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall
Giacquinta, J. B. (1978). "Educational Innovation in Schools: Some Distressing Conclusions about Implementation". Paper presented to the American Educational Research Association, March, 1978
Gordon, G. (1974). "Organizational Structure, Environmental Diversity, and Hospital Adoption of Medical Innovations", in Kaluzny, A. (ed.). Innovation in Health Care Organizations. Chapel Hill:
Gross, N., Giacquinta, J. B. and Bernstein, M. (1971). Implementing Organizational Innovations. N.Y.: Basic Books
Hall, G. and Loucks, S. (1978). "Innovation Configurations: Analyzing the Adaptations of Innovations". Unpublished paper
Kaluzny, A. (1974). "Innovation of Health Services: A Comparative Study of Hospitals and Health Departments", in Kaluzny, op.cit.
Katz, E. (1962). "Notes on the Unit of Adoption in Diffusion Research". Sociological Inquiry. 32(2):3-9
Larsen, J. and Agarwala-Rogers, R. (1977). "Reinvention in
Lambright, W. H., Teich, A. H., and Carroll, J. D. (1977). Adoption and Utilization of Urban Technology: A Decision-Making Study. Report to the National Science Foundation, Syracuse Research Corporation
March, J. G. and Simon, H. (1958). Organizations. N. Y.: Wiley
Meyer, M. W. (1979). Personal Communication.
Milio, N. (1971). "Health Care Organizations and Innovation". Journal of Health and Social Behavior. 12:163-73
Mohr, L. G. (1969). "Determinants of Innovation in Organizations". American Political Science Review. 63(1): 111-126
Mytinger, R. E. (1978). Innovation in Local Health Services. PHS Publication 1664-2
Naroll, R. (1965). "Galton's Problem: The Logic of Cross-Cultural Analysis". Social Research. 32:428-51
Palumbo, D. J. (1969). "Power and Role Specificity in Organizational Analysis". Public Administration Review. 29(3):237-48
Pelz, D. and Munson, F. (1979). "The Innovating Process: A Conceptual Framework". Unpublished paper.
Pressman, J. and Wildavsky, A. (1973). Implementation. Berkeley: University of California Press
Roberts, K., Hulin, C., and Rousseau, D. (1978). Developing an Interdisciplinary Science of Organizations. San Francisco: Jossey-Bass
Rogers, E. M. with Shoemaker, F. (1971). Communication of Innovations. N.Y.: Free Press
Rogers, E. M. (1977). "Reinvention During the Innovation Process", in Radnor, Feller and Rogers, op.cit.
Rokeach, M. (1973). The Nature of Human Values. N.Y.: Free Press
Sapolsky, H. (1966). "Organizational Structure and Innovation". Journal of Business. 41
Selltiz, C. and others (1976). Research Methods in Social Relations. N.Y.: Holt, Rinehart and Winston
Steers, R. M. (1975). "Problems in the Measurement of Organizational Effectiveness". Administrative Science Quarterly. 20:546-558
Tushman, M. (1974). Organizational Change: An Exploratory Study and Case History
Walker, J. (1969). "The Diffusion of Innovations Among American States". American Political Science Review. 63:880-889
White, M. (1975). Management Science in Federal Agencies. N.Y.:
Yin, R. (1978). Changing Urban Bureaucracies: How New Practices Become Routinized
Yin, R. (1976). A Review of Case Studies of Technological Innovations in State and
Zaltman, G., Duncan, R., and Holbek, J. (1973). Innovations and Organizations. N.Y.: Wiley