Diffusion, Technology Transfer, and Implementation: Thinking and Talking About Change

J. D. EVELAND

Cognos Associates

 

Knowledge: Creation, Diffusion, Utilization, Vol. 8 No. 2, December 1986, 303-322

 There is today an increasing consciousness that our technology has, in enough cases to worry us, outstripped the ability of many organizations and individuals to make productive use of it. In almost any scientific field one cares to mention -- from agriculture to robotics, computing to genetic engineering -- the refrain of practitioners is the same: "We know so much -- why can't we get people to use it right?" The degree of frustration and uncertainty surrounding the effects of technology on society generally has reached serious proportions for both technology developers and users. And yet it is also clear that there is a substantial body of knowledge, both theoretical and practical, that bears on precisely this set of issues -- namely, how ideas move and become modified in the course of being used by people and groups to accomplish purposes. Why then the frustration with the applications of these ideas in the day-to-day world? Does it matter? And what might be done about it through social action? Are there generic mechanisms whereby knowledge might be moved from place to place more effectively? And would it be a good thing if there were such mechanisms? And given what we have learned in 50 years of systematic analysis, what might we as "knowledge workers" do about the problem?

It is my contention that the problem of making productive use of technology -- generically called "implementation" -- is essentially a phenomenological issue -- that is, one of understanding how people think about technology in relation to their lives and interests, and how thoughts lead to human action (Cochran, 1980). It is, after all, basically fruitless to look at technology outside of the context of human systems. Technology application is a problem only for people -- it does not bother a machine at all not to be used, or to be used as a fancy doorstop; it matters only to those who paid for it and do not get a return on their investment.

In this article I will outline first what I see as some of the basic dimensions of the technology transfer issue generally, then look at some of the implications of those dimensions for action. The theme throughout is the centrality of the problem of meaning in technology utilization, and how we can use the phenomenological viewpoint to organize commonly recognized problems in diffusion, technology transfer, and implementation. There is a long and rich tradition of analysis revolving around these social issues, and my purpose is less to add entirely new insights to this tradition than to suggest how certain recurrent themes in the literature -- both theoretical and "wisdom" -- can be used to scientific and practical advantage. Helping others to think and talk creatively about change requires that we think as creatively ourselves, and find the appropriate organizing vision for our knowledge. It is to these points that my final conclusions are addressed.

 

Defining "Technology Transfer"

If we are to understand how technology transfer should be conceived and understood, we have to begin with the words themselves. First, technology. The concept of technology has to be used in the broadest possible sense if it is to make any sense at all. Technology is not simply hardware or physical objects; rather, it is knowledge about the physical world and how to manipulate it for human purposes. This point is absolutely critical -- technology is essentially information. The physical objects usually regarded as "technology" are important only insofar as they embody and convey this information. At a minimum, they must encompass both the tools (sometimes physical, sometimes procedural) and the uses -- the purposes to which that tool is put (Eveland, Rogers, and Klepper, 1977). All technology is essentially behavioral; tools cannot be understood aside from the things they are used to do -- the purposes of the individuals and groups that use them. This is the essence of the "sociotechnical system" concept (Taylor, 1975; Cherns, 1976).

Both tools and uses are defined at varying levels of abstraction -- "hammers," "computers," "hybrid corn," and "flexible manufacturing systems" all can refer to extremely generic concepts, or to highly specific objects and procedures, or to a vast range in between. Choosing an appropriate degree of specificity is critical to the technology implementation process. Over time, uses help define tools and tools help define uses, iteratively (Pelz, 1982).

The critical dimension is finding an appropriate level of specificity at which to define facilitation -- that is, how does this tool help me do something valuable? Technology transfer depends critically on facilitation -- if what you seek to transfer does not facilitate the achievement of goals, you are not likely to succeed (Bikson et al., 1985). As we note later, goals for technology may be defined in many different ways and from the point of view of many diverse groups and individuals. A consequence valued by one person may be a disaster from the viewpoint of another. Understanding how the utilities of technology as seen from many diverse points of view interact with the implementation process is one of the major advantages of a phenomenological approach to technology transfer.

The term transfer is also problematical. Since technology is essentially information, "transfer" is essentially communication of information -- both within individuals and groups and between them -- and the use of that information in the recipient system (Bikson et al., 1984).1 Technology transfer, accordingly, cannot be understood out of the context of technological change or innovation (Eveland, 1979). The term transfer tends to encourage a focus on physical relocation. But the movement of physical objects from one place to another is meaningless unless the recipient does something with that object and the information it embodies; "utilization" is both the target and the test of the process (Larsen, 1980). Concentrating on the transaction itself rather than on what happens as a result of the transaction is a notable shortcoming in technology transfer as it is currently practiced (if not in the conceptual literature itself).

Technology transfer is in large measure an exercise in the use of language to communicate, and an appreciation of the role that language plays in leading to individual, organizational, and social action is essential (McHugh, 1968; Blumer, 1969). Looking at how language functions in technology transfer requires looking at how people, either individuals or groups, understand new things. Such understanding is substantially a process of metaphor formation -- that is, understanding how the new thing is both like and unlike things already familiar (Bandler and Grinder, 1975; Lakoff and Johnson, 1980). Each metaphor carries with it a set of affective and substantive associations that for good or ill carry over to the new thing (Meyer, 1982).

Different metaphors create different responses. Consider the personal computer during its introduction to an organization that has not had such tools before. Three commonly used metaphors for such computers are "typewriter," "calculator," and "terminal." Seeing PCs as typewriters implies one-to-one access, usually by secretaries, on desks or in typing pools ("WP Centers") with relatively little consultation by system engineers with those who use them except about aesthetics or ergonomics. The "calculator" metaphor implies that the tools will be used one-on-one in professional offices, with choices about both equipment and usage left largely to the individuals. Others see PCs as "terminals", an approach that implies they should be scattered around, spaced roughly equally apart, for open use by anyone who wanders by. None of these metaphors is precisely wrong -- but each tends to limit the choices of users in critical ways (Englebart, 1982).

"Myths" are sets of metaphors used for explanation in circumstances where empirical evidence is lacking. They help with sense making while such experience is accumulating. Eventually, myths are gradually confirmed or disconfirmed. Thus, metaphors are continually changing, usually in the direction of more specificity -- for instance, what kind of typewriter, or calculator, or whatever. Once you have decided what something is, it is often difficult to go back and decide it is really something else. This is true for both physical tools and social roles. Eventually objects and practices become their own things, and serve as the basis for subsequent metaphors for new ideas and objects (while, of course, retaining their own metaphors) -- that is, they become familiar constructs whose meaning is generally assumed to be shared and not generally discussed. For many organizations now, a PC is just a PC -- and we compare, for example, shared-logic systems to PCs in an attempt to understand their meaning.2

Sharing information among people (and organizations) requires that all be operating on somewhat the same general level of abstraction, and be using something like the same variety of metaphors. It does not require perfect information, or precise specificity, to be effective -- sometimes ambiguity and generality can be very effective, particularly when one does not know just what sorts of metaphors an information recipient is applying. This is a lesson known to all good salesmen, but only latterly has it been understood equally well by the research community (Havelock, 1973).

In some critical ways, therefore, the term "technology transfer" is an unfortunate one -- almost as unfortunate as "diffusion," which also gets applied to these phenomena. Both terms have the disadvantage of erroneous metaphorical connotations. Speaking of "technology" tends to lead us to focus on the hardware, the physical object involved, which is, as I noted earlier, almost the smallest part of the question. The term focuses our attention away from the behavioral dimensions of tools and their interactions with human purposes. "Transfer" emphasizes the movement of physical objects from one place to another, with the implication that the object moved is the same at the beginning and at the end. "Diffusion" is even worse; it implies some sort of anonymous and inexorable physical process of spreading something across the landscape, rather like a disease.3 If we as analysts persist in using terms whose connotations are directly opposite from what we wish to convey, we cannot really blame an audience of practitioners trying to apply the concepts for drawing the wrong conclusions.

Our understanding of technology transfer systems was shaped in this awkward way through a perfectly reasonable and logical chain of events. Like many parts of behavioral science, the "diffusion of innovations" started out as a real-world problem, and only later turned into a field of study (Rogers, 1983). The original problem was simple market research, in this case how to sell hybrid seed corn. In the course of finding out that what farmers thought about corn really did affect what they decided to do about it, Ryan and Gross (1943) and their followers also formulated a set of categories and models that soon came to be seen as generalizable. Generalizing came first from individuals to organizations, then to a lot of other situations -- first fluoridation and health practices (Becker, 1970), then school programs, public works, and social policies (Feller and Menzel, 1976; Bingham, 1976; Berman and McLaughlin, 1977; Lambright, 1980), and recently computers and related tools (Johnson et al., 1985). The number of such studies is now incalculable, and there is a well-established "literature" in the field (Doctors and Stubbart, 1979). What is less clear is how deeply the best ideas in this field have penetrated into the applications literature, still less into field practice in transfer.

Likewise, the practice of innovation diffusion was critically shaped by marketing. It is impossible for anyone to speak ten words about diffusion without two of them being "agricultural extension." Expectations about what technology transfer systems should and should not do and look like have, for good or ill, been critically shaped by our understanding of that program, its practices and its effects (Rogers, et al., 1976; Feller et al., 1985). In many ways, it constitutes the defining metaphor for all technology transfer efforts. I will not attempt here to define or describe all its features -- only to note that what extension really is is virtually impossible to untangle from all the things people think it is or should be. Untangling extension-as-an-organization from extension-as-a-concept is more readily accomplished in the literature than in the field.

This point is evident when one looks at how agricultural extension served as the basis for a large number of Federal programs in the 1970s aimed at replicating extension's success in other technical areas (Roessner, 1975; FCCSET, 1977). Agencies such as NASA (Chakrabarti and Rubenstein, 1976), the Department of Defense (Hetzner and Rubenstein, 1971), the Office of Education, and the National Institute of Justice (Blakely et al., 1983), among others, all started "diffusion" programs aimed at industry or governmental users of technology. There are ebbs and flows in these movements; lately, direct transfer efforts appear to have been overshadowed by an emphasis on "university-industry cooperative relationships" and the various approaches to this goal (Eveland and Hetzner, 1982). While transfer remains a significant political symbol, it is clear that its content has shifted and will continue to shift considerably.

In summary, each phase of the development of the field of technology transfer -- both conceptual and practical -- has contributed new insights and complexities that have enriched subsequent developments. But there has been a consistent tendency to focus on the content of the change rather than on the meaning of the change for those who changed. If one's research is being sponsored by seed companies, it is reasonable to concentrate on the seed as the central focus -- but it is fundamentally limiting to let the meaning of the seed for users be defined entirely by the meaning as perceived by developers/sellers. Only by taking an oblique look at the problem -- from the point of view of the recipient systems -- are we likely to be able to take our understanding of technological innovation to its next productive stages (Havelock and Eveland, 1985).

 

Generic Problems with Understanding Technology Transfer

Having gone to great lengths to define what I think "technology" and "technology transfer" really are, it is now appropriate to consider what we might do with this formulation. In the remainder of this article I sketch some of the generic factors of the context of technology transfer that make the formulation and application of generic models of process rather problematical, and then suggest some principles that might guide us to a new and more effective formulation of the issues involved.

Let us begin with two main sets of problems/issues that complicate understanding of technology transfer -- cross-sectional problems (those posed by context and organization) and dynamic problems (those posed by processes evolving over time).

Problems of context/organization. The first problem is that of deciding what the technology really is. A great deal of the conceptual history of diffusion research was focused on the development of lists of "innovation characteristics," aimed at defining the "adaptability" of different technologies (Tornatzky and Klein, 1982). We would only understand enough about technologies, it was felt, if we could predict efficiently where and by whom they would (or at least should) be used.

By the mid-1970s, we had come to see that this approach was terminally complicated by differences in perceptions, or, in the language used earlier here, by varying metaphors for the new ideas (Downs and Mohr, 1977). This became particularly apparent when the innovations under study were "social technologies" such as educational or social programs (Larsen, 1982). One way around this was to conceive of innovations as sets of specific "elements," bundled in various ways -- like a car that can be bought in any number of different configurations (Hall and Loucks, 1977). The more specific these elements are, the better chance they stand of being "transferred" in some form recognizable to their original definer (Blakely et al., 1983). While this approach makes the job easier for the analyst, it does little to resolve problems for the user.

The second set of issues revolves around the fact that most technological innovations of any interest are embedded in organizational contexts (Chakrabarti, 1973). Each change has repercussions for the whole system, "ripple effects" across both space and time moderated by the degree of "coupling" of the system but always present to varying degrees. Understanding how different parts of the system are interdependent can help a lot in accounting for unplanned and unanticipated effects, which can be both positive and negative. Often when we fail to understand such interdependencies, we sub-optimize a system, making one part work a lot better and others work a lot worse. The degree to which this is satisfactory depends on whether you are talking to someone in charge of the first part of the system, to someone in charge of the others, or to someone who has to balance the interests of the whole system.

As noted earlier, the choice of an appropriate level of aggregation to look at organizational behavior is a key issue both analytically and practically (Roberts et al., 1978). Organizations are at bottom made up of individuals, who are at best "partially included" in the organizational system -- that is, they participate in many other systems as well, and must relate what they do in one system to what they do in another to maintain some degree of personal integration. When we speak of an "organization's behavior" we sometimes lose sight of the fact that such behavior -- however useful as an analytical construct -- is a composite average of the behavior of lots of individuals, each acting out of their own context and responding to their own imperatives and interests. Ultimately, technology transfer is a function of what individuals think -- because what they do depends on those thoughts, feelings, and interests. Choosing a higher level of aggregation to look at transfer phenomena can sometimes obscure this key concern.

Understanding the interplay of individual and collective judgments about costs, benefits, and behavior is essentially the dimension of organizational politics. I use this word here in its relatively strict sense to refer to the interactions of interests among parties in a relationship (Weiss, 1973; Benson, 1975). Commitment to goals in a social group is always relative -- people embrace goals with positive consequences to themselves with considerably more fervor than they do goals whose payoffs are more personally tenuous. The problem is complicated by culturally induced embarrassment in talking about values and value conflicts; but such issues do not go away simply because we avoid them.

Any technological change -- indeed, any technology at all -- involves an unequal distribution of these costs and benefits in the system; some people must pay the costs, and others receive the benefits. If all the costs are to be paid by the lower hierarchical levels of the system and all benefits appropriated by the upper levels, "resistance to change" is not merely understandable but positively rational (Mechanic, 1962). The problem should rather be phrased as a question of why some should make a change -- that is, what is in it for them. As I noted earlier, research tends to confirm that functionality is a critical determinant of the acceptance of new technology; people do things that reward them. Any analysis of technological change that does not address cost/benefit distribution explicitly, or that allows the costs and benefits to be defined according to the perspective of only a limited part of the participants, will be fundamentally misleading.

Since all organizations have a range of purposes, they also have reasons why those purposes have not been reached -- the set of things they define as "problems" (Walker, 1974). This agenda is a constantly shifting set, redefined as circumstances change. Innovation, as a part of the general system demand for adaptability, is only one of the system problems to be addressed -- others include integration, coordination, and the achievement of output. In fact, most organizational decisions have very little to do with technology as such, but with things like finance, personnel, scheduling, and resource management. That is sometimes hard for a change agent (or even someone researching change) to appreciate. No one else takes your changes as seriously as you do. On the other hand, you do not take the organization's problems as seriously as it does. Eventually, the interaction balances out.

Culture has recently become a word in organizational analysis with many diverse meanings. The term is more an umbrella concept for looking anew at a range of social phenomena that have previously been looked at atomistically than it is a brand-new insight, but it is none the less valuable for that. Essentially, the concept is a way of stating that the shared meanings, remembrances, patterns of activity, and particularly expectations about what other people will do in organizations really matter to what takes place. The idea is a somewhat broader system understanding of the concept of "role" -- if you will, an anthropologist's view of the relationships rather than a sociologist's or a psychologist's.

Technology affects culture dynamically. For example, as we noted earlier, personal computers have a wide variety of potential meanings to those who use them, meanings that change over time. These meanings are part of organizational culture, and both are shaped by it and shape it in turn as they evolve through experience. Consider a hierarchical, controlled organization introducing PCs -- a potentially anarchic, "power-to-the-user" situation. Such organizations often respond with elaborate control systems, sets of passwords, procedures for controlling access to disks, and the like. The results are often circumvention of the rules, frustration on the part of managers, and general failure to achieve the promised benefits of the technology. Sometimes this just produces paralysis; sometimes it can lead to a new culture more adapted to being able to use the technology, as, for example, professionals begin to keyboard their own work and clerical personnel are freed for more valuable and productive tasks. Sometimes there is a "synthesis" in which old patterns are reinterpreted in light of new conditions, such as is evident in the recent trend for data-processing managers to reassert control over stand-alone computing equipment. The point is that lots of different outcomes are possible, but no one outcome is necessary.

Over time, cultures and patterns of technology usage both change; new information based on experience is incorporated into the mental sets of the participants in the culture. The process almost always involves friction and costs; the degree to which those costs are worth the positive consequences of the change is a function of the change process itself as well as of the inherent features of the technology and the context. Appreciating the role that culture plays in organizations, and how culture can be dynamically shaped by the organization's own intelligent sociotechnical choices, can vastly improve the efficiency of innovation utilization (Johnson, 1985).

Problems of Process. Issues related to the staging and dynamics of implementation have intrigued researchers for a long time. It is self-evident that putting technology into place in an organization is not a matter of a single decision, but rather of a series -- usually a long one -- of linked decisions and non-decisions. People make these choices, and these choices condition future choices. While the researcher may identify one particular choice as a focal point of "adoption," he only fools himself if he believes that choice has the same meaning to the user as it does to him. A concept of the leverage exerted by some decisions over other decisions is critical to making intelligent choices about where one might intervene creatively in the process to enhance the likelihood of the consequences one desires (Hall and Nord, 1984).

Researchers have developed the idea of innovation "stages" as a way of categorizing decisions and defining how this leverage operates -- that is, seeing how some decisions of necessity precede and shape those later on. There are many different formulations of such stages; the question is not which one (if any) is "true," but what the relative utility of a particular formulation might be to you (Tornatzky et al., 1983). One basic difference in frameworks relates to whether you prefer to focus on the content of decisions (such as the technology itself) or on the nature of the action being taken by the system. These different approaches lead to somewhat different ways of categorizing behavior. While the same general phenomena are under discussion in each model, the categories tend to highlight rather different focal issues.

Two Views of Innovation

The action-centered approach essentially considers change as a process of gradually shaping a general idea, which can mean lots of different things to different people, into a specific idea that most people understand to mean more or less the same thing. Five general stages or categories of decisions to be made in sequence can be distinguished:

- agenda setting,
- matching,
- redefining,
- structuring,
- interconnecting.

The first stage is one of establishing the "agenda of problems and solutions," a set of ideas known to the system but that do not necessarily provoke the system to action directly. When a problem and a solution come together in the mind of a person or persons in the system, a "match" is made and organizational action commences. Rather than a defined "adoption" point, this model emphasizes a more or less gradual "redefinition" in which both the proposed innovation and its potential uses come to be understood in sequentially greater detail. When both "tool and use" are defined clearly enough to be communicated to others, a process of creating the organizational structure to embody the innovation can begin. When the structure is generally understood, it can be interconnected to other parts of the system as its relationships to them become clear. The whole process, in these terms, is one of defining the innovation in successively greater detail, distinguishing both what it is and what it is not.

Regardless of what focus is taken for defining stages, they cannot be stretched too far out of shape; nor can they be anticipated in great detail before they take place. The principal value of stage models of any sort lies in helping the analyst and the change agent to understand that he or she can encompass or affect only a relatively small part of the process at any given time. Analytical humility is generally to be encouraged.

A second dynamic problem relates to the assessment of effects of technological change. First, whose criteria are to shape decisions? As I have noted, organizations are made up of multiple people (and aggregates of people), and therefore multiple criteria -- ways of evaluating outcomes based on goals -- are the rule in complex decision sequences (Mintzberg, Raisinghani, and Theoret, 1976; Nutt, 1984). Multiple criteria can affect even individual decisions aimed at a similar purpose. Sometimes such complex decision criteria are compatible with allowing "win-win" solutions to be formulated; sometimes situations are truly zero-sum, and someone has to lose (Quinn and Rohrbaugh, 1981). Moreover, criteria change in salience and applicability over time (Prien, 1966; Kimberly and Miles, 1980). In any event, the problem of multiple criteria of assessment is the dynamic counterpart of the problem posed by the political nature of organizations described above.

A related time-based issue is that of horizons -- when do you choose to make your evaluation of outcomes, given that there is never any well-defined end-point to a change process? Short-term and long-term criteria are both appropriate (and used) depending on the perspective of the analyst and his or her interests (Hayes and Abernathy, 1985). Reinterpretation of past results is a constant phenomenon, as new information about decision consequences remote in space and time becomes available. Again, there is no single answer about what "true" outcomes are, only the need to remember that the issue cannot be unequivocally resolved, either by the participants in the process or by the analyst. This does not mean that perceptions about consequences do not or should not shape decision making, only that such perceptions should not be "reified" beyond their limits.

The Bottom Line

Where does all this leave us in our quest for efficient and effective ways to increase the utility of knowledge transfer research for organizational and social management? In some ways, it is easy to feel that we almost know less than we did 30 years ago; at least, we are probably a good deal less certain of what we do know than we used to be. A more realistic assessment is that we are a good deal more conscious now of just what are the limitations on the utility or prescribability of any particular analytical paradigm or organizational model. The more we study the technological innovation processes that underlie technology transfer, the more complex and contingent they seem, and the less clear it is that any model, regardless of its sophistication, can adequately represent more than a small part of the whole range of processes of interest to us. Even agricultural extension has proved singularly inapplicable to most other situations -- and perhaps even, today, to agriculture.

What I would like to suggest here is a set of propositions that must underlie any effective approach to understanding technological change, regardless of context or content. Any administrative system that we create to distribute and apply knowledge must take these principles into account or fail.

First, technological change is a process without beginning or end. Individual people and tools and purposes come and go, but the sequence is iterative and evolutionary, and linear patterns are always artificial constructs generated by the analyst (Eveland, 1979). If the working model for organizational research is the novel, a model with a clear starting point, defined characters (variables), a plot (the model), and an ending (dependent variables), the working model of organizational life must be the soap opera, where characters come and go, their roles are constantly changing and being reinterpreted, and what seems good today is bad tomorrow and good again day after tomorrow.

Like the characters, the technology is constantly subject to modification and reinterpretation. "Routinization" of technology takes place only in the sense that one tends after a time to forget that one ever thought of a particular tool as "new," given all the other new things that have come along in the meantime. As we noted earlier, over time even a technology as unusual and even shocking as personal computers becomes accepted and even ignored; the keyboard is today as ubiquitous and unremarkable as the telephone, and this in barely five years. But technology is never "routine" to the point that it is not subject to change and modification. If we aim our efforts at routinization, we are likely to damn ourselves with success. Organizations that carefully implement state-of-the-art computer systems tend to have a great deal of difficulty taking advantage of changing technology; they have too many "sunk costs" in the old systems (Bikson et al., 1985). It is well to remember that every old, outdated, ossified tool or practice in any organization was once an "innovation" that got "routinized" all too well. We would do well to remember this in our zeal to fasten new things on organizations.

Second, the context of change is vitally important. Because organizations are systems, any action or choice has repercussions across both space and time, and even across the borders of the systems we are trying to affect. Members are (sometimes) aware of these repercussions; a change agent or salesperson must be equally so. The organization's culture and its connections with the rest of the world provide the context within which all external messages -- including those dealing with technological change -- get filtered and interpreted. Meaning must of necessity be generated internally by people; only in the most general terms can it be supplied by an external source.

The one thing we have rather conclusively demonstrated in the course of 20 years of public programs intended to promote technological change -- in fact, through the long years of agricultural extension as well -- is that one cannot pay people enough, long enough, to get them to do things or use tools that do not have intrinsic worth and value to them. "Incentives" that do not institutionalize a clear long-term yield have only short-term effects. While one can, through "demonstration programs" or other subsidy mechanisms, induce the temporary use of a technique or policy, it will not outlast the subsidy unless it becomes structured as part of the system and interconnected to it in multiple ways, precisely because it provides such value. External sources cannot supply that value; it must be value to those who practice it. This is one of the hardest lessons all change agents must come to terms with. It implies that change agents must concentrate far more attention on how people think about the change than on what actually changes.

Third, what matters most to organizations, whether they realize it or not, is process, not technological content. From the point of view of a given organization, the key problem is less choosing and implementing the "right" technology than developing and putting into place a procedural set for making technology choices intelligently. Computers are today perhaps the most extreme example of a technological area where no single choice remains valid indefinitely; those organizations that cope well with computer technology are those where the system has the capacity to remain experimental (Johnson et al., 1985). What organizations need is to encourage continuous learning about technology and sociotechnical interactions on the part of their members, and to maintain and use that learning without being paralyzed by it. Remembering too much, after all, can create so many metaphors that the system can never work through to an understanding of the change itself.

An organization that understands the strategic nature of innovation choices, and can approach the process systematically rather than as a series of individual and discrete decisions, will always have an advantage. A technology transfer system that can facilitate change processes, rather than sell specific technologies, is one that will have long-term success.

Finally, the purpose of innovation/diffusion research is not to prescribe but to raise consciousness. To the extent that research can help organizations understand that they have the power to make good choices, and help them understand the implications of those choices, it will contribute to social goods. To the extent that research creates new and better ways to manipulate individuals and organizations into adopting other people's views of what is a "good thing," it will contribute by contrast to a dissolution of social progress. I realize that this may be a difficult point to swallow for those who legitimately believe they have a "good thing" other people really need -- a group that includes most of the "true believers" in technological and social innovation. On balance, however, we are all likely to be better off by encouraging the development of the capacity for effective and purposive internalized self-directed evolution and control than by relying on any "diffusion system" to overcome the shortcomings of organizational and individual change processes. As Peters and Waterman (1983) tell us, one of the key lessons their "excellent companies" have all learned is to appreciate the validity of their customers' needs and understanding of those needs. Surely public mechanisms for "technology transfer" can do as much.

 

Notes

  1. "Information" is usually defined as something that reduces uncertainty about the world (Miller, 1965). In fact, technology information not infrequently increases uncertainty about applications as it expands the horizons of the possible. Uncertainty should not be confounded with ignorance.
  2. This may or may not be a good thing. In fact, as we note later, one of the major failings in many technology implementation processes is a tendency to assume that meanings are shared without exploring them. This leads almost inevitably to confusion, frustration, and costs beyond what needs to be incurred.
  3. In fact, classical diffusion models are in practice largely indistinguishable from epidemiological models in terms of parameters and underlying dynamics (Hamblin et al., 1973).
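The claim in note 3 can be illustrated with the standard textbook forms (a sketch, not drawn from this article). In the classical model, adoption spreads through contact between adopters and nonadopters, so the cumulative number of adopters N(t) in a population of K potential adopters, with contact rate b, follows the logistic equation

```latex
\frac{dN}{dt} = b\,N\!\left(1 - \frac{N}{K}\right),
\qquad
N(t) = \frac{K}{\,1 + \dfrac{K - N_0}{N_0}\,e^{-bt}\,}
```

This is term for term the susceptible-infective (SI) epidemic equation, with infectives in place of adopters and transmission rate in place of contact rate; the familiar S-shaped adoption curve is simply the epidemic curve.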

 

References

BANDLER, R. and J. GRINDER (1975) The Structure of Magic. Palo Alto, CA: Science and Behavior Books.

BECKER, M. H. (1970) "Sociometric location and innovativeness: reformulation and extension of the diffusion model." Amer. Soc. Rev. 35: 267-282.

BENSON, J. K. (1975) "The interorganizational network as a political economy." Admin. Sci. Q. 20: 229-249.

BERMAN, P. and M. W. McLAUGHLIN (1977) Federal Programs Supporting Educational Change: Factors Affecting Implementation and Continuation. Santa Monica, CA: Rand Corporation (R-1589/7-H).

BIKSON, T. K., B. A. GUTEK, and D. MANKIN (1985) Understanding the Implementation of Office Technology. Santa Monica, CA: Rand Corporation.

BIKSON, T. K., B. F. QUINT, and L. L. JOHNSON (1984) Scientific and Technical Information Transfer: Issues and Options. Santa Monica, CA: Rand Corporation, Report to NSF, Grant #N-2131-NSF.

BINGHAM, R. D. (1976) The Adoption of Innovation by Local Government. Lexington, MA: Lexington.

BLAKELY, C., J. MAYER, R. GOTTSCHALK, D. ROITMAN, N. SCHMITT, and DAVIDSON (1983) Salient Process in the Dissemination of Social Technology. National Science Foundation Grant #ISI-7920576-01.

BLUMER, H. (1969) Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice-Hall.

CHAKRABARTI, A. K. (1973) "Some concepts of technology transfer: adoption of innovations in organizational context." R&D Management 3: 111-130.

CHAKRABARTI, A. K. and A. H. RUBENSTEIN (1976) "Interorganizational transfer of technology: a study of the adoption of NASA innovations." IEEE Transactions on Engineering Management EM-23: 20-34.

CHERNS, A. B. (1976) "The principles of organizational design." Human Relations 29: 783-792.

COCHRAN, N. (1980) "Society as emergent and more than rational: an essay on the inappropriateness of program evaluation." Policy Sciences 12: 113-129.

DOCTORS, S. I. and C. STUBBART (1979) A Review of the Research Literature on Technology Transfer. Working Paper WP-344, Pittsburgh, PA: Graduate School of Business, University of Pittsburgh.

DOWNS, G. and L. B. MOHR (1977) Toward a Theory of Innovation. IPPS Discussion Paper No. 92, University of Michigan.

ENGELBART, D. C. (1982) "Evolving the organization of the future: a point of view," pp. 287-307 in R. M. Landau, J. H. Bair, and J. H. Siegman (eds.) Emerging Office Systems. Norwood, NJ: Ablex Publishing Corp.

EVELAND, J. D. (1979) "Issues in using the concept of 'adoption' of innovations." J. of Technology Transfer 4, 1: 1-14.

EVELAND, J. D. (1981) "Implementation: the new focus of technology transfer," in S. I. Doctors (ed.) Issues in State and Local Government Technology Transfer. Cambridge, MA: Oelgeschlager, Gunn, and Hain.

EVELAND, J. D., E. M. ROGERS, and C. M. KLEPPER (1977) The Innovation Process in Public Organizations: Some Elements of a Preliminary Model. Ann Arbor, MI: University of Michigan, Report to the National Science Foundation, Grant No. R75-17952.

EVELAND, J. D., L. G. TORNATZKY, W. A. HETZNER, A. SCHWARZKOPF, and R. COLTON (1983) "University/industry cooperative research centers." Grants Magazine.

Federal Coordinating Council for Science, Engineering and Technology (1977) Directory of Federal Technology Transfer. Washington, DC: Government Printing Office.

FELLER, I., L. KALTREIDER, P. MADDEN, D. MOORE, and L. SIMS (1985) The Agricultural Technology Delivery System. University Park, PA: Institute for Policy Research and Evaluation, Pennsylvania State University. Report to USDA, Contract 53-32R6-1-55.

FELLER, I. and D. C. MENZEL (1976) Diffusion of Innovations in Municipal Governments. University Park, PA: Penn State University, Center for the Study of Science Policy, Report to NSF, Grant No. RDA-44350.

HALL, G. E. and S. F. LOUCKS (1977) "A developmental model for determining whether the treatment is actually implemented." Amer. Educational Research J. 14, 3: 263-276.

HAMBLIN, R. L. et al. (1973) A Mathematical Theory of Social Change. New York: Wiley-Interscience.

HAVELOCK, R. G. (1973) Planning for Innovation through Dissemination and Utilization of Knowledge. Ann Arbor, MI: Center for Research on Utilization of Scientific Knowledge, University of Michigan.

HAVELOCK, R. G. and J. D. EVELAND (1985) "Change agents and the role of the linker in technology transfer," pp. 35-56 in Proceedings of the Federal Laboratory Consortium for Technology Transfer Fall Meeting. Seattle, WA.

HAYES, R. H. and W. J. ABERNATHY (1980) "Managing our way to economic decline." Harvard Business Rev. 58 (July-August): 67-77.

HETZNER, W. A. and A. H. RUBENSTEIN (1971) An Analysis of Factors Influencing the Transfer of Technology from DOD Laboratories to State and Local Agencies. Army Research Office: Program of Research on the Management of Research and Development.

JERVIS, P. (1975) "Innovation and technology transfer -- the roles and characteristics of individuals." IEEE Transactions on Engineering Management EM-22: 19-27.

JOHNSON, B. M. et al. (1985) Innovation in Office Systems Implementation. University of Oklahoma: Report to National Science Foundation, Productivity Improvement Research Section.

KIMBERLY, J. R. and R. H. MILES (eds.) (1980) The Organizational Life Cycle. San Francisco: Jossey-Bass.

KRAEMER, K. L. and J. L. KING (1979b) "Problems of operations research technology transfer to the urban sector." Presented to the American Society for Public Administration, Baltimore.

LAKOFF, G. and M. JOHNSON (1980) Metaphors We Live By. Chicago: Univ. of Chicago Press.

LAMBRIGHT, W. H. (1980) Technology Transfer to Cities. Boulder, CO: Westview.

LARSEN, J. K. (1980) "Knowledge diffusion: what is it?" Knowledge 1, 3: 421 -- 422.

LARSEN, J. K. (1982) Information Utilization and Non-Utilization. Mental Health Services Development Branch, NIMH Grant #25121, American Institutes for Research in the Behavioral Sciences.

McHUGH, P. (1968) Defining the Situation. Indianapolis: Bobbs-Merrill.

MECHANIC, D. (1962) "Sources of power of lower participants in complex organizations." Admin. Sci. Q. 7: 349-364.

MEYER, A. D. (1982) "Mingling decision-making metaphors." Milwaukee: University of Wisconsin-Milwaukee, School of Business Administration, Working Paper.

MILLER, J. G. (1965) "Living systems: the organization." Behavioral Sci. 10: 193-237.

MINTZBERG, H., D. RAISINGHANI, and A. THEORET (1976) "The structure of 'unstructured' decision processes." Admin. Sci. Q. 21: 246-275.

National Academy of Engineering (1974) Technology Transfer and Utilization: Recommendations for Redirecting the Emphasis and Correcting the Imbalance. Washington, DC: Academy Report No. PB-232 123.

NUTT, P. C. (1984) "Types of organizational decision processes." Admin. Sci. Q. 29, 3: 414-450.

PELZ, D. C. (1982) Use of Information in Innovating Processes by Local Governments. Ann Arbor, MI: University of Michigan, Report to the National Science Foundation, Grant No. ISI-79-20575.

PETERS, T. J. and R. H. WATERMAN (1983) In Search of Excellence. New York: Harper and Row.

PRIEN, E. P. (1966) "Dynamic character of criteria: organization change." J. of Applied Psychology 50: 501-504.

QUINN, R. E. and J. ROHRBAUGH (1981) "A competing values theory of organizational effectiveness." Public Productivity Rev. 5, 2: 122-140.

ROBERTS, K. H., C. L. HULIN, and D. M. ROUSSEAU (1978) Developing an Interdisciplinary Science of Organizations. San Francisco: Jossey -- Bass.

ROESSNER, J. D. (1975) "Federal technology transfer: an analysis of current program characteristics and practices." Washington, DC: Committee on Domestic Technology Transfer, Federal Council for Science, Engineering and Technology.

ROGERS, E. M. (1982) Diffusion of Innovations. New York: Free Press.

ROGERS, E. M., J. D. EVELAND, and A. S. BEAN (1976) Extending the Agricultural Extension Model. Stanford, CA: Institute for Communication Research, Stanford University.

RYAN, B. and N. C. GROSS (1943) "The diffusion of hybrid seed corn in two Iowa communities." Rural Sociology 8: 15-24.

TAYLOR, J. C. (1975) "The human side of work: the sociotechnical approach to work system design." Personnel Rev. 4, 3: 17-22.

TORNATZKY, L. G. and K. J. KLEIN (1982) "Innovation characteristics and innovation adoption-implementation: a meta-analysis of findings." IEEE Transactions on Engineering Management EM-29: 28-45.

TORNATZKY, L. G., J. D. EVELAND, M. G. BOYLAN, W. A. HETZNER, E. C. JOHNSON, D. ROITMAN, and J. SCHNEIDER (1983) The Process of Technological Innovation: Reviewing the Literature. National Science Foundation.

WALKER, J. L. (1974) "The diffusion of knowledge and policy change: toward a theory of agenda-setting." Presented to the American Political Science Association, Chicago.

WEISS, C. H. (1977) "Research for policy's sake: the enlightenment function of social research." Policy Analysis 3, 4: 531-546.