The problem of quality is enduring. Robert Pirsig’s 1974 book, “Zen and the Art of Motorcycle Maintenance,” has its own interesting way of analyzing the concept called “quality.” The book explores the idea of quality as an underlying dimension of things, or perhaps THE thing itself — largely indefinable but generally immediately recognizable. In Pirsig’s usage, “quality” is essentially similar to “value”, designating the dimension itself but not necessarily conveying any attached affective characteristics. The important point is much less the substance of the book than the very idea of systematically inquiring into a concept like quality — one that we all employ a thousand times a day in our decision making with nary a thought as to its nature or definition.
This inquiry into quality needs to invade our research space. You have been and are being told time and again that we have a “high quality” PhD program, that only dissertation proposals and dissertations of “high quality” will pass muster, that our faculty are all of “high quality”, and that our actual PhD degrees are printed on “high quality” paper. When you conduct your literature reviews, you are asked to assess previous research, preferably drawn from “high quality” academic journals and written by “high quality” researchers in the field. Let’s stipulate that each of these statements is in fact true. It is pretty clear that some rather different things are being implied here — despite the fact that, leaving the paper aside, they all have in common a grounding in the idea of behavioral and organizational research. It appears that the concept of “quality” in research can refer at the least to characteristics of the content of the activity, the people who carry it out, or the social context within which it occurs. If it can’t, we’re only deceiving ourselves that our program can be distinguished from anyone else’s, or that we have any basis for: 1) distinguishing a good piece of previous research that you can build your own research on from another that is not even worthy of mention in your literature review; 2) distinguishing a dissertation proposal that can proceed on to the degree from another that cannot; and 3) offering an impartial recommendation that a manuscript be accepted or rejected for publication in an academic conference proceeding or journal when you are asked to conduct a peer review.
Since the latter discriminations are exactly what the next phase of your PhD studies is all about, in this Module we spend some time thinking about just what it means operationally and conceptually to say that a research study has “high quality”, and how we tell the difference between, say, this study and that one. It is self-evident that there are in fact criteria that can be applied to differentiate among them, and the same species of criteria will be applied, to one degree or another, to determining if your dissertation proposal is acceptable, or if a manuscript is publishable. This issue of acceptability is not trivial; it can be and occasionally is the case, even at TUI, that a student can go all through PhD coursework scoring high grades, and then utterly fail to write an acceptable dissertation proposal, let alone a dissertation — and it is really heartbreaking to see it happen. The first thought is that mistakes were made — the student shouldn’t have earned the grades, or the review of the proposal draft was flawed, or the student was a victim of academic politics. Possibly. But further thought will show that the skills and abilities necessary to do well in courses — analysis, attention to fine details and deadlines, following directions closely — are only hazily related to those required for a dissertation — synthesis, attention to the big picture, original thought, and self-direction, among others. “High quality coursework” is one thing; a “high quality dissertation” is something else. And the latter is all that matters for you from here on out. To learn how to produce “high quality” research (proposal, dissertation, manuscript for publication, and so on), it will be beneficial to be exposed to how research grant agencies and academic journals evaluate the quality of research.
If you’re going to succeed in pulling together your dissertation, you’d better be able to tell good research from bad or flawed research, and to understand how your high quality product can and probably will emerge from multiple iterations that often start with something pretty awful. Being able to recognize “awful”, particularly in one’s own work, and being willing to listen to advice about how it can be made better, are capabilities critical to the research process — and neither comes easily or naturally to most students (or faculty, if the truth were told). Increasing your comfort level with judging research and having yours judged in return, and with accepting advice and feedback toward product improvement with grace and thanks instead of resentment, is what this Module is all about. Get through this one, internalizing what it’s all about, and you’ll be well on your way. Slough this one off and miss the point, and you’re setting yourself up for a lot of grief.
One relatively minor area in which quality matters is the manner of presentation. While it is certainly not true that papers that are well formatted, properly spelled, and laid out according to the dictates of APA style are inevitably high-quality products, it is almost certainly the case that papers lacking these characteristics will be judged as significantly lower in quality. Clothes may not make the man, but poor clothes almost certainly unmake him, and the same is true for intellectual products. APA style may be arcane and sometimes apparently pointless, but it really doesn’t take all that much effort to apply carefully, and that application in itself shows a commitment to the quality of the product. We have supplied resources to help you master the art of APA style, and if you still feel uncertain about it, we urge you to take advantage of these resources before the end of this term, since the dissertation domain is far less forgiving of errors in style. Like much else in the quality realm, attention to format and appearance is as much about respect for your audience as it is about the substance of the material. Behind the first layer of format and appearance, your audience will also draw inferences about the quality of your research from the rigor and elegance of your writing.
Regarding the quality of your own research, ultimately, the only judgment that really matters is your judgment of yourself. You know when you have done well even if that assessment is not universally shared; on the other hand, you also know when you have done a less than stellar job. Under those circumstances, even rave reviews are less than satisfying, and justified criticism can be really seriously wounding, the more so the more accurate you know it to be. It’s not too hard to take negative feedback if you view it as a way of making something that is good, better. The more academic work you do, the more you’re likely to have experienced both varieties of feedback about quality. And there will be a lot more feedback of all sorts before you get done.
One of the hallmarks of quality research is verisimilitude — that is, the appearance that the study really reflects the situation studied. This is the global equivalent of face validity — that most basic element of validity that says the study makes sense. If it doesn’t make any sense, then it really doesn’t matter how sophisticated the structural equation modeling might be, or how elegant the sampling frame. Our judgments about verisimilitude are made primarily on the basis of our own experiences, augmented by training and reinforcement. They’re generally forgiving judgments; that is, you have to be pretty dramatically off base to be noticed. But once you’ve lost your basic credibility, you never get it back, at least with that audience. This is why it is important to frame your own studies in a common-sense as well as a research-based context.
So why multiple levels? What is there about organizational research that compels us to this fairly complicated kind of approach? The answer is, in a word, verisimilitude. Our own experiences of organizations, in which we’ve been embedded sometimes consciously and sometimes unconsciously for virtually our whole lifetimes, have taken place on many levels simultaneously. Even as we plan our individual careers and advancement, we know that we have to take into consideration the people with whom we work (group), the overall policies of the system (organization), and the social and economic environment (industry/society). To try to abstract one of these levels out and plan our career by that alone would be ruinous. The fact is, we can think quite easily about one level while holding the others in the back of our head or at worst in our scratch notes.
But formal research doesn’t have the luxury of holding things back; it has to embrace everything in more or less the same fashion. Thus, when we are confronted with research that does not address the multilevel nature of organizational phenomena, it is at best vaguely unsatisfying and at worst credibility-destroying. This doesn’t mean that each and every study — yours included — has to implement the full rigors of a multilevel approach, but it does mean that in some way the research must acknowledge the complexities of organizational life and the degree to which different kinds of aggregations of people, systems, and tools condition and affect how we act and react.
So how do we as researchers move toward reflecting this multi-level organizational context? First, by being clear about what we mean by “level”. Different organizations are likely to employ different names for the levels within their hierarchies, but they all generally fall into categories based on size. For researchers, the key aspect is “level of analysis” — a term simply referring to the units from which data are collected and their position relative to a generalized hierarchy, either official or imputed according to the size of the unit. For research purposes, we typically distinguish at least between the individual level of analysis, the group level, the organizational level, and the sector or industry level. Obviously, there are finer distinctions to be drawn and we often find ourselves in a position of having to draw them. That is, we may have data on individuals, gained through a survey, that we then aggregate to perhaps a workgroup level, then to a division level, into a plant level, before we reach the level of the organization. Levels of analysis pose problems that are both conceptual and methodological; each time we aggregate data to produce new composite variables, we introduce error, misattribution, and potentially even misrepresentation of the meaning of the data. In this module, we examine some of the problems posed when there are multiple levels of analysis in research, and explore some of the potential solutions.
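To make the mechanics of aggregation concrete, here is a minimal Python sketch (the survey data, scores, and unit names are entirely hypothetical) that collapses individual responses up to the workgroup and division levels, and shows what each composite hides:

```python
from statistics import mean, pstdev

# Hypothetical survey: each record is one individual's satisfaction score,
# tagged with the units s/he is nested in (workgroup -> division -> plant).
responses = [
    {"plant": "P1", "division": "D1", "workgroup": "W1", "score": 4.0},
    {"plant": "P1", "division": "D1", "workgroup": "W1", "score": 2.0},
    {"plant": "P1", "division": "D1", "workgroup": "W2", "score": 3.0},
    {"plant": "P1", "division": "D2", "workgroup": "W3", "score": 5.0},
    {"plant": "P1", "division": "D2", "workgroup": "W3", "score": 1.0},
]

def aggregate(records, level):
    """Collapse individual records to mean scores at the given level."""
    groups = {}
    for r in records:
        groups.setdefault(r[level], []).append(r["score"])
    return {unit: mean(scores) for unit, scores in groups.items()}

workgroup_means = aggregate(responses, "workgroup")
division_means = aggregate(responses, "division")

print(workgroup_means)  # {'W1': 3.0, 'W2': 3.0, 'W3': 3.0}
print(division_means)   # {'D1': 3.0, 'D2': 3.0}
# W3's members disagree sharply (scores 5.0 and 1.0, spread pstdev = 2.0),
# yet its composite mean is indistinguishable from W1's and W2's.
print(pstdev([5.0, 1.0]))  # 2.0
```

Every unit here ends up with the same composite mean of 3.0, even though W3’s members disagree far more than W1’s; that within-unit spread is exactly the information the aggregate discards, and it is one source of the error and misattribution described above.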
Make no mistake; you cannot escape issues of levels of analysis in your research. You will be asked about them, and you will be expected to be explicit about your choice and why you believe that the level you have chosen is the appropriate one at which to form conclusions related to your research questions. It is in fact highly probable that you will need to use some variety of multi-level model for at least part of your work, and these models are particularly difficult to specify and then to analyze. So pay particular attention to this module, and you will come to, if not exactly a sense of security about levels of analysis, at least an awareness that the problem exists and that there are ways around it. It is precisely these issues of levels of analysis that make organizational research distinct from other kinds of work. Therefore, it is incumbent upon us to honor and embrace these problems, not try to sweep them away.
Robert MacCallum, a professor of organizational psychology at Ohio State University, has neatly described the problem:
“As a final major topic of comment, I wish to turn to the problem of level of analysis. This is of course a long-standing and difficult issue in the I-O field because many research questions involve individuals functioning in groups. Difficulties arise with regard to defining variables and levels at which they should be measured (e.g., climate, leadership), as well as in defining the level/s at which the research question or theory is to be investigated. A number of important papers in recent years (Klein, Dansereau, & Hall, 1994; House, Rousseau, & Thomas-Hunt, 1995; Rousseau, 1985) have emphasized the point that most problems studied in organizational research are inherently multilevel in nature and that researchers should take this into account at all stages of research, from theory development to data collection to data analysis. Katzell (1994) identifies the study of multilevel phenomena as a meta-trend in the I-O field.
When a problem is multilevel in nature, micro or macro views will lead to misspecification of theory by ignoring the multilevel nature of the phenomena and the relevance of variables at one level to variables at another level. For example, in studying individual job performance, there are undoubtedly both individual level variables (such as motivation and ability) and group level variables (such as climate and norms) that are relevant predictors. When a theory appropriately takes into account the multilevel nature of the problem, the gathering of appropriate data is facilitated. Units can be sampled at whatever levels are relevant (e.g., sampling organizations and sampling individuals within organizations), and variables can be measured at appropriate levels. Data can then be analyzed so as to take into account the multilevel structure of both the research questions and the data. There is a considerable methodological literature about problems caused by aggregation and disaggregation of measures, thereby ignoring the multilevel structure of the data. Such procedures can yield severely biased results as well as invalid conclusions, such as the well-known ecological fallacy wherein one uses group-level data to draw conclusions about individuals.”
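The ecological fallacy mentioned in the passage above can be shown in miniature. In the following Python sketch (the numbers are contrived purely for illustration), x and y rise together inside every workgroup, yet the group means move in exactly the opposite direction; an individual-level conclusion drawn from the group-level correlation would be not merely biased but reversed in sign:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Three hypothetical workgroups; within each, x and y rise together.
groups = {
    "A": ([1, 2, 3], [11, 12, 13]),
    "B": ([4, 5, 6], [8, 9, 10]),
    "C": ([7, 8, 9], [5, 6, 7]),
}

within = [pearson(xs, ys) for xs, ys in groups.values()]
group_x = [mean(xs) for xs, _ in groups.values()]
group_y = [mean(ys) for _, ys in groups.values()]
between = pearson(group_x, group_y)

print(within)   # each correlation is (approximately) +1.0
print(between)  # (approximately) -1.0
```

The within-group correlations are all perfectly positive while the correlation of the group means is perfectly negative; which one is “the” relationship between x and y depends entirely on the level of analysis at which the question is posed.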
Given the kinds of research questions that we ask in our dissertations in business, it is virtually inevitable that we will run up against the methodological difficulties referred to here. This is a classic double-bind — our theories are in significant measure incompatible with the strictures of the research methods that we rely on to test them or otherwise accumulate evidence regarding their utility.
Organizational research, whether focused primarily at the individual level or the organizational (or higher) level, almost inevitably becomes entwined with constructs and variables that properly apply to other levels. Unless the methodological concerns are faced squarely and dealt with, analysis of these mixed-level models can lead to very misleading conclusions. The trick for side-stepping these problems is, unsurprisingly, careful specification of the model in question and application of some design and analysis approaches adapted to these situations. Easier said than done.
Attending to these concerns about levels of analysis serves at least two purposes. First, the issue is a serious problem in its own right, and a number of students have floundered over how to manage it. Second, and perhaps even more important, looking carefully at how researchers have thought about and then applied in the field some theories that are complex (many and multiple connections) even when they are simple (few variables), and managed ambiguous operationalization of constructs in creative ways, is a very good way to review the process of creating and applying what we have called the “analytical framework” or “model” — that which distinguishes true scientific research from mere accumulation of data.
Individual level models of behavior in organizations are generally relatively simple in structure and in definition. That’s why a lot of organizational research focuses on individuals. But when we decide to make the group or the entire organization our unit of analysis, we enter a different world. In the 1880s the US Supreme Court decided that a corporation was legally a “person” and therefore entitled to the same rights to due process as anybody else; pretty much ever since then we’ve been explicitly or implicitly using the language of “people” to talk about organizations: “Dow Chemical was a BAD boy!”, “The Department of Justice thinks…”, “TUI has been growing so fast that it’s already gone through three pairs of shoes…”, or something like that. All the Supreme Court did was codify the process we’ve been going through since prehistory (“Gog’s warriors are like a giant cave bear!”) — personifying the decisions of organized groups of people (“Rome has decided…”) whether they are identified as decisions of individuals (“It is the will of Caesar that…”), of groups acting in the name of many (“The Inquisition sentences you…”), or of a person or persons unknown somewhere in the system (“The President of the US has decided that…”). Somehow organizations manage to feel like people to us, and take on characteristics that function for both outsiders and insiders as a sort of “personality” — the “look and feel” aspects that we sometimes term “organizational culture”.
Some people are strong personalities, some are pretty bland; but all are equal in the sight of the law, and all are equal when handed a questionnaire, a Scantron® form, and a #2 pencil. Likewise, whether an organization has a strong culture or not much at all, it is equally researchable at the organizational level of analysis. As we saw in relation to the individual level, certain problems are generally understood to be best addressed by considering the group or organization to be a single unit interacting with other like units and possessing characteristics of its own. These characteristics, which form the constructs and variables of organizational-level analysis, may be global (pertaining to the overall properties of the unit itself), shared (pertaining to properties of the unit’s components that all possess equally), or configural (pertaining to properties of the unit’s components that may vary across them). They may be measured directly (for global and some shared properties), or (for shared and configural properties) by aggregating or averaging features of the components, or even by attributing to the unit characteristics of a larger unit of which it is a part. [In the background information for this module, you’ll find a presentation that reviews and expands upon these distinctions.]
Some of these characteristics mean much the same thing whether they are measured at the individual level (e.g., age) or at the collective level (e.g., the average age of group members). However, some exist only at the collective level — for example, the “age of the workgroup,” that is, the time elapsed since its establishment as a distinct unit. The variables that are used to describe social networks are good examples of such characteristics; although they are often derived from data collected from individuals, the manner of their aggregation changes their interpretation. In a social network, for example, an individual has the characteristic of “centrality”, often expressed as the proportion of other members of the network s/he is in contact with. The overall network, on the other hand, has a characteristic called “centralization” reflecting the degree to which the overall connection structure depends on a few highly connected members; while it may be calculated from the centrality indices of its members, a centralization score has no direct meaning in reference to any of those members as such, but only in reference to the group as it is compared to other groups. Any topic that involves relationships among units, including the critical themes of power, conflict, and attraction, is likely to have such ambiguous variables that can be defined, with somewhat different meanings, at multiple levels of analysis. Recognizing that we are dealing with multi-level phenomena, and that we need to make appropriate adjustments, is not always easy. For example, “power” at the organizational level is rather different from, and more complicated than, “power” at the level of the individual; we will mislead ourselves if we estimate the power of an organizational unit as simply the aggregate power of the individuals that comprise it. Even the similarity of words can confuse; perhaps we should be talking about powerᵢ and powerₒ, or something like that.
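The centrality/centralization distinction can be made concrete with a short sketch. The following Python fragment (the network, the member names, and the choice of degree-based measures with Freeman’s centralization formula are illustrative assumptions, not drawn from any particular study) computes each member’s centrality and then a centralization score for the group as a whole:

```python
# Hypothetical undirected network as adjacency sets: a "hub and spokes"
# workgroup in which everyone communicates only through A.
star = {
    "A": {"B", "C", "D", "E"},
    "B": {"A"}, "C": {"A"}, "D": {"A"}, "E": {"A"},
}

def centrality(net):
    """Individual-level: share of the other members each person contacts."""
    n = len(net)
    return {person: len(ties) / (n - 1) for person, ties in net.items()}

def centralization(net):
    """Group-level (Freeman's formula for normalized degree centrality):
    how much the whole structure depends on its most central member.
    Reaches 1.0 for a perfect star; 0.0 when all are equally connected."""
    c = centrality(net)
    c_max = max(c.values())
    return sum(c_max - ci for ci in c.values()) / (len(net) - 2)

print(centrality(star))      # A: 1.0, everyone else: 0.25
print(centralization(star))  # 1.0
```

Note that the centralization score of 1.0 belongs only to the network: no single member’s centrality value, nor their simple average, carries that meaning, which is exactly the interpretive shift that aggregation across levels produces.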
The “behavior” of groups and organizations (setting aside for the moment the fact that “behavior” on the part of an organization is analogous to but conceptually distinct from “behavior” on the part of individuals) is of interest to us in general because such behavior shapes the contexts within which we all live in critical ways — physically, socially, economically, and culturally. Students of business care about organizational behavior in particular because collective behavior is what our field is all about — business IS transactions. It’s OK to think of a one-person firm, but there are no one-firm economies. While models of individual-level behavior are important to the field, the behavior in question is generally behavior in relation to some organization or social setting. The core problems of business research are problems posed by, with, for, and to organizations.
We have called the place where we study such collectives the “organizational” level of analysis, but the same features actually describe the analysis of any aggregation of units beyond the individual, from the 2.5-person “nuclear family” to the entire “family of humankind”. The term “organization” lacks any precise meaning as to its size. In general, the appropriate unit is defined for researchers by the nature of the behavior we want to study — the unit of analysis is generally the smallest aggregate capable of undertaking such behavior as a unit. This might be a workgroup, a branch or division within the hierarchy of a larger organization, the organization itself (in its capacity as a “legal person”), or some larger conglomeration of organizations acting collectively (sectors, industries, nations, worlds (First through Fourth, at least), etc.). For example, if we want to conduct a study on “corporate crime”, we need to be at a level where there is something definitely “corporate” about whatever crime is being committed in terms of collective actors, and where the crime is not purse-snatching but something rather more far-reaching in its implications.
To complicate matters further, groups are almost always “nested” within other groups, just as individuals are nested within groups, and such nesting forms a critical part of the environment or context within which behavior takes place. However large or small, any aggregation of human beings is more complex and more difficult to understand and predict in some ways than an individual. They are also, somewhat paradoxically, simpler and easier to understand and predict, provided that we accept certain limits on what we can hope to comprehend. While it may be an impossibly complicated task to understand how an “organization” “makes” “a decision”, if we accept that each of these terms represents a sort of “black box” whose innards are incomprehensible but which produces recognizable and consistent output, it is then possible to create models of reasonably simple and parsimonious proportions, using these “black box constructs” as variables, that actually account for large parts of the variance of much organizational behavior, usually much more than can be accounted for in any individual-level research however many variables we choose to include. It is partly for this reason that group and organizational level studies comprise the bulk of business studies.
The other main incentive for organizational-level research is of course what we might term the “800 pound gorilla effect”: as we have noted, even small aspects of the behavior of large human aggregates can have an immense impact on us all. If the fluttering of the wings of a butterfly in the Amazon can cause tornadoes in Kansas, as the chaos theorists would assert, then when IBM gets the organizational sniffles, an enormous number of people wind up sneezing or even with pneumonia, whether or not they have anything to do with IBM directly. There is a strong perception in many quarters that we cannot now even predict the actions of many organizations, let alone control them, and the complex social web within which chaos theory would see us all embedded rather guarantees that our lack of even marginal understanding of many organizational phenomena will have more or less unpleasant consequences for substantial proportions of the population. Our very survival may well be bound up in our ability to control or at least predict how organizations, both large and small, will shape the world of today and of tomorrow.
Conducting research at the aggregate level — group, organization, industry, etc. — introduces or compounds problems such as sampling, identification of respondents, time horizons, research logistics, and the meaning and interpretation of data. While many of the more interesting problems of business are properly addressed at this level, the practicalities of research often prohibit studying them directly, particularly for dissertation-scope projects. There are creative alternatives available — and the first step toward them is understanding that “normal science” prescriptions for research, primarily formulated in the simpler world of individual units, often need to be reinterpreted in the more complex environment of the organization.
Let’s return our focus from the generic issues of research to the more particular world of your own research involvement and interests. The overall theme of this discourse, as we said at the beginning, has been two-fold: first, to review a range of very good current research from a range of very good research publications and sources, and second, to do so in the context of some of the more intractable issues that face students in preparing their dissertation proposals. For this term, we have chosen to attend primarily to three of these issues: the close interdependence of research questions, theory, and research methods, the nature of quality research, and the choice and consequences of appropriate levels of analysis at which to address particular questions.
For other incarnations of this course, the choice of foci may differ, as will the choice of particular articles for study. At this place and this time, however, it has seemed to us that these were the problems whose resolution would have the highest overall payoff for you as prospective dissertationeers. In short, if your prospectuses reflect a crisp research question, a coherent body of theory that you have effectively used to support your hypotheses, a consistent level of analysis extending through your question, your theoretical frame, your research methods, and your data, and a general flow through the material that establishes your confidence that you know what you are doing, you’re in fine shape; you’ll have a quality product. If you are continuing to have problems wrapping your brains around any of these attributes of good research, then at least you have an idea where to focus your attention.
In this module, we come back to our starting point, to revisit the importance of the relationship between the theory base of the study and its methods, whatever the level of analysis, and to reemphasize that the creation of research is a gradual process in which decisions made in one place at one time by one set of people against one set of criteria can critically affect the decisions made in others at others by others against others. In life in general, no choice is without consequences, often unforeseen. Research is merely a subset of life, a chunk chosen for examination out of the whole with an intent (usually) to impute back to the whole the characteristics of the chunk. Thus, the process of creating and conducting research is likewise rife with consequences, many of which may not appear, or at least appear to be serious, until much later — often too late to do much about them but cope. Moreover, research is dynamic and longitudinal, taking place within a real context of time and place. Any findings must be interpreted in relation to the context in which they were obtained; interpretability is an issue much larger in scope than the statistical generalizability of a sample.
As MacCallum points out, investigators choose their methods for any of a number of reasons, some of which have little or no relationship to what research design texts might prescribe; these include fashion, familiarity, access to resources, preferences of resource providers, publication requirements, and others too numerous and sometimes too disreputable or embarrassing to mention. Such considerations are not always illegitimate. In particular, resource considerations and the capabilities of the researchers are important to consider. Without resources, nothing happens; but resources adequate to fund even modest studies are seldom available to researchers in the normal run of things. And regarding methodological knowledge, little of use happens when a researcher undertakes to use a technique that s/he doesn’t understand. The important point is that considerations of both theory and method enter into these calculations, and the final product is the result of a highly interactive and iterative series of usually small-scale decisions, few of which are understood at the time to have the long-term implications that they eventually may evidence.
For purposes of this course, then, the important point is to become increasingly sensitive to the implications of your choices about research topics, theoretical perspectives, and methodological preferences, and to understand how choices in one area may facilitate or preclude choices in another. No research problem is uniquely the property of any one research method; hence the value of multi-method approaches. But even multiple methods are no guarantee of revealing “truth”; they may even contribute to the manifestation of many “truths”, each with competing claims to attention. Ultimately, the behavioral science research game is subject to the larger interpretation of Gödel’s Theorem: the rules of the game are not sufficient to establish their own truth, and the basis for the acceptability of that truth must be found outside the game. Truth searching is an infinite regress. As we noted in the initial unit on research quality, the application of criteria for making judgments about quality is hardly an exact science itself, but is subject to pressures from all sides. That does not mean in any sense that the game is not worth playing, or that value cannot be found in its results. It simply means that at the end of the day, we have to accept our limitations, do the best that we can within them, and give others credit for trying to do the same.