The mathematical framework that embraces the statistical techniques of regression, analysis of variance, correlation, discriminant analysis, and several other procedures is known as the “general linear model” (GLM). Effective use of these techniques requires making a number of assumptions about the data and their distribution; these assumptions are generally well known and often ignored, with rather unpredictable results. Much of the value of the GLM lies in the fact that it functions passably well when its assumptions are partially violated (it is, in statistical parlance, a “robust” procedure), although just what can be violated, and by how much, while retaining core validity is not clearly known.

In order to understand how a mathematical procedure like regression can produce useful insights into the analysis of human behavior, we need to understand the idea of “variance” in behavior as something that can be quantitatively accounted for. In statistics, the deviation of a given value is its difference from the mean of the distribution, and the variance is the average of the squared deviations. When the behavioral properties in question can be interpreted as numbers, a similar interpretation can be applied: behavioral variance reflects the degree to which the behavior of a given unit differs from the mean value of all units like it. Some units behave very much like the average unit; some behave very differently. Sometimes it matters whether the behavior differs in a positive or negative direction; sometimes it doesn’t, and all that concerns us is the total degree of difference, not the direction in which it occurs.
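These two readings of “difference from the average” can be made concrete in a few lines. The sketch below uses made-up test scores; the signed deviations preserve direction, while the variance (the mean squared deviation) discards it:

```python
import statistics

# Hypothetical test scores for ten individuals (made-up data for illustration).
scores = [72, 85, 90, 65, 78, 88, 95, 70, 82, 75]

mean = statistics.mean(scores)            # the "average unit"
deviations = [x - mean for x in scores]   # signed: direction matters
variance = statistics.pvariance(scores)   # mean squared deviation: direction ignored

print(mean)       # 80.0
print(variance)   # 83.6
```

Note that squaring the deviations is what makes a unit 15 points below the mean count the same as one 15 points above it; when direction does matter, you work with the signed deviations instead.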

The general linear model allows us to treat variance as a commodity that can be partitioned in various ways among a series of predictors, with a residual “error variance” that is not accounted for by any of the predictors. This partitioning is carried out through applications of the principle of least squares, itself an application of probability theory to the problem of making the best possible prediction about the degree of association between two or more connected things when you know the values taken by some of them. “Association” is a relationship defined by the analyst, based on theory or reasonable supposition, and measured by the property of covariance between the variables. It is easy to see applications of this concept. Scores on a test, attitudes as measured by opinion scales, physical characteristics, and a variety of other properties of individuals or groups can be conceived as numerical quanta capable of being measured, averaged, and having their deviations from the average assessed.
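A minimal sketch of this partitioning, using one predictor and made-up data (all numbers here are hypothetical). Least squares splits the total sum of squared deviations into a part the predictor accounts for and a residual “error variance,” and the two parts add back up to the total exactly:

```python
# Made-up paired observations: x is the predictor, y the behavior measured.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# Least-squares slope and intercept, from the covariance of x and y
# and the variance of x.
cov_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
var_x = sum((xi - mx) ** 2 for xi in x) / n
b = cov_xy / var_x
a = my - b * mx

yhat = [a + b * xi for xi in x]  # predicted values

ss_total = sum((yi - my) ** 2 for yi in y)                  # total variation
ss_model = sum((yh - my) ** 2 for yh in yhat)               # accounted for
ss_error = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))   # residual "error variance"

# The partition is exact: total = model + error.
assert abs(ss_total - (ss_model + ss_error)) < 1e-9

r_squared = ss_model / ss_total  # proportion of variance accounted for
print(round(r_squared, 4))
```

The ratio `r_squared` is the usual summary of the partition: the proportion of total variance the predictor accounts for.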

It has been observed with some truth that “one person’s error variance is another person’s social behavior”; all models leave certain things unaccounted for, and some leave almost everything unaccounted for. In general, models that account for more variance are preferred to those that account for less, and a model becomes interesting at the point where it accounts for more variance than chance alone could. The effective statistical use of a general linear model depends on the nature of the underlying model itself. A nonsense model can be tested statistically as easily as one with theoretical validity, and the numbers will look more or less the same. As we’ve discussed before, the difference between validity and nonsense is provided by the arguments of the analyst.
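One common way to operationalize “more variance than chance alone” is a permutation test: shuffle the outcome to destroy any real association, and ask how often chance alone matches the variance your model accounts for. A sketch under assumed, simulated data (the 0.5 slope and noise level are arbitrary choices for illustration):

```python
import random

random.seed(0)

# Simulated data with a genuine association built in (slope 0.5 plus noise).
x = list(range(20))
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

def r_squared(xs, ys):
    """Proportion of variance in ys accounted for by a linear fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov * cov / (vx * vy)

observed = r_squared(x, y)

# Shuffling y breaks the association; any variance "accounted for" in a
# shuffled dataset is chance alone.
trials = 1000
beats = 0
for _ in range(trials):
    ys = y[:]
    random.shuffle(ys)
    if r_squared(x, ys) >= observed:
        beats += 1

p = beats / trials  # how often chance does as well as the model
print(observed, p)
```

A small `p` says chance alone rarely accounts for that much variance, which is the point at which, on this criterion, the model becomes interesting; it says nothing about whether the model is sense or nonsense.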

The basic question remains as to why deviation from the average — variance — is interesting; it is not often explicitly addressed in behavioral science modeling. The overriding factor leading to this interest appears to be prediction and/or its sidekick, control. Prediction and control rely on minimizing unexplained variance. Therefore, understanding the sources of variance in a phenomenon should enable a greater degree of control over it. Control is not always the explicit end product of variance-oriented behavioral science research; in fact, many researchers would shy away from the idea that they are trying to improve control of social phenomena. It’s okay if what we want to control is, say, the growth of hybrid seed corn, but somehow it would be different if what we might be seen as trying to control is political opinions. But epistemologically that’s a distinction without a difference.

If we’re willing to be honest about why we constructed our models, we can use the measures of association generated by the GLM as measures of the degree to which a change in one phenomenon results in a change in another. The causal connection is purely theoretical, but it is no less interesting or interpretable because we can’t “prove” this causality. Most of the causal understandings that make our lives bearable are neither proven nor probably provable; the fact that they work, at least most of the time, is enough. Much the same level of confidence about causality governs the application of behavioral science research based on variance analysis. It’s never going to be proved in the physical-science sense of the term, but that doesn’t stop us from believing it, acting on it, and reaping the rewards when it works, which it does often enough for us to feel slightly superior to the priests of Marduk, but not always enough for it to constitute a meaningful technology.
