Exploring causality (Part 2 of several)

July 25, 2013

As I said in the previous post on this topic, we’re desperate for any tools with any arguably “scientific” credibility that might let us untangle complex social causality. Once upon a time, we could simply announce from the top of the temple steps that the god Marduk would bring rain only if fifty virgin girls and boys were delivered to the priests before next Monday. If it rained on Tuesday, causality was established; Marduk was obviously happy, and we should keep those kids coming. If it didn’t, that proved that at least some of the fifty weren’t really virgins. In either event, causality could be established by an appeal to faith.

Somewhere along the line, faith in Marduk got replaced by an equally unwarranted faith in science; in matters of establishing causality in particular, we now place great confidence in the validity and utility of high-power inferential statistics. To approximate causal inference, researchers generally rely on statistical procedures that measure association between variables (regression, correlation, cross-tabulation, and assorted variations on these, up to and including structural equation modeling) and then treat the resulting associations as hopefully convincing evidence of causation. Most of the behavioral science models that we create are implicitly causal. Unfortunately, since the only tests available to us establish association alone, we have to do a lot of arm-waving to get to causality.
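To make that point concrete, here is a minimal sketch (my own illustration, not anything from the argument above; the variable names and coefficients are entirely invented) of why a test of association by itself cannot establish causation: an unobserved common cause drives two variables that have no causal connection to each other, and they come out strongly correlated anyway.

```python
# Illustrative only: Z is a hidden common cause of X and Y; X and Y do not
# influence each other at all, yet their correlation is strong.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)             # unobserved confounder
x = 2.0 * z + rng.normal(size=n)   # X is driven by Z, not by Y
y = -1.5 * z + rng.normal(size=n)  # Y is driven by Z, not by X

r = np.corrcoef(x, y)[0, 1]
print(f"correlation between X and Y: {r:.2f}")  # roughly -0.74
```

An association test applied to X and Y here would pass with flying colors; only knowledge of the data-generating process tells us the relationship is not causal.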

There’s no question that regression models, or more properly, the various varieties of the general linear model (we’ll talk more about this in later posts), are the most flexible and general-purpose statistical techniques available, applicable to almost any situation where there is variation in the data. They are easy to learn and to explain, useful in terms of their output, and, most importantly, capable of being interpreted, if we choose to do so, as evidence supporting a causal relationship between variables. This use of regression procedures is referred to as “path analysis”. Although regression does require certain assumptions about the data’s distribution and other properties, it can withstand a fair degree of stress on those assumptions and still produce useful results; statisticians refer to it as a “robust” technique. Thus, regression seems to have almost everything going for it in the stat tools sweepstakes.
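As a rough illustration of what that “path analysis” reading looks like in practice, here is a minimal sketch (mine, with invented data and an assumed causal ordering) in which a chain X → M → Y is estimated as two ordinary least-squares regressions and the slopes are read as path coefficients.

```python
# Illustrative only: two ordinary regressions read as a "path" X -> M -> Y.
# The causal ordering is assumed up front; the regressions themselves only
# supply the associations.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

x = rng.normal(size=n)
m = 0.7 * x + rng.normal(size=n)   # assumed path X -> M
y = 0.5 * m + rng.normal(size=n)   # assumed path M -> Y

def ols_slope(predictor, outcome):
    """Least-squares slope from a regression with an intercept."""
    design = np.column_stack([np.ones_like(predictor), predictor])
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs[1]

a = ols_slope(x, m)   # estimated X -> M path, about 0.7
b = ols_slope(m, y)   # estimated M -> Y path, about 0.5
print(f"implied indirect path X -> Y: {a * b:.2f}")   # about 0.35
```

The arithmetic is straightforward; the causal claim rests entirely on the ordering we assumed before fitting anything.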

It is certainly important for the researcher in training to acquire mastery of these powerful statistical tools, and in its various forms regression is likely to remain the statistic of choice for most behavioral science analysis. Regression is an excellent way to test formal hypotheses, provided that the data meet the required assumptions and that the problem is formulated so that the data map onto the hypothesis appropriately. Even when some of those assumptions are not met, regression remains a useful procedure. Indeed, it is precisely that usefulness that causes it to be pushed into service to support causal inferences beyond what it can actually deliver.

But here we come to the real heart of the issue: even when used appropriately and within the bounds of good theory, regression remains merely a test of association. Formal hypothesis testing leans on powerful tests such as regression as a major source of its justification, yet there is good reason to question whether this pairing of tests of association with the assessment of causal hypotheses is really a good idea. At the least, we need to back such hypothesis testing with strong and convincing theory explaining why the causal relationship being tested makes sense. All too often, however, in the name of “exploratory analysis”, we simply throw all our data up against the wall, see what sticks (i.e., which relationships turn out to be “statistically significant”), and then construct a theoretical explanation for why we got the results we did. This is a fundamental misapplication of statistics, and it deserves to be called out whenever we see it.
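Here is a minimal sketch (again my own, using purely random data) of why the throw-it-at-the-wall approach is so treacherous: screen enough unrelated candidate predictors at the conventional 0.05 threshold and a few will come up “significant” by chance alone.

```python
# Illustrative only: 100 random predictors, a random outcome, no real
# relationships anywhere -- yet some correlations clear p < 0.05 by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k = 200, 100

outcome = rng.normal(size=n)
predictors = rng.normal(size=(n, k))

p_values = np.array(
    [stats.pearsonr(predictors[:, j], outcome)[1] for j in range(k)]
)
hits = int((p_values < 0.05).sum())
print(f"'significant' at p < 0.05: {hits} of {k}")  # expect about 5 false positives
```

Any theory constructed after the fact to explain those hits is, by construction, a theory about noise.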

It’s important to think about why this technique is so popular and so commonly applied: specifically, the degree to which we can use it to satisfy our inherent need to understand causality. The answer lies somewhere between the strictest cautions of statisticians who forbid even the mention of causality and the unreflective laxity of the casual observer. Just where between those two extremes we want to come down in our applications of regression will remain a matter for debate, persuasion, and intelligent application.

Part 3 of this extended discussion on causal inference is here.

  • Pamela Ey

    Thanks so much JD. If it’s relevant in future posts on the subject, I would appreciate your thoughts on causality with respect to intractable complex systems. I enjoy the work of Erik Hollnagel, and his explanations of complex sociotechnical systems. (That work was born out of the human factors field, but determining causality is an important part of that whole deal. Those folks have traditionally backed through a linear time line of events as if the event prior caused the event after. As a result, they create more procedures, fire the guy with his hand on the wheel and create more brittle systems).

    Keep the posts coming. You are one of my favorite thinkers.

    • DrEvel1

      Great question, Pam! Actually (a bit of synchronicity here) there’s an article in today’s Science Daily that bears somewhat on this issue: “Removing Complexity Layers from the Universe’s Creation”. On a rather different scale of things, the authors suggest that the extremely complex models previously applied to the universe shortly after the Big Bang can be decomposed into a set of simpler models, one based on special relativity and one based on quantum mechanics – two theories that are often considered to be incompatible. This may suggest that a similar decomposition of complexity/chaos models of social behavior might be possible, assuming (a) that such models exist and (b) that the mathematics can be performed. But as you say, most “modeling” of behavior systems is ex post facto, simplistic, oriented retroactively rather than proactively, and aimed at finding levers rather than real causes. As I’m going to discuss down the road, levers have value but are easily misinterpreted out of context. Most model-builders are in fact desperately afraid of predictions that might prove them wrong. The one group of analysts who actually revel in predictions are the sabermetricians (sports statisticians), because they can demonstrate real value to their analyses (cf. Moneyball). Among political analysts, only Nate Silver shares a similar outlook, and he comes directly out of the sabermetric tradition. It’s interesting that he has just moved from the New York Times to ESPN; apparently accurate modeling doesn’t quite fit in with the Times’ editorial tradition.