Educational experiments for fun and profit

December 14, 2014

Academia has always been a place for widely differing kinds of educational practice – it’s part of its charm, and why each of us has generally managed to find an appropriate niche. But in the world of online education, this variety has reached staggering levels of divergence, with no two schools, and often no two programs within one school, using similar approaches, expectations, course management systems, policies and procedures, criteria of effectiveness, or management strategies. And whatever versions of these are in practice usually change, sometimes dramatically, from term to term and year to year. Thus, it’s very difficult to offer effective generalizations about what’s happening in online education, still less effective prescriptions about what ought to be in place and how we might get there.

That said, there is significant room for experimentation in search of new and potentially better ways of managing the online educational enterprise. Educational experimentation is always a dicey proposition, since the circumstances generally preclude anything more than an approximation of true experimental conditions, the findings are likely to be equivocal at best, and the prospects for replication and/or generalized implementation are slim. However, it can be fun and at least suggestive, and it often has real benefits for the immediate participants.

During my time at TUI, I conducted two small experiments. The first tested the effects of a fairly simple addition to the courses: a short video introduction, first to the course as a whole and then to each module in turn. Many programs use such videos routinely. Our program had originally featured these faculty videos, but somewhere along the line they had been judged superfluous and removed. I suspected that they might actually have a positive effect on engagement, and that they could be implemented fairly easily. I used one of the core management courses in the MBA program – a course in which I supervised four adjunct faculty, each of whom managed a section of roughly 40 students – as a test bed. I recorded short (5-7 minute) video clips introducing the course and the modules, and posted them to a private YouTube channel. I then made links to these videos available to students in two randomly selected sections of the four, while leaving the other two sections as standard-condition controls. Students were advised that viewing the videos was strictly optional, but that they might find them interesting and possibly helpful. Aside from brief email advisories at the beginning of each module, I made no particular attempt to push the videos.

The results were interesting. At the end of the term, I examined how many total hits each of the videos had received, and how students in the two conditions performed on average. I had no way of tracking whether any individual student viewed a video; all I could compare was the average grade received by students in the video sections and students in the control sections. Within the video sections, over half the students viewed the initial two or three videos (judging from the hit counts), although by the fifth module viewing was down to perhaps 10%. But after adjusting for the overall grading pattern of each of the adjunct faculty, the students in the two video sections achieved an average grade slightly more than half a point higher than the average in the two control sections. This seemed to be reasonable evidence that a simple video intervention, even if it wasn’t widely used, could still improve overall student performance in a class of this type. I wrote up the findings in a short working paper (available here, if you’re interested). However, there was no great subsequent rush to implement this innovation in other courses, despite my proselytizing.
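For the curious, the comparison itself fits in a few lines of code. Here’s a minimal sketch, with entirely made-up numbers standing in for the actual section data: each section’s mean grade is adjusted by its instructor’s typical grading level, and the adjusted means are then averaged within the video and control conditions.

```python
# Minimal sketch of the grade comparison, with made-up numbers in place of
# the real section data. Each section's mean grade is adjusted by its
# instructor's typical grading level before the conditions are compared.

# Hypothetical data: section -> (condition, section mean grade,
# instructor's historical mean grade), all on a 4-point scale
sections = {
    "A": ("video",   3.4, 3.0),
    "B": ("control", 3.1, 3.2),
    "C": ("video",   3.3, 2.9),
    "D": ("control", 3.0, 3.1),
}

def adjusted_means(sections):
    """Average the instructor-adjusted section means within each condition."""
    adjusted = {"video": [], "control": []}
    for condition, section_mean, instructor_mean in sections.values():
        adjusted[condition].append(section_mean - instructor_mean)
    return {cond: sum(vals) / len(vals) for cond, vals in adjusted.items()}

means = adjusted_means(sections)
print(f"video: {means['video']:+.2f}, control: {means['control']:+.2f}")
print(f"difference: {means['video'] - means['control']:+.2f}")
```

Nothing fancy – it’s essentially a crude per-instructor adjustment, which is about as much rigor as a bootleg experiment with four sections allows.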

During one of my last terms at TUI, I tried another small experiment on student engagement. In another of the large core MBA classes, we had a fairly standard threaded discussion requirement, in which students were supposed to make at least one substantive post and one substantively responsive post per week. As is often the case, these discussions seldom rose much above the perfunctory. My experiment was to adjust the incentives, such that a student got more points for making a post (original or responsive) that other students responded to in turn; the longer and more interesting the conversation became, the more points a student could receive, with the bonus growing roughly exponentially. Thus, the students now had a real incentive to initiate and sustain real conversations, rather than just make “I agree with Sue” posts.
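In code terms, the scheme looked roughly like the following toy sketch – not the actual rubric, just the flavor of it. A post earns a base point, plus a bonus that grows exponentially with the depth of the conversation that develops underneath it.

```python
# Toy sketch of the incentive scheme (illustrative only, not the actual rubric).
# A post earns a base point, plus a bonus that grows exponentially with the
# depth of the conversation it provokes.

def thread_depth(replies):
    """Depth of the longest chain of replies under a post.
    `replies` is a list of reply subtrees, each itself a list."""
    if not replies:
        return 0
    return 1 + max(thread_depth(subtree) for subtree in replies)

def score_post(replies, base=1, growth=2):
    """Base credit plus an exponential bonus for sustained discussion."""
    depth = thread_depth(replies)
    return base + (growth ** depth - 1)

# A post nobody answers vs. one that sparks a four-reply back-and-forth.
lonely = []
lively = [[[[[]]]]]        # reply -> reply -> reply -> reply
print(score_post(lonely))  # 1 point
print(score_post(lively))  # 16 points under these toy parameters
```

The point of the exponential growth was simply that a sustained exchange should be worth far more than a string of one-off replies.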

Again, the results were interesting. Some of the students never really figured this out; others leaped on it like starving foxes on a rabbit, and as a result started to have real conversations. I had to step in and edit a few exchanges that got somewhat out of hand, but the boundaries were soon understood. Obviously, the students who took advantage of these incentives earned more participation points and consequently higher grades than they might have otherwise. But even without the participation points factored in, these students still did significantly better – on their papers and projects – than the students who didn’t take advantage of the incentives. The participation points essentially became a measure of engagement, and not surprisingly, the more engaged students did better all around.

Now one could argue the direction of the causal arrows – did engaged students do better, or did better students become more engaged? Would they have been equally engaged in the absence of the incentives, or did the incentives spur engagement in those who otherwise might not have shown it? A better and more systematic experiment might have untangled some of these issues (but then an administration interested in experimentation and alternative approaches would also have helped – bootleg experiments are usually less than perfect).

In any event, it’s clear from these quasi-experiments that (a) student engagement interacts with performance; and (b) engagement levels can probably be influenced by both instructor behavior and systematic incentives. What these experiments suggest, without saying so explicitly, is that instructor-supplied enhancements and incentives aren’t nearly as effective as they could be if they were well-supported parts of institutional policy and procedures, and that relying on the individual instructor to figure them out and implement them is unlikely to be very effective. Still less effective would be to hold an instructor responsible for student engagement in the absence of any instructor-manageable course procedures and incentives.

There – I’ve violated my own precept about generalizing prescriptions. Oh well. For what it’s worth.