The recent New York Times coverage of a University of Pennsylvania study of 16 Coursera courses has helped solidify the new conventional wisdom about MOOCs: They have terrible completion rates. The Times reports that “only about 4 percent completed the courses,” which jibes with the Penn press release, titled “PENN GSE STUDY SHOWS MOOCS HAVE RELATIVELY FEW ACTIVE USERS, WITH ONLY A FEW PERSISTING TO COURSE END” and accompanying slide deck, which says, under “Emerging Conclusions,” that “Few ‘persist’ to course end.”
This way of thinking about MOOCs is misleading. It mostly reflects how the traditional college mindset continues to dominate and limit the public understanding of what higher education can and should mean.
Completion rates are fractions. To create them, you have to pick a numerator and a denominator. The numerator in this case is pretty straightforward: the number of people who finished the course. The denominator is a trickier question. Who, exactly, should count as a person who tried and did or did not finish? The researchers classified people as “Users,” “Registrants,” “Starters,” and “Active Users,” which represent increasing levels of engagement. Someone identified as a “User” somehow engaged with the Coursera class but never registered for the course. A “Registrant” registered but never logged back on to start watching lectures or try to learn in any way. A “Starter” is (I assume; the slide deck doesn’t include precise definitions) someone who logged on at least once but dropped out almost immediately.
The four percent figure cited in the Times appears to be the percent of “Registrants” who finished the course. In other words, it includes people who registered but never logged on, or who logged on and immediately dropped out. What effect does choosing that particular denominator have on MOOC completion rates? A large effect, as it turns out. Take the Penn “Mythology” course, which I’ll pick because it’s one of the most traditional college-type courses in the study, and also because I met the guy who taught it when we both did the Diane Rehm Show last year. Eyeballing the “Variation in Number of Participants by Course” chart (the slide deck doesn’t include the actual numbers, and while I’ve asked Penn for them, they haven’t provided them yet), it appears that Mythology had roughly:
- 10,000 Users who never registered
- 15,000 Registrants who never started
- 20,000 Starters who immediately stopped, and
- 25,000 Active Users.
These populations are exclusive of one another. That means that nearly 60 percent of the people the study reported as not finishing the course never tried to finish it in any meaningful way. A quarter of the people in the denominator never even logged on.
Of course, even if we confine the denominator to Active Users, that still produces a single-digit percent completion rate for Mythology, based on roughly 1,350 completers. This seems very bad compared to, say, the University of Pennsylvania’s 87 percent four-year graduation rate. But again, this entirely depends on which denominator you decide to use.
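The denominator effect is easy to check with a back-of-the-envelope calculation. The sketch below uses my rough eyeballed counts from the chart, not the study’s exact numbers:

```python
# Completion rates for the Mythology MOOC under different denominators.
# All counts are eyeballed estimates from the slide deck's chart.
users_only = 10_000        # engaged somehow but never registered
registrants_only = 15_000  # registered but never logged back on
starters_only = 20_000     # logged on at least once, then stopped
active_users = 25_000      # meaningfully engaged with the course
completers = 1_350         # rough count of course completers

# The denominator behind the Times's figure: everyone who registered.
registrants = registrants_only + starters_only + active_users  # 60,000

rate_vs_registrants = completers / registrants  # ~2.3%
rate_vs_active = completers / active_users      # 5.4%

# Share of reported "non-completers" who never tried in any meaningful way:
non_completers = registrants - completers
never_tried = registrants_only + starters_only  # 35,000
print(f"{never_tried / non_completers:.0%} of non-completers never really started")
```

Same data, two denominators, and the “completion rate” more than doubles depending on which one you pick.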
Consider: Coursera has an open enrollment system. Anyone with a computer can sign up. The University of Pennsylvania also has an open system: anyone with a computer or a typewriter, an envelope, and some stamps can apply for admission. Last year, 31,218 students applied to Penn. Thirteen percent were admitted, and 63 percent of those students enrolled. In other words, Penn had (or will have) roughly:
- 27,200 Applicants who were not admitted
- 1,500 Admittants who did not enroll
- 330 Enrollees who did not graduate
- 2,200 Graduates
Or, to put it another way, about seven percent of all students who “signed up” for the University of Pennsylvania by submitting an application end up graduating four years later, which is almost precisely the same as the percentage of Active Users who completed a MOOC in the study held up as evidence that MOOCs don’t work very well.
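The Penn funnel works out the same way arithmetically. Here’s the calculation, applying the admission, yield, and graduation percentages cited above to the applicant pool (the resulting counts are approximations, which is why the figures in the list are rounded):

```python
# Penn's admissions funnel, built from the percentages cited above.
applicants = 31_218
admitted = round(applicants * 0.13)  # 13% admit rate -> ~4,058
enrolled = round(admitted * 0.63)    # 63% yield      -> ~2,557
graduates = round(enrolled * 0.87)   # 87% four-year graduation rate -> ~2,225

# "Completion rate" with everyone who signed up (applied) as the denominator:
rate = graduates / applicants
print(f"{rate:.1%} of applicants graduate four years later")
```

By that denominator, Penn’s “completion rate” lands around seven percent, right alongside the MOOC figure.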
The point being, what we see in both cases is a filtering process. Penn evaluates applicants and throws out everyone it doesn’t think can succeed in Penn courses. They don’t get counted as failures because they’re not allowed in the denominator in the first place. Coursera lets in everyone who wants to take the course. Instead of deciding ahead of time who’s likely to fail, it lets people try for themselves.
The comparison isn’t exact. Like all elite schools, Penn enforces an artificial scarcity on admissions slots. Some–most, perhaps–of the students it fails to admit might be good enough to complete Penn-level work. On the other hand, “submitted an application” is a much higher standard than “signed up for a course on Coursera.” Completing a college application requires a substantial amount of work filling out forms and assembling documents, plus the expenditure of cold hard cash in the form of a $75 application fee. Signing up for Coursera takes 30 seconds and is free. An apples-to-apples comparison would probably include everyone who requested a Penn application, or logged onto the admissions website, but didn’t complete an application. That number would be substantially larger than 31,218, and would drive the graduation ratio down further still.
There are also, of course, several crucial non-educational differences between MOOCs and college courses. Going to Penn costs a lot of money–$61,800 per year. MOOCs are free. Penn graduates receive a diploma with a great deal of value in the labor market. MOOC credentials aren’t worth nearly as much. Penn undergraduates are overwhelmingly full-time students at a point in their lives where there are very good reasons to persist and finish a diploma. Coursera students come in all ages and nationalities and many already have college degrees.
It may very well be the case that radically inexpensive college courses reduce student motivation somewhat, in the sense that students aren’t as driven by the terror associated with borrowing huge sums of money for education. If so, that seems like a tradeoff worth making. The fact that colleges have a monopoly on credentials with value in the labor market is an artifact of regulation and social convention. It doesn’t have to be that way, and won’t be in the long run.
Finally, there’s the matter of scale. Penn informs us that the MOOCs in question have “few active users” and that “few” students persist to the end. Few? It’s true that most of the people who had some contact with Mythology were not active users. It’s also true that the remaining minority of active users constituted 25,000 people, which is more than twice the total number of undergraduates enrolled at the University of Pennsylvania. And even though most of the 25,000 didn’t finish, the 1,350 who did represent an order-of-magnitude increase in the number of people who learned Mythology from the same professor the previous year.
In other words, the researchers could have taken exactly the same data and issued a report finding that “MOOCs achieve ten-fold increase in course completers for Ivy League class, at zero cost to students.” But that wouldn’t have fit the current trough-of-disillusionment stage of the MOOC hype cycle. We’re going to move through that part of the conversation soon enough.