
The Department of Education will have to make a lot of choices in its forthcoming college ratings system. While it’s unknown what that effort will look like, there is one thing we can confidently predict: indicators of actual student learning will be nowhere to be found.

That’s not to say the Department doesn’t care about student learning. Rather, there’s simply no widely available set of learning quality measures the Department could draw on, and we’re a long way from rectifying that problem. Getting information on student learning outcomes is not just a matter of clearing regulatory or statutory hurdles so that colleges report more data. It’s not like employment outcomes, which currently get an empty box on the College Scorecard but for which one could easily lay out several credible pieces of data to fill that space.

For the national conversation on college learning to move forward, we need to figure out what should even be presented on student learning. That can only be done by addressing some important design questions about how we think about and measure learning. Many of these policy choices are laid out in the excellent book “Measuring College Learning Responsibly” by Richard Shavelson. It’s worth picking up a copy for exploring these issues in greater depth.

Knowledge level or growth?

The term “student learning” is typically used as a generic catch-all for some evidence of academic results. But this phrase can mean very different things in the context of what information colleges might actually report.

For example, do we want colleges to report evidence of the overall level of knowledge their students have, or the growth in that knowledge over time? The former suggests a single point-in-time measurement that attempts to gauge just how much graduates know at the end of their program. But this may not actually be evidence of learning, since students could come into college already able to demonstrate sufficient knowledge without ever increasing what they know or the skills they possess.

A truer measure of learning takes a longitudinal approach, charting the growth of students from some earlier time in their academic careers through to the end of their programs (or at least samples of students upon entry and exit of programs). This clearly requires more measurement work and raises questions about what groups of students to evaluate. But it does have the advantage of allowing institutions that take on less prepared students to demonstrate their value in a way that might be harder to see with a focus on just the level of knowledge.

Choosing to focus on either the level of knowledge or its growth is not inherently right or wrong; the two measures are simply designed to capture different things. So in making a choice between knowledge and learning, we should consider why we want this information in the first place. If the desire is to show that students are acquiring a baseline level of knowledge, then a single measurement may suffice as a minimum test. If, however, the concern is showing that colleges are actually playing a role in students acquiring that knowledge, then more complex measurement will be necessary.
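To make the distinction concrete, here is a minimal sketch (in Python, using invented scores and a deliberately simplified design) of what a point-in-time “level” indicator versus a longitudinal “growth” indicator might look like. Real assessments such as the CLA rely on far more sophisticated, regression-adjusted value-added models, so treat this as an illustration of the design choice rather than a recipe.

```python
from statistics import mean

# Hypothetical assessment scores for a small sample of students,
# measured once at program entry and again at exit (same 0-100 scale).
entry_scores = {"alice": 62, "ben": 48, "carla": 71, "dev": 55}
exit_scores = {"alice": 78, "ben": 66, "carla": 80, "dev": 70}

# "Level" indicator: a single point-in-time measure of how much
# graduates know at the end of the program.
level = mean(exit_scores.values())

# "Growth" indicator: a longitudinal measure of how much each
# student's score changed between entry and exit.
gains = [exit_scores[name] - entry_scores[name] for name in entry_scores]
growth = mean(gains)

print(f"Level (mean exit score): {level:.1f}")
print(f"Growth (mean entry-to-exit gain): {growth:.1f}")
```

Even this toy version shows why the growth approach demands more: two rounds of testing, tracking students over time, and decisions about which students to sample.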

Improvement or accountability?

In an excellent 2009 essay, Peter Ewell notes the tension that exists between using learning outcomes information for accountability purposes versus employing it to drive program improvement. (Shavelson frames this split in terms of summative versus formative assessment, but the idea is similar.)

The difference between these two paths can be thought of in terms of what we want the colleges to do with the data. Is the desire that they will give data to external entities like a state or accreditation agency that will then hold them accountable? Or is the goal that colleges generate data within internal feedback loops that drive self-directed improvement?

An accountability-focused system for measuring student learning is going to demand broader measures of learning that are comparable across institutions. For example, the overall institutional score on the Collegiate Learning Assessment would probably work to fill this need. So could something like scores on the GRE, LSAT, or other graduate admissions tests.1

Data of this type has a lot of advantages. It gives clearer information that consumers and policymakers can use to make comparisons without being assessment experts. It also creates goalposts for university administrations to strive toward.

Accountability data is not without its shortcomings, though. For one, distilling educational elements down to a handful of data points is less useful for instructors trying to figure out how to change what they are doing. The CLA, for example, can describe the overall level of argument-writing ability among students, but the people teaching introductory writing classes probably cannot use that information to figure out how exactly to tweak individual assignments along the way to improve their instruction.

Learning outcomes data that can drive real changes in teaching and learning will have to operate at a more granular level. To continue the example from the prior paragraph, it would mean testing different writing assignments and figuring out which ones help students learn, then using those results to change practice. Such a focus, however, is going to produce too many data points to be of use to the public, and it will probably present them in language and terms that are harder for outsiders to understand.
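As a hypothetical illustration (the assignment labels and rubric scores below are invented), the kind of granular comparison an improvement-oriented approach implies might look something like this sketch: comparing rubric-scored gains for students who completed two different versions of a writing assignment. The output is meaningful to the instructors involved, but far too fine-grained to tell the public much about the institution.

```python
from statistics import mean, stdev

# Hypothetical gains (post-score minus pre-score on a 0-6 argument-writing
# rubric) for students who completed two versions of the same assignment.
gains_version_a = [0.5, 1.0, 0.5, 1.5, 0.0, 1.0]  # draft with peer review only
gains_version_b = [1.5, 2.0, 1.0, 2.5, 1.5, 1.0]  # draft with instructor feedback

def summarize(label, gains):
    """Print the average rubric gain and its spread for one assignment version."""
    print(f"{label}: mean gain = {mean(gains):.2f}, "
          f"sd = {stdev(gains):.2f}, n = {len(gains)}")

summarize("Version A (peer review only)", gains_version_a)
summarize("Version B (instructor feedback)", gains_version_b)

# A gap this size would suggest iterating on the feedback-heavy design --
# useful to the writing program, but not a number a ratings system could use.
print(f"Difference in mean gains: {mean(gains_version_b) - mean(gains_version_a):.2f}")
```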

Neither an accountability nor an improvement approach is inherently right or wrong—each carries advantages and drawbacks. Accountability can be clearer and can create external pressure, through policymakers and students, to change behavior. By contrast, improvement fits better within the organizational structure of higher education, since it should rely upon measures instructors already use to judge student work and learning.

Choosing which approach to emphasize from a policy perspective is not only a matter of deciding between two different theories of action—externally driven by policymakers and students versus internal and self-driven by faculty and the administration—but also a statement about what we believe is the bigger obstacle to improving student learning in higher education. If the problem is that institutions lack the incentive to care about learning, then an external accountability approach may be the right call. But if the problem is ultimately an organizational one—colleges care about learning but do not know what to do to make it better—then encouraging stronger self-improvement mechanisms might be a better path.

Institutional or programmatic?

An earlier post on college quality talked about the need to consider the implications of measuring quality from an institutional or a programmatic standpoint. The unit of analysis chosen will also affect other decisions, such as the type of assessments to use and whether to measure knowledge or learning. It interacts with the accountability versus improvement choice as well. For example, it will probably be difficult to generate institution-wide information that encourages self-improvement. Similarly, a learning growth indicator at the college level will likely have to be a broader measurement of skills—such as the CLA—rather than something that could reflect knowledge within a discipline. In other words, as you make the unit of analysis less granular, the options for what you can measure and how you would do so decrease.

What’s the federal role?

Complicating all these design questions is the need to figure out the federal role around student learning in higher education. In general, the federal government is stronger at bright-line measurements without complex shades of gray. That would argue for one-time knowledge measurement done through some kind of accountability-focused assessment at the institutional level.

But this may well be an instance where it’s time to learn new tricks. Ultimately, any kind of learning improvement is going to come about through changes to assignments, courses, programs, and all other parts of the educational experience. All of that can only be accomplished through buy-in from instructors and institutional leadership. A top-down accountability approach has the advantage of creating clear pressure, but what it asks colleges to do is far more complicated than other instances where the federal government employs such tactics, such as pressing colleges to keep their students from defaulting on their loans. Instead, a stronger push on internal self-improvement and greater ownership of the learning process might be the better play.

I’m by no means an assessment expert and cannot claim to know exactly what indicators we should produce and how they should be constructed. And certainly concerns about unnecessary burden should be kept in mind.

These issues could be addressed by starting modestly, focusing on just a few common majors–say business and psychology, which produce the most bachelor’s degrees in the country. Any college that offers those degrees would then be asked to provide information, in whatever format it likes, about how it measures knowledge and learning within the program, as well as the results of those efforts. The Institute of Education Sciences could then use this information to start identifying best practices and to learn more about what other colleges should be encouraged to do.

Admittedly, this idea is not going to radically change the business of teaching and learning in higher education. It isn’t going to produce the clear-cut indicators we should be demanding. But in a world where we don’t even know what language to use for describing college learning, it’s at least a way to start building some vocabulary.

  1. Setting aside questions of getting proper groups of students and whether these are actually good indicators of anything.