In analytics circles, it is common to quote Peter Drucker: “What gets measured gets managed.” By quantifying our activities, it becomes possible to measure the impact of decisions on important outcomes and to optimize processes with a view to continual improvement. With analytics comes a tremendous opportunity to make evidence-based decisions where before there was only anecdote.
But there is a flip side to all this. Where measurement and management go hand in hand, the measurable can easily limit the kinds of things we think of as important. Indeed, this is what we have seen in recent years around the term ‘student success.’ As institutions have gained more access to their own institutional data, they have gained tremendous insight into the factors contributing to outcomes like graduation and retention. Graduation and retention rates are easy to measure because they don’t require access to data from outside the institution, and so they have become the de facto metrics for student success. And because colleges and universities can easily report on them, they are also easy to incorporate into rankings of educational quality, accreditation standards, and government statistics.
But are institutional retention and graduation rates actually the best measures of student success? Or are they simply the most expedient given current limitations on data collection? What if we had greater visibility into how students flow into and out of institutions? What if we could reward institutions for effectively preparing their students for success at other institutions, even when they do not retain large numbers of those students through to graduation? In many ways, limited data sharing between institutions has produced conceptions of student success, and a system of incentives, that foster competition rather than cooperation, and that may in fact create obstacles to the success of non-traditional students. These are the kinds of questions that recently motivated a bipartisan group of senators to introduce a bill that would lift the ban on federal collection of employment and graduation outcomes data.
For years, the National Student Clearinghouse (NSC) has provided a rich source of information about the flow of students between institutions in the U.S., and more than 98% of US institutions contribute data to it and have access to it. Yet colleges and universities often struggle to make this information available for easy analysis. Institutions see the greatest benefit from NSC data when they combine it with other institutional data sources, especially the demographic and performance information stored in their student information systems. This kind of integration is helpful not only for understanding and mitigating barriers to enrollment and progression, but also as institutions work together to understand the kinds of data that matter to them. As argued in a recent article in Politico, external rating systems have a significant impact on institutional priorities and, in so doing, may promote systematic inequity on the basis of class and other factors. As we see at places like Georgia State University, the more data an institution has at its disposal, and the more power it has to combine multiple data sources, the more it can align its measurement practices with its own values and do what’s best for its students.