My wife’s coach once told her that “experience is what you get the moment after you needed it.” Too often the same can be said for data literacy. Colleges and universities looking to invest wisely in analytics to support the success of their students and to optimize operational efficiency are confronted with the daunting task of evaluating a growing number of options before selecting the products and approaches that are right for them. Which products and services are most likely to yield the greatest return on investment? Which approaches taken by other institutions have already seen high rates of success? On the one hand, institutions that are just now getting started with analytics have the great advantage of being able to look to the many who have gone before and who are beginning to see promising results. On the other hand, the analytics space is still immature, and there is little long-term, high-quality evidence to support the effectiveness of many products and interventions.
Institutions and vendors that have invested heavily in analytics have a vested interest in presenting promising results (and they ARE promising!) in the best light possible. This makes sense. This is a good thing. The marketing tactics that institutions of higher education and educational technology vendors employ when representing their results are typically honest and offered in good faith as both earnestly work in support of student success. But the representation of information is always a rhetorical act. Consequently, the ways in which results are presented too often obscure the actual impact of technologies and interventions. The way results are promoted can make it difficult for less mature institutions to adjudicate the quality of claims and to make well-informed decisions about the products, services, and practices that will be best for them.
Perhaps the most common tactic used to make results appear more impressive than they are involves changing the scale of the y-axis on bar and line charts. A relatively small difference can famously be made to look dramatic if the plotted range is narrow enough. But other common tactics are not as easily spotted, and they are just as important when it comes to evaluating the impact of interventions. Here are two:
A single data point does not equal a trend. Context and history are important. When a vendor or institution claims that an intervention produced a significant increase in retention or graduation after only a year, it is possible that the increase was due to chance, an existing trend, or the result of other initiatives or shifts in student demographics. For example, one college recently reported a 10% increase in its retention rate after only one year of using a student retention product. Looking back at historical retention rates, however, one finds that the year prior to tool adoption marked a significant and uncharacteristic drop in retention, which means that any increase could just as easily have been due to chance or other factors. In the same case, close inspection finds that the retention rate following tool adoption was still low from a historical perspective, and part of an emerging downward trend rather than the reverse.
It’s not the tool. It’s the intervention. One will often hear vendors take credit for significant increases in retention and graduation rates when there are actually other, far more significant causal factors. One school, for example, is praised for using a particular analytics system to double its graduation rates. What tends not to be mentioned, however, is that the same school also radically reduced its student-to-advisor ratio, centralized its administration, and made additional significant programmatic changes that contributed to its success over and above any impact the analytics system might have made by itself. The effective use of an analytics solution can definitely play a major role in facilitating efforts to increase retention and graduation rates. In fact, all things being equal, it is reasonable to expect a 1 to 3 point increase in student retention as a result of using early alerts powered by predictive analytics. Significant gains above this, however, are only possible as a result of significant cultural change, strategic policy decisions, and well-designed interventions. It can be tempting for a vendor to take credit, at least implicitly, for more than is due, but doing so is misleading and obscures the tireless efforts of the institutions and people who are working to support their students. More than this, overemphasizing products over institutional change can impede progress. It can lead institutions to falsely believe that a product will do all the work, and encourage them to naively embark on analytics projects and initiatives without fully understanding the changes in culture, policy, and practice needed to make them fully successful.
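The y-axis tactic mentioned at the outset can be made concrete with a little arithmetic. The sketch below uses hypothetical retention rates (all numbers invented for illustration, not drawn from any real vendor chart) to show how truncating the axis changes the apparent difference between two bars:

```python
# Hypothetical retention rates: 72% before an intervention, 75% after.
# (All numbers here are invented for illustration.)
before, after = 72.0, 75.0

def apparent_bar_ratio(low_value, high_value, axis_min):
    """How many times taller the second bar *looks* when the y-axis
    starts at axis_min instead of zero."""
    return (high_value - axis_min) / (low_value - axis_min)

# Honest axis starting at zero: the bars look nearly the same height.
honest = apparent_bar_ratio(before, after, axis_min=0)      # about 1.04x
# Truncated axis starting at 71: the "after" bar looks 4x taller.
truncated = apparent_bar_ratio(before, after, axis_min=71)  # 4.0x
```

The underlying 3-point gain never changes; only the drawn proportions do, which is why the axis range is the first thing to check on any results chart.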
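A quick way to sanity-check a one-year gain like the retention example above is to compare the new rate against the historical average, not just against the prior year. The figures below are invented to mirror the pattern described (an uncharacteristic dip right before adoption), not taken from the college in question:

```python
# Hypothetical retention rates (%) for seven annual cohorts. The product is
# adopted after year 6, immediately following an uncharacteristic dip.
# (All numbers are invented for illustration.)
rates = [78, 77, 76, 75, 74, 66, 72]

gain_vs_last_year = rates[-1] - rates[-2]     # +6 points: looks impressive
five_year_mean = sum(rates[0:5]) / 5          # 76.0: the pre-dip norm
gain_vs_history = rates[-1] - five_year_mean  # -4.0: still below the norm
```

Measured against the dip year alone the tool looks like a success; measured against the five-year baseline, retention is still declining.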
Also published on Medium.