Using Data You Already Have

This is a rich (and at times quite dense) article by authors from the University of Central Florida that effectively demonstrates some of the potential of developing predictive models of student (non-)success, but also some of the dangers. It emphasizes that the data do not speak for themselves, but require interpretation at every level. Interpretation guides not only the questions researchers ask and the ways that certain insights become actionable, but also the interventions they design.

Dziuban, Moskal, Cavanagh & Watts (June 2012) “Analytics that Inform the University: Using Data You Already Have”

An interesting example from the article is the observation that, when teaching modalities are compared (e.g., blended, online, face-to-face, lecture capture), the blended approach produces the greatest success (defined as a grade of C or higher) and the fewest withdrawals. Lecture capture, on the other hand, sees the least success and the most withdrawals. This is a striking observation, especially as institutions invest more and more in products like Echo360, and as MOOC companies like Coursera begin to move into the business of providing lecture-capture technology. When modality is included in a logistic regression alongside other variables (e.g., cumulative GPA, high school GPA), however, it is found to have nearly no predictive power.

The lesson here is that our predictive models need to be carefully assessed, and our interventions carefully crafted, so that we are actually identifying students at risk, and so that our well-meaning, but somewhat mechanically generated, interventions do not have unexpected and negative consequences. (What is the likelihood that identifying a PARTICULAR student as ‘at risk’ may in fact have the effect of DECREASING their chances of success?)
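The kind of check the authors describe can be sketched in a few lines. The snippet below is a minimal illustration, not their actual analysis: the data file and column names (`enrollments.csv`, `success`, `modality`, `cum_gpa`, `hs_gpa`) are hypothetical, and it simply fits a logistic regression with and without modality to see whether modality still carries predictive weight once prior-achievement variables are controlled for.

```python
# Hypothetical sketch: does course modality still predict success once
# GPA variables are included? File and column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per enrollment with these columns:
#   success  - 1 if the grade was C or higher, 0 otherwise
#   modality - "Blended", "Online", "FaceToFace", or "LectureCapture"
#   cum_gpa  - cumulative university GPA
#   hs_gpa   - high school GPA
df = pd.read_csv("enrollments.csv")

# Model 1: modality alone (this is where blended looks strong and
# lecture capture looks weak)
m1 = smf.logit("success ~ C(modality)", data=df).fit()

# Model 2: modality plus prior-achievement covariates
m2 = smf.logit("success ~ C(modality) + cum_gpa + hs_gpa", data=df).fit()

print(m1.summary())
print(m2.summary())

# A likelihood-ratio comparison against a GPA-only model shows how much
# modality actually adds once GPA is accounted for.
m0 = smf.logit("success ~ cum_gpa + hs_gpa", data=df).fit()
lr_stat = 2 * (m2.llf - m0.llf)
print("LR statistic for adding modality:", lr_stat)
```

If the modality coefficients shrink toward zero (and the likelihood-ratio statistic is small) in the fuller model, the apparent modality effect is largely explained by who enrolls in each modality rather than by the modality itself, which is the pattern the article reports.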