On Magic Beans: Or, Is Learning Analytics simply Educational Data Science, Poorly Done?

UPDATE 31 January 2017: This blog post was written during 2014. Since that time, Blackboard has made several very important and strategic hires, including Mike Sharkey, John Whitmer, and others who are not only well-regarded data scientists but also passionate educators. Since 2014, Blackboard has become a leader in educational data science, conducting generalizable research to arrive at insights with the potential to make a significant impact on how we understand teaching and learning in the 21st century. Blackboard has changed. Blackboard is now committed to high-quality research in support of rigorously defensible claims to efficacy. Blackboard is not in the business of selling magic beans. Blackboard is also not the only company doing excellent work in this way. As this article continues to be read and shared, I still believe it has value. But it should be noted that the concerns I express here reflect the state of a field and an industry still in its infancy. The irony it describes is still present, to be sure, and we should all work to increase our data literacy so that we can spot the magic beans where they exist, but educational technology companies are not the enemy. Teachers, researchers, and edtech companies alike are struggling together to understand the impact of their work on student success. Appreciating that fact, and working together in a spirit of honesty and openness, is crucial to the success of students and institutions of higher education in the 21st century.


The learning analytics space is currently dominated, not by scholars, but by tool developers and educational technology vendors with a vested interest in getting their products to market as quickly as possible. The tremendous irony of these products is that, on the one hand, they claim to enable stakeholders (students, faculty, administration) to overcome the limitations of anecdotal decision-making and achieve a more evidence-based approach to teaching and learning. On the other hand, the effectiveness of the vast majority of learning analytics tools is untested. In other words, vendors insist upon the importance of evidence-based (i.e. data-driven) decision-making, but rely on anecdotal evidence to support claims about the value of their own analytics products.

In the above presentation, Kent Chen (former Director of Market Development for Blackboard Analytics) offers a startlingly honest account of the key factors motivating the decision to invest in Learning Analytics:

Analytics, I believe, revolves around two key fundamental concepts. The first of these fundamental concepts is a simple question: is student activity a valid indicator of student success? And this question is really just asking, is the amount of work that a student puts in a good indicator of whether or not that student is learning? Now this is really going to be the leap of faith, the jumping off point for a lot of our clients.

Is learning analytics based on a leap of faith? If this is actually the case, then the whole field of learning analytics is premised on a fallacy. Specifically, it begs the question by assuming its conclusion in its premises: “we can use student activity data to predict student success, because student activity data is predictive of student success.” Indeed, we can see this belief in ‘faith as first principle’ in the Blackboard Analytics product itself, which famously fails to report on its own use.

Fortunately for Chen (and for Blackboard Analytics), he’s wrong. During the course of Emory’s year-long pilot of Blackboard Analytics for Learn, we were indeed able to find small but statistically significant correlations between several measures of student activity and success (defined as a grade of C or higher). Our own findings provisionally support the cautious use of student course accesses and interactions as heuristics an instructor can use to identify at-risk students. When it comes to delivering workshops to faculty at Emory, our findings are crucial, not only for making a case in defense of the value of learning analytics for teaching and course design, but also for discussing how those analytics might most effectively be employed. In fact, analytics is valuable as a way of identifying contexts in which embedded analytic strategies (e.g. student-facing dashboards) might have no effect, or even a negative one, and it is incumbent upon institutional researchers and course designers to use the data they have to evaluate how to use that data most responsibly. Paradoxically, one of the greatest potential strengths of learning analytics is that it provides us with insight into the contexts and situations where analytics should not be employed.
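To make the kind of finding described above concrete, here is a minimal sketch, in Python on simulated data, of how one might test whether activity measures such as course accesses and interactions correlate with a binary success flag (a grade of C or higher). The column names, sample size, and simulated relationship are assumptions for illustration only; this is not Emory’s dataset or analysis.

```python
# Minimal sketch: point-biserial correlations between LMS activity measures
# and a binary success outcome, computed on simulated (hypothetical) data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 500  # hypothetical number of enrolled students

# Hypothetical activity measures of the sort an LMS data model might expose.
df = pd.DataFrame({
    "course_accesses": rng.poisson(60, n),
    "interactions": rng.poisson(120, n),
})

# Simulate a success flag that is only weakly related to activity (illustrative only).
latent = 0.01 * df["course_accesses"] + 0.005 * df["interactions"] + rng.normal(0, 1, n)
df["success"] = (latent > np.quantile(latent, 0.25)).astype(int)  # 1 = grade of C or higher

# Correlate each activity measure with the binary success flag.
for measure in ["course_accesses", "interactions"]:
    r, p = stats.pointbiserialr(df["success"], df[measure])
    print(f"{measure}: r = {r:.3f}, p = {p:.4f}")
```

On samples of this size, even small correlations can be statistically significant, which is exactly why the findings above are framed as heuristics for flagging at-risk students rather than as proof that activity causes success.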

I should be clear that I use Blackboard Analytics as an example here solely for the sake of convenience. In Blackboard’s case, the problem is not so much a function of the product itself (which is a data model that is often mistaken for a reporting platform), but rather of the fact that the company does not understand the product’s full potential, which leads to investment in the wrong areas of product development, clichéd marketing, and unsophisticated consulting practices. The same use of anecdotal evidence to justify data-driven decision-making is endemic to a learning analytics space dominated by educational technology vendors clamoring to make hay while the sun is shining.

I should also say that these criticisms do not necessarily apply to learning analytics researchers (like those involved with the Society for Learning Analytics Research and scholars working in educational data mining). This is certainly not to say that researchers do not have their own sets of faith commitments (we all do, as a necessary condition of knowledge in general). Rather, freed from the pressure to sell a product, researchers are far more reflective about how they understand their concepts. As a community, the fields of learning analytics and educational data mining are constantly grappling with questions about the nature of learning, the definition(s) of student success, how concepts are best operationalized, and how specific interventions might be developed and evaluated. To the extent that vendors are not engaged in this kind of reflective activity, to the extent that immediate sales trump understanding, it might be argued that vendors are giving ‘learning analytics’ a bad name, since they and the learning analytics research community are engaged in fundamentally different activities. Or perhaps the educational data science community has made the unfortunate decision to adopt a name for its activity that is already terribly tainted by the tradition of ‘decision-support’ in business, which is itself nothing if not dominated by a similar glut of vendors using a faith in data to sell their magic beans.