In 2011, Long and Siemens famously announced that big data and analytics represented “the most dramatic factor shaping the future of higher education.” Now, five years later, conversations about the use of data and analytics in higher education are more mixed. In 2016, the Campus Computing Project released an annual report that used the language of “analytics angst.” In a recent blog series for the EDUCAUSE Review, Mike Sharkey and I argue that analytics has fallen into a “trough of disillusionment.” What makes some institutions successful in their analytics where others flounder? How can we work to scale, not technologies, but high-impact practices? Let’s examine one example.
The University of Maryland Baltimore County (UMBC) began working with Blackboard Analytics in 2006. At that time, they simply wanted to support access to basic information to ensure that the institution was effective and efficient in its operations. Shortly after gaining access to their institutional data, however, they quickly began asking deeper questions about student success.
Coming up with a list of the top eventers based on their performance in 2016 is hard. The sport of three-day eventing is complex and multi-faceted, and the decisions we make about which factors to consider make a significant difference to the final result of any evaluation process. Because of this complexity, and because there is bound to be strong disagreement about who ends up being included in a list of this kind, it is rare to see anything like this published. And yet, I still believe that this exercise has value, particularly for fans like myself who find rankings a useful way of understanding the sport.
Note that the ranking that I have produced is the result of a lot of thinking and expert consultation. It is also a work in progress. I have tried to document some of the theory and methods underlying the list(s), but if you want to bypass this discussion, feel free to skip over these sections and see the lists themselves.
All ranking schemes involve subjective judgement. They involve establishing criteria on the basis of values. Since values differ from individual to individual, disagreement is bound to happen and conflicting lists are bound to appear. But there are two guiding principles that I believe should apply to all rankings:
(1) Look to the data – Human beings are great at making decisions and at coming up with justifications after the fact. We all have biases, and we are all terrible at overcoming them. By limiting ourselves to measurable qualities and available data, we can lessen the impact of irrelevant and inconsistently applied preferences.
(2) Be transparent – Being data-driven in our decision-making processes doesn’t mean being objective. Decisions have to be made about the kinds of data to include, the ways in which that data is transformed, and the analytical tools that are applied. This is not a bad thing. Not only are these decisions necessary, they are also important because it is here that data becomes meaningful. Here, I argue that making the ‘right’ decisions is less important than making your decisions explicit.
Who should be considered for inclusion in a list of top eventers world-wide? Here is a list of criteria that I believe any eventer needs to satisfy in order to be considered among the top in the sport. This is where values and judgement come in, and there is bound to be some disagreement. So it goes.
There are several significant differences between CCI and CIC events. The demands that each of these event types place on horse and rider are so different that, for all intents and purposes, they should be considered different sports entirely. Compared to CIC events, CCIs are characterized by longer cross country courses, have stricter vetting requirements, and include show jumping as the final of the three phases. CIC competitions are developmental. The most elite riders in the world must be able to compete, complete, and excel in CCI events. For this reason, I have chosen only to include CCI riders in the list.
3* and 4* only
This list is meant to include the best of the best, which means only including riders who have successfully competed at either the 3 star or 4 star level. Why not include only riders who have competed at the 4 star level and exclude 3 star results? The fact that there are only six 4 star events means that we don’t have much data from year to year. The decision to include 3 star data also makes sense in light of recent decisions to downgrade Olympic and World Equestrian Games events to the 3 star level.
At least two competitions
There is a difference between CCI 3*/4* pairs and pairs that have merely competed at that level. To be considered for the list, a horse and rider combination must have completed a minimum of two CCI events at either the 3 star or 4 star level.
100% event completion rate
As recent Olympic history has underscored, the most important quality of an elite rider is the ability to consistently complete events at the highest level. Consistency is key. So I have only included riders in the list that successfully completed every CCI event they entered in 2016.
Once we have established a pool of eligible pairs, what is the best way to rank them? Do we simply take an average of their final scores? How do we account for the fact that some pairs excel in dressage while others shine on cross country or in show jumping? How do we account for the fact that judging differs from event to event, and for differences in terrain, weather, and course design? From a statistical perspective, we know that some events are ‘easier’ than others. How do we fairly compare the relative performance of horses and riders competing under different sets of conditions, even at the same level?
One way of overcoming these differences is through a statistical process called standardization. A z-score expresses the difference between the number of points a pair earned and the average number of points earned by all competitors at the same event, in standard deviation units. A z-score of 0 means that a pair is average. Because eventing scores are penalty points (lower is better), a negative z-score means the pair is above average, and a positive score means that it is below. By converting points into z-scores, we are able to account for various differences from event to event. By comparing average final z-scores, we can more easily and reliably compare horse and rider combinations on an even playing field.
Once we have standardized final scores, we can sort pairs according to their average z-score and take the top 10. VOILA! We have a list of top riders. Here are the results, along with a bit more information about their performance at the 3* and 4* levels.
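The standardize-and-rank procedure described above can be sketched in a few lines of Python. The pair names and scores below are made up purely for illustration; they are not actual 2016 results.

```python
# A minimal sketch of the ranking procedure: convert each final score
# to a z-score relative to its own event, then rank pairs by average z-score.
# Eventing scores are penalty points, so lower (more negative) is better.
from statistics import mean, pstdev
from collections import defaultdict

# Hypothetical results: (pair, event, final score in penalty points).
results = [
    ("Pair A", "Event 1", 45.0),
    ("Pair B", "Event 1", 60.0),
    ("Pair C", "Event 1", 75.0),
    ("Pair A", "Event 2", 50.0),
    ("Pair B", "Event 2", 40.0),
    ("Pair C", "Event 2", 90.0),
]

# Group scores by event, so each pair is compared only to the
# competitors who rode under the same conditions.
by_event = defaultdict(list)
for pair, event, score in results:
    by_event[event].append(score)

# Convert each final score into a z-score relative to its event.
z_scores = defaultdict(list)
for pair, event, score in results:
    mu = mean(by_event[event])
    sigma = pstdev(by_event[event])
    z_scores[pair].append((score - mu) / sigma)

# Rank pairs by average z-score, most negative (best) first.
ranking = sorted(z_scores, key=lambda p: mean(z_scores[p]))
for pair in ranking:
    print(pair, round(mean(z_scores[pair]), 2))
```

Note that standardizing within each event before averaging is what makes the comparison fair: a mid-pack score at a tightly bunched event counts differently than the same raw score at an event where results were spread widely.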
The Results (worldwide)
Michael Jung & Fischerrocana FST (GER)
Maxime Livio & Qalao des Mers (FRA)
Hazel Shannon & Clifford (AUS)
Oliver Townend & ODT Ghareeb (GBR)
Jonelle Price & Classic Moet (NZL)
Andrew Nicholson & Teseo (NZL)
Hannah Sue Burnett & Under Suspection (USA)
Nicola Wilson & Annie Clover (GBR)
Andreas Dibowski & FRH Butts Avedon (GER)
Oliver Townend & Lanfranco (GBR)
The Results (USA)
If we apply the same criteria above, but only consider American CCI 3*/4* riders in 2016, we get the following list:
Hannah Sue Burnett & Under Suspection
Hannah Sue Burnett & Harbour Pilot
Boyd Martin & Welcome Shadow
Buck Davidson & Copper Beach
Elisa Wallace & Simply Priceless
Lauren Kieffer & Landmark’s Monte Carlo
Lillian Heard & LCC Barnaby
Kurt Martin & Delux Z
Phillip Dutton & Fernhill Fugitive
Sharon White & Cooley on Show
Some may find it odd that Phillip Dutton & Mighty Nice didn’t make either top 10 list, in spite of being a bronze medalist at the 2016 Olympic Games in Rio, Brazil. The reason for this is that the FEI dataset that I have used intentionally excludes Olympic results because they are kind of strange…a horse of a different color, so to speak. Not including the Olympics, this pair only competed at one CCI event in 2016: the Rolex Kentucky Three Day Event, where they finished 4th with a final score of 57.8, which converts to a z-score of -1.11. Based on this score, the pair would have ranked first nationally and fifth in the world. But this is only one CCI event, and so I could not include them in the lists based on the criteria I established above.
Last month we argued that analytics in higher education has entered a trough of disillusionment. We posited that this is actually a good thing for higher education, because it means bringing attention to the hype itself. It means that we are making progress towards true productivity and student success. We need to learn how to spot the hype before we can move beyond it and realize the true potential of educational data and learning analytics.
It is our hope that the ‘analytics angst’ that has accompanied increased data literacy will put pressure on vendors to reduce hyperbole in their marketing materials and encourage institutions to reset their expectations. A more realistic view of educational data will result in greater adoption, more successful implementations, and results that move the needle by positively impacting student success at scale.
The Latin word communitas refers to a collection of individuals who, motivated by a common goal, come together and act as one. Community is powerful.
Common approaches to college and university rankings can sometimes have the unfortunate effect of pitting institutions against each other in a battle for students and prestige. As the U.S. turns its attention to meeting the needs of 21st century students and 21st century labor demands, the power of traditional university ranking schemes is starting to erode.
Student success is not a zero-sum game. Rather than fostering competition, a commitment to student success encourages cooperation.
It has recently become very apparent to me that the value of the things that I own is increasingly elsewhere.
Let me provide two examples.
Several months ago, Intel announced that it was shutting down all support and service for its Basis line of watches. The announcement came in light of a safety recall of the Basis Peak. Shutting down its data service for Peak watches was meant to mitigate safety concerns, since the watch only really ‘works’ if accompanied by its cloud-based service. Intel also offered a full refund on the watches. This was absolutely the best thing that Intel could have done. By withdrawing all features from the watch itself, and offering a financial reward for its return, Intel made it so that the watch’s sole use value was as a thing to be returned.
The announcement was sad for me. I was an early adopter of the original Basis B1 watch. I have had mine since before the acquisition of Basis by Intel in 2014. When other wearables from Fitbit and Jawbone (I have these first generation products in a drawer somewhere) were nothing more than step counters, the B1 also tracked heart rate and moisture levels. It was also a watch.
My B1 still ‘works.’ It is a reliable product, and I don’t worry about it exploding on my wrist. But in discontinuing service for the Peak, Intel discontinued service for Basis, period. In other words, as of December 31, my Basis B1 will lose all value. The watch itself will continue to function exactly as designed, but it will no longer ‘work.’ Sad though I may have been to hear that Basis was shutting down, my disappointment was eased when Intel offered me a full refund in exchange for my non-exploding watch.
I really like my (first generation) Automatic adapter. I like the accuracy with which it tracks my fuel economy and travel distances, and I like knowing that it would automatically notify a few key contacts in case of a collision. But the device has been less and less reliable recently. A recent email explained why:
Like the Basis watch, the value of the Automatic adapter lies, not in the adapter itself, but in the cloud-based services the company provides. Unlike Basis, though, what has left me with a useless piece of plastic is not the discontinuation of those services, but the device’s reliance on a technology that has gone out of date. The device still ‘works,’ but despite firmware updates, it is no longer able to adapt to changing standards. My first generation adapter is now trash.
The major problem with this first generation adapter is that it relied heavily on two kinds of external service, only one of which the company had control over. There is the cloud-based analytics service (similar to that provided by Intel to support its watch), but the device also relied on a Bluetooth enabled smartphone for GPS (to track location) and SMS (in case of collision). Automatic has now learned its lesson. The most recent generations of its adapters do not rely on smartphones nearly to the same extent (if at all). But the fact that Automatic now has greater control over the device and the services that it makes possible does not change the fact that the value of its adapters lies squarely on the service side. The second the service piece is eliminated, the value of the adapter disappears entirely.
These are only two examples of many. I could also have mentioned Narrative, which produces a life-logging camera but whose service-based business model actually undermined product sales, because the camera only works if accompanied by a cloud-based service subscription; it is for this reason that the company recently closed down, only to be reopened as a result of an acquisition. And I could have mentioned the Apple Watch (which I love, by the way), which only has value if I resign myself to being locked in to the Apple ecosystem.
So things do not have value anymore. Just as the value of currency is no longer constrained by physical objects that even pretend to have some kind of innate value, so too have our devices ceased to have value in themselves. Our devices merely grant us access to information (and allow information access to us). And to this extent, our things are not things at all. They are relations. Or, as Luciano Floridi would call them, they are ‘second-order technologies’ with the sole function of mediating the relationship of humans to other technologies.
In direct contradiction to Betteridge’s Law, we believe the answer is yes. Analytics in higher education is in the trough of disillusionment.
The trough of disillusionment refers to a specific stage of Gartner’s Hype Cycle. It is that moment when, after a rapid build up leading to a peak of inflated expectations, a technology’s failure to achieve all that was hoped for results in disillusionment. Those who might benefit from a tool perceive a gap between the hype and actual results. Some have rightly pointed out that not all technologies follow the hype cycle, but we believe that analytics in higher education has followed this pattern fairly closely.
In 2014, I wrote a blog post in which I claimed (along with others) that analytics had reached a ‘peak of inflated expectations.’ Is the use of analytics in higher education now entering what Gartner would call the ‘trough of disillusionment’?
In 2011, Long and Siemens famously argued that big data and analytics represented “the most dramatic factor shaping the future of higher education.” Since that time, the annual NMC Horizon Report has pointed to 2016 as the year when we would see widespread adoption of learning analytics in higher education. But as 2016 comes to a close, the widespread adoption of learning analytics still lies on the distant horizon. Colleges and universities are still very much in their infancy when it comes to the effective use of educational data. In fact, poor implementations and uncertain ROI have led to what Kenneth C. Green has termed ‘angst about analytics.’
As a methodology, the Gartner Hype Cycle is not without its critics. Audrey Watters, for example, takes issue with the fact that it is proprietary and so ‘hidden from scrutiny.’ Any proprietary methodology is indeed difficult to take seriously as a methodology. The methodology is also improperly named, since any methodology that assumes a particular outcome (i.e., that all technology adoption trends follow the same pattern) is unworthy of the term. But as a heuristic, the Hype Cycle is a helpful way of visualizing analytics adoption in higher education to date, and it offers some useful language for describing the state of the field.