Analytics isn’t a thing…it’s a relation

In response to the 2017 NMC Horizon report, Mike Sharkey recently observed that analytics had disappeared from the educational technology landscape. After being on the horizon for many years, it seems to have vanished from the report without pomp or lamentation.

Those of us tracking the state of analytics according to the New Media Consortium have eagerly awaited its arrival. In 2011, the time to wide-scale adoption was expected to be four to five years. In 2016, time to adoption was a year or less. In 2017, I would have expected one of two things from the Horizon Report: either (a) great celebration as the age of analytics had finally arrived, or (b) acknowledgment that analytics had not arrived on time.

But we saw neither.

Upon first inspection, analytics seems to have vanished into thin air. But, as Sharkey observes, this was not actually the case. Instead, analytics’ absence from the report was itself a kind of acknowledgement that analytics is not actually ‘a thing’ that can be bought and sold. It is not something that can be ‘adopted.’ Instead, analytics is simply an approach that can be taken in response to particular institutional problems. In other words, to call out analytics as ‘a thing’ is to establish a solution in search of a problem, as if ‘not having analytics’ was a problem itself that needed to be solved. Analytics never arrived because it was never on its way. The absence of analytics from the Horizon Report, then, points to the fact that we now understand analytics far better than we did in 2011. If we knew then what we know now, analytics would not have been featured in the Horizon Report in the first place. We would have put understanding ahead of tools, and bypassed the kind of hype out of which we are only now beginning to emerge.

I agree with Mike. But I want to go a step further. I have always been fascinated by ontologies, and the ways in which the assumptions we make about ‘thingness’ affect our behavior. I have a book in press about the emergence of the modern conception of society. I have written about love (Is it a thing? Is it an activity? Is it a relation? Is it something else?). And I have written about dirt. Mike’s post has served as a catalyst for the convergence of some of my thinking about analytics and ‘thingness.’

Analytics is not a thing. I can produce a dashboard, but I can’t point to that dashboard and say “there is analytics.” There is an important sense in which analytics involves the rhetorical act of translating information in such a way as to render it meaningful. In this, a dashboard only becomes ‘analytics’ when embedded within the act of meaning-making. That’s why a lot of ‘analytics’ products are so terrible. They assume that analytics is the same as data science with a visualization layer. They don’t acknowledge that analytics only happens when someone ‘makes sense’ out of what is presented.

Analytics is like language. Just like language is not the same as what is represented in the dictionary, analytics is not the same as what is represented in charts and graphs. Sure, words and visualizations are important vehicles for meaning. But just as language goes beyond words (or may not involve words at all), so too does analytics.

It is a mistake to confuse analytics with data science. And it is a mistake to confuse it with visualization. If analytics is about meaning-making, then we are working toward a functional definition rather than a structural one. This shift away from structure to function opens up some really exciting possibilities. For example, SAS is doing some incredible work on the sonic representation of data.

As soon as we begin to think analytics beyond ‘thingness,’ and adopt a more functional definition, its contours dissolve really quickly. If what we are talking about is a rhetorical activity according to which data is rendered meaningful, then we are no longer talking about visualization. We are talking about representation. In a recent talk, I suggested that, to the extent that analytics is detached from a particular mode of representation, and what we are talking about is intentional meaning-making — meaning-making intended to solve a particular problem — then a conversation can easily become ‘analytics.’

So analytics is not a ‘thing.’ It is not something that we can point to. Is it an activity? Do we ‘do analytics’? No, analytics isn’t an activity either. Why? Because it is communicative, and so requires the complicity of at least one other. Analytics is not something that we do. It is something we do together. But it is not something that we do together in the same way that we might build a robot together, or watch television together, where what we are talking about is the aggregation of activities. What we are engaged in is something more akin to communication, or love.

Analytics is not a thing. Analytics is not an activity. Analytics is a relation.

Is it ethical for marketers to ‘nudge’?

They almost got me.

As I reached for the gasoline nozzle, I realized at the very last minute that what I thought was regular gasoline was actually ‘plus,’ a grade that I did not want and that I would have paid a premium for. The reason for my near mistake? The way my options were ordered. I expected the grades to be ordered by octane as they almost always are. But in this case, regular 87 was sandwiched between two more premium grades.

The strategy that was employed at the pump at this Shell station in Virginia is an example of ‘nudging.’ It is an example of leveraging preexisting expectations and habits to increase the chances of a particular behavior. There is nothing dishonest about the practice. Information is complete and transparent, and personal freedom to choose is not affected. It is simply that the environment is structured in such a way as to promote one decision instead of others. 

Ethically, I like the position of Thaler and Sunstein when they talk about ‘libertarian paternalism.’ In their view, nudging can be a way to reconcile a strong belief in personal freedom with an equally strong belief that certain decisions are better than others. But not all nudges are created equal. Just as it is possible to promote decisions that are better for individuals, so too is it possible to increase the likelihood of choices that serve other interests, and that even serve to subvert the fullest expression of personal liberty, as in the gasoline example above.

One way to think of marketing is as the use of the principles of behavioral economics to change consumer behavior. Marketers are in the business of nudging. Because nudging has a direct impact on human behavior, it is also a fundamentally ethical enterprise. Marketing carries with it a huge burden of responsibility.

What ethical positions do you take in your marketing efforts? What would marketing look like if we were all libertarian paternalists?

Overcoming early analytics challenges at UMBC

In 2011, Long and Siemens famously announced that big data and analytics represented “the most dramatic factor shaping the future of higher education.” Now, five years later, conversations about the use of data and analytics in higher education are more mixed. In 2016, the Campus Computing Project released an annual report that used the language of “analytics angst.” In a recent blog series for the EDUCAUSE Review, Mike Sharkey and I argue that analytics has fallen into a “trough of disillusionment.” What makes some institutions successful in their use of analytics while others flounder? How can we work to scale, not technologies, but high-impact practices? Let’s examine one example.

The University of Maryland Baltimore County (UMBC) began working with Blackboard Analytics in 2006. At that time, they simply wanted to support access to basic information to ensure that the institution was effective and efficient in its operations. Shortly after gaining access to their institutional data, however, they quickly began asking deeper questions about student success.

READ FULL STORY HERE >> http://blog.blackboard.com/overcoming-early-analytics-challenges-at-umbc/

Equestrian Data Science: Ranking Eventers in 2016

Coming up with a list of the top eventers based on their performance in 2016 is hard. The sport of three-day eventing is complex and multi-faceted, and the decisions we make about which factors to consider make a significant difference to the final result of any evaluation process. Because of this complexity, and because a list of this kind is bound to provoke strong disagreement about who is included, it is rare to see anything like this published. And yet, I still believe that this exercise has value, particularly for fans like myself who find rankings a useful way of understanding the sport.

Note that the ranking that I have produced is the result of a lot of thinking and expert consultation. It is also a work in progress. I have tried to document some of the theory and methods underlying the list(s), but if you want to bypass this discussion, feel free to skip over these sections and see the lists themselves.

Guiding Principles

All ranking schemes involve subjective judgement. They involve establishing criteria on the basis of values. Since values differ from individual to individual, disagreement is bound to happen and conflicting lists are bound to appear. But there are two guiding principles that I believe should apply to all rankings:

(1) Look to the data
Human beings are great at making decisions and at coming up with justifications after the fact. We all have biases, and we are all terrible at overcoming them. By limiting ourselves to measurable qualities and available data, we can lessen the impact of irrelevant and inconsistently applied preferences.

(2) Be transparent
Being data-driven in our decision-making processes doesn’t mean being objective. Decisions have to be made about the kinds of data to include, the ways in which that data is transformed, and the analytical tools that are applied. This is not a bad thing. Not only are these decisions necessary, they are also important because it is here that data becomes meaningful. Here, I argue that making the ‘right’ decisions is less important than making your decisions explicit.

Method

Inclusion Criteria

Who should be considered for inclusion in a list of top eventers world-wide? Here is a list of criteria that I believe any eventer needs to satisfy in order to be considered among the top in the sport. This is where values and judgement come in, and there is bound to be some disagreement. So it goes.

CCI only
There are several significant differences between CCI and CIC events. The demands that each of these event types place on horse and rider are so different that, for all intents and purposes, they should be considered different sports entirely. Compared to CIC events, CCIs are characterized by longer cross country courses, have stricter vetting requirements, and include show jumping as the final of the three phases.  CIC competitions are developmental.  The most elite riders in the world must be able to compete, complete, and excel in CCI events.  For this reason, I have chosen only to include CCI riders in the list.

3* and 4* only
This list is meant to include the best of the best. What this means is only including riders who have successfully competed at either 3 star or 4 star levels. Why not just include riders who have competed at the 4 star level and exclude 3 star results? The fact that there are only six 4 star events means that we don’t have a whole lot of data from year to year. The decision to include 3 star data also makes sense in light of recent decisions to downgrade Olympic and World Equestrian Games events to the 3 star level.

At least two competitions
There is a difference between CCI 3*/4* pairs and pairs that have merely competed at that level. In order to be considered in the list, a horse and rider combination must have completed a minimum of two CCI events at either the 3 star or 4 star level.

100% event completion rate
As recent Olympic history has underscored, the most important quality of an elite rider is the ability to consistently complete events at the highest level. Consistency is key. So I have only included riders who successfully completed every CCI event they entered in 2016.
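The four inclusion criteria above can be expressed as a simple filter. The sketch below assumes a hypothetical record schema (the real FEI data would need to be mapped into this shape first):

```python
from dataclasses import dataclass

@dataclass
class Result:
    """One pair's result at a single event in 2016 (hypothetical schema)."""
    event_type: str   # 'CCI' or 'CIC'
    stars: int        # event level, 1 through 4
    completed: bool   # did the pair successfully complete the event?

def is_eligible(results):
    """Apply the inclusion criteria to one horse-and-rider pair's season."""
    # CCI only, at the 3* or 4* level
    qualifying = [r for r in results if r.event_type == 'CCI' and r.stars >= 3]
    # At least two qualifying competitions
    if len(qualifying) < 2:
        return False
    # 100% completion rate across every CCI event entered
    return all(r.completed for r in results if r.event_type == 'CCI')
```

A pair with two completed CCI3* starts is in; a pair with a single CCI4* start, or with any CCI retirement or elimination on its record, is out.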

Statistical Methods

Once we have established a pool of eligible pairs, what is the best way to rank them? Do we simply take an average of their final scores? How do we account for the fact that some pairs excel in dressage while others shine on cross country or in show jumping? How do we account for the fact that judging differs from event to event, and for differences in terrain, weather, and course design? From a statistical perspective, we know that some events are ‘easier’ than others. How do we fairly compare the relative performance of horses and riders competing under different sets of conditions, even at the same level?

One way of overcoming differences is through a statistical process called standardization. A z-score is the difference between the number of points that a pair earned and the average number of points earned by all competitors at the same event, expressed in standard deviation units. A score of 0 means that a pair is average. Because lower penalty scores are better in eventing, a negative z-score means the pair is above average, and a positive score means that it is below. By converting points into z-scores, we are able to account for various differences from event to event. By comparing average final z-scores, we can more easily and reliably compare horse and rider combinations on an even playing field.

Once we have standardized final scores, we can sort pairs according to their average z-score and take the top 10. VOILA! We have a list of top riders. Here are the results, along with a bit more useful information about their performance at 3* and 4* levels.
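The standardize-then-average procedure can be sketched as follows. The nested-dictionary shape of `results` is an assumption for illustration, not the format of the actual FEI dataset:

```python
from statistics import mean, stdev
from collections import defaultdict

def z_scores(points):
    """Standardize one event's final penalty scores: (score - mean) / stdev."""
    mu, sigma = mean(points), stdev(points)
    return [(p - mu) / sigma for p in points]

def rank_pairs(results, top_n=10):
    """Rank pairs by average z-score across events.

    results: {event_name: {pair_name: final_score}} (hypothetical layout).
    Scores are standardized within each event, then averaged per pair and
    sorted ascending, since more-negative z-scores mean fewer penalties.
    """
    per_pair = defaultdict(list)
    for event, scores in results.items():
        pairs, points = zip(*scores.items())
        for pair, z in zip(pairs, z_scores(points)):
            per_pair[pair].append(z)
    ranked = sorted(per_pair, key=lambda p: mean(per_pair[p]))
    return ranked[:top_n]
```

Note that standardizing within each event is what does the work here: a 45 at a tough, rain-soaked event can beat a 40 at an easy one, because each is measured against that event's own field.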

The Results (worldwide)

  1. Michael Jung & Fischerrocana FST (GER)
  2. Maxime Livio & Qalao des Mers (FRA)
  3. Hazel Shannon & Clifford (AUS)
  4. Oliver Townend & ODT Ghareeb (GBR)
  5. Jonelle Price & Classic Moet (NZL)
  6. Andrew Nicholson & Teseo (NZL)
  7. Hannah Sue Burnett & Under Suspection (USA)
  8. Nicola Wilson & Annie Clover (GBR)
  9. Andreas Dibowski & FRH Butts Avedon (GER)
  10. Oliver Townend & Lanfranco (GBR)

The Results (USA)

If we apply the same criteria above, but only consider American CCI 3*/4* riders in 2016, we get the following list:

  1. Hannah Sue Burnett & Under Suspection
  2. Hannah Sue Burnett & Harbour Pilot
  3. Boyd Martin & Welcome Shadow
  4. Buck Davidson & Copper Beach
  5. Elisa Wallace & Simply Priceless
  6. Lauren Kieffer & Landmark’s Monte Carlo
  7. Lillian Heard & LCC Barnaby
  8. Kurt Martin & Delux Z
  9. Phillip Dutton & Fernhill Fugitive
  10. Sharon White & Cooley on Show

Some may find it odd that Phillip Dutton & Mighty Nice didn’t make either top 10 list, in spite of being a bronze medalist at the 2016 Olympic Games in Rio, Brazil. The reason for this is that the FEI dataset that I have used intentionally excludes Olympic results because they are kind of strange…a horse of a different color, so to speak. Not including the Olympics, this pair only competed at one CCI event in 2016: the Rolex Kentucky Three Day Event, where they finished 4th with a final score of 57.8, which converts to a z-score of -1.11. Based on this score, the pair would rank first nationally, and fifth in the world. But this is only one CCI event, and so I could not include them in the lists based on the criteria I established above.


Originally posted to horseHubby.com

Traversing the Trough of Disillusionment: Where do Analytics Go from Here?

Co-Authored with Mike Sharkey

Last month we argued that analytics in higher education has entered a trough of disillusionment.  We posited that this is actually a good thing for higher education, because it means bringing attention to the hype itself. It means that we are making progress towards true productivity and student success.  We need to learn how to spot the hype before we can move beyond it and realize the true potential of educational data and learning analytics.

It is our hope that the ‘analytics angst’ that has accompanied increased data literacy will put pressure on vendors to reduce hyperbole in their marketing materials and encourage institutions to reset their expectations. A more realistic view of educational data will result in greater adoption, more successful implementations, and results that move the needle by positively impacting student success at scale.

READ FULL STORY HERE >> http://er.educause.edu/blogs/2016/12/traversing-the-trough-of-disillusionment-where-do-analytics-go-from-here

Eliminating barriers to innovation at scale: Fostering community through a common language

The Latin word communitas refers to a collection of individuals who, motivated by a common goal, come together and act as one. Community is powerful.

Common approaches to college and university rankings can sometimes have the unfortunate effect of pitting institutions against each other in a battle for students and prestige. As the U.S. turns its attention to meeting the needs of 21st century students and 21st century labor demands, the power of traditional university ranking schemes is starting to erode.

Student success is not a zero-sum game. Rather than fostering competition, a commitment to student success encourages cooperation.

READ FULL STORY HERE >> http://blog.blackboard.com/fostering-community-through-a-common-language/

Your stuff doesn’t have value anymore

It has recently become very apparent to me that the value of the things that I own is increasingly elsewhere.

Let me provide two examples.

Basis Watch

Several months ago, Intel announced that it was shutting down all support and service for its Basis line of watches. The announcement came in light of a safety recall of the Basis Peak. Shutting down its data service for Peak watches was meant to mitigate safety concerns, since the watch only really ‘works’ if accompanied by its cloud-based service. Intel also offered a full refund on the watches. This was absolutely the best thing that Intel could have done. By withdrawing all features from the watch itself, and offering a financial reward for its return, Intel made it so that the watch’s sole use value was as a thing to be returned.

[Image: Basis refund offer]

The announcement was sad for me. I was an early adopter of the original Basis B1 watch. I have had mine since before the acquisition of Basis by Intel in 2014. When other wearables from Fitbit and Jawbone (I have these first generation products in a drawer somewhere) were nothing more than step counters, the B1 also tracked heart rate and moisture levels. It was also a watch.

My B1 still ‘works.’ It is a reliable product, and I don’t worry about it exploding on my wrist. But in discontinuing service for the Peak, Intel discontinued service for Basis, period. In other words, as of December 31, my Basis B1 will lose all value. The watch itself will continue to function exactly as designed, but it will no longer ‘work.’ Sad though I may have been to hear that Basis was shutting down, my disappointment was eased when Intel offered me a full refund in exchange for my non-exploding watch.

Automatic Adapter

I really like my (first generation) Automatic adapter. I like the accuracy with which it tracks my fuel economy and travel distances, and I like knowing that it would automatically notify a few key contacts in case of a collision. But the device has been less and less reliable recently. A recent email explained why:

[Image: email from Automatic]

Like the Basis watch, the value of the Automatic adapter lies, not in the adapter itself, but in the cloud-based services the company provides. Unlike Basis, though, what has left me with a useless piece of plastic is not the discontinuation of those services, but the device’s reliance on a technology that has gone out of date. The device still ‘works,’ but despite firmware updates, it is no longer able to adapt to changing standards. My first generation adapter is now trash.

The major problem with this first generation adapter is that it relied heavily on two kinds of external service, only one of which the company had control over. There is the cloud-based analytics service (similar to that provided by Intel to support its watch), but the device also relied on a Bluetooth enabled smart phone for GPS (to track location) and SMS (in case of collision). Automatic has now learned their lesson. The most recent generations of their adapters do not rely on smartphones nearly to the same extent (if at all). But the fact that Automatic now has greater control over the device and the services that it makes possible does not change the fact that the value of its adapters lies squarely on the service side. The second the service piece is eliminated, the value of the adapter disappears entirely.

These are only two examples of many. I could also have mentioned Narrative, which produces a life-logging camera but whose service-based business model actually undermined product sales: because the camera only works if accompanied by a cloud-based service subscription, the company recently closed down, only to be opened back up again as a result of an acquisition. And I could have mentioned the Apple Watch (which I love, by the way), which only has value if I resign myself to being locked in to the Apple ecosystem.

So things do not have value anymore. Just as the value of currency is no longer constrained by physical objects that even pretend to have some kind of innate value, so too have our devices ceased to have value in themselves. Our devices merely grant us access to information (and allow information access to us). And to this extent, our things are not things at all. They are relations. Or, as Luciano Floridi would call them, they are ‘second-order technologies’ with the sole function of mediating the relationship of humans to other technologies.