‘Good marketing’ isn’t good marketing: Why marketers should get out of the language game

Marketers like to make up new terminology and distinctions. This is easy to do because, in the absence of particular domain expertise, they don’t know the ‘right’ language. Reading is hard. Knowledge acquisition takes time. And marketers need to produce. So they create novel constellations of terms. Instead of exploring the world as it is, they invent the world for themselves. They do all this under the banner of ‘branding.’ When potential customers see this, they smirk and call it ‘good marketing.’ ‘Good marketing’ isn’t good marketing.

Should edtech vendors stop selling ‘predictive analytics’? A response to Tim McKay

Pedantic rants about the use and misuse of language are a lot of fun. We all have our soapboxes, and I strongly encourage everyone to hop on theirs from time to time. But when we enter into conversations around the use and misuse of jargon, we must always keep two things in mind: (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world.

Tim McKay recently wrote a blog post called Hey vendors! Stop calling what you’re selling colleges and universities “Predictive Analytics”. In this piece, McKay does two things. First, he tries to strongly distinguish the kind of ‘predictive analytics’ work done by vendors from the kind of ‘real’ prediction that is done within his own native discipline, which is astronomy. Second, on the basis of this distinction, he asserts that what analytics companies are calling ‘predictive analytics’ is not actually predictive at all. All of this is to imply what he later says explicitly in a tweet to Mike Sharkey: the language of prediction in higher ed analytics is less about helpfully describing the function of a particular tool, and more about marketing.

What I’d like to do here is unpack Tim’s claims and, in so doing, soften the strong antagonism that he erects between vendors and the rest of the academy, an antagonism that is not particularly productive as vendors, higher education institutions, governments, and others seek to work together to promote student success, both in the US and abroad.

What is predictive analytics?

A hermeneutic approach

Let’s begin by defining analytics. Analytics is simply the visual display of quantitative information in support of human decision-making. That’s it. In practice, we see the broad category of analytics subdivided in a wide variety of ways: by domain (e.g., website analytics), by content area (e.g., learning analytics, supply chain analytics), and by intent (e.g., the common distinction between descriptive, predictive, and prescriptive analytics).

Looking specifically at predictive analytics, it is important not to take the term out of context. In the world of analytics, the term ‘predictive’ always refers to intent. Since analytics is always in the service of human decision-making, it always involves factors that are subject to change on the basis of human activity. Hence, ‘predictive analytics’ involves the desire to anticipate and represent some likely future outcome that is subject to change on the basis of human intervention. When considering the term ‘predictive analytics,’ then, it is important not to consider ‘predictive’ in a vacuum, separate from related terms (descriptive and prescriptive) and the concept of analytics, of which predictive analytics is a type. Pulling a specialized term out of one domain and evaluating it on the terms of another is unfair, and is only possible under the presumption that language is static and ontologically bound to specific things.

So, when Tim McKay talks about scientific prediction and complains that predictive analytics does not live up to its rigorous standards, he is absolutely right. But he is right because the language of prediction is deployed in two very different ways. In McKay’s view, scientific prediction involves applying one’s knowledge of governing rules to determine some future state of affairs with a high degree of confidence. In contrast, predictive analytics involves creating a mathematical model that anticipates a likely state of affairs based on observable quantitative patterns, making no claim to understanding how the world works. Scientific prediction, in McKay’s view, involves an effort to anticipate events that cannot be changed. Predictive analytics involves events that can be changed, and in many cases should be changed.

The distinction that McKay notes is indeed incredibly important. But, unlike McKay, I’m not particularly bothered by the existence of this kind of ambiguity in language. I’m also not particularly prone to lay blame for this kind of ambiguity at the feet of marketers, but I’ll address this later.

An epistemological approach

One approach to dealing with the disconnect between scientific prediction and predictive analytics is to admit that there is a degree of ambiguity in the term ‘prediction,’ to adopt a hermeneutic approach, and to be clear that the term is simply being deployed relative to a different set of assumptions. In other words, science and analytics are both right.

Another approach, however, might involve looking more carefully at the term ‘prediction’ itself and reconciling science and analytics by acknowledging that the difference is a matter of degree, and that they are both equally legitimate (and illegitimate) in their respective claims to the term.

McKay is actually really careful in the way that he describes scientific prediction. To paraphrase, scientific prediction involves (1) accurate information about a state of affairs (e.g., the solar system), and (2) an understanding of the rules that govern changes in that state of affairs (e.g., the laws of gravity). As McKay acknowledges, both our measurements and our understanding of the rules of the universe are imperfect and subject to error, but when it comes to something like predicting an eclipse, the information we have is good enough that he is willing to “bet you literally anything in my control that this will happen – my car, my house, my life savings, even my cat. Really. And I’m prepared to settle up on August 22nd.”

Scientific prediction is inductive. It involves the creation of models that adequately describe past states of affairs, an assumption that the future will behave in very much the same way as the past, and some claim about a future event. It’s a systematic way of learning from experience. McKay implies that explanatory scientific models are the same as the ‘rules that govern,’ but his admission that ‘Newton’s law of gravity is imperfect but quite adequate’ suggests that they are not in fact the same. Our models might adequately approximate the rules, but the rules themselves are eternally out of our reach (a philosophical point that has been borne out time and time again in the history of science).
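To make this inductive pattern concrete, here is a toy sketch in Python. The observation times are fabricated purely for illustration: we fit a simple model to past measurements of a periodic event, assume the pattern persists, and project it forward.

```python
# Inductive prediction in miniature: learn a pattern from past
# observations, assume it persists, and extrapolate.
# The observation times below are fabricated for illustration.
import numpy as np

# Days on which a roughly periodic event was observed (noisy measurements).
observed = np.array([0.0, 29.5, 59.1, 88.6, 118.2])
cycle_index = np.arange(len(observed))

# 'Learn from experience': a least-squares fit of a linear recurrence.
slope, intercept = np.polyfit(cycle_index, observed, deg=1)

# The prediction is just the fitted pattern projected forward in time.
next_event = slope * len(observed) + intercept
print(f"Estimated period: {slope:.2f} days; next event near day {next_event:.1f}")
```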

Scientific prediction involves the creation of a good enough model that, in spite of errors in measurement and assuming that the patterns of the past will persist into the future, we are able to predict something like a solar eclipse with an incredibly high degree of probability. What if I hated eclipses? What if they really ground my gears? If I had enough time, money, and expertise, might it not be possible for me to…

…wait for it…

…build a GIANT LASER and DESTROY THE MOON?!

Based on my experience as an armchair science fiction movie buff, I think the answer is yes.

How is this fundamentally different from how predictive analytics works? Predictive analytics involves the creation of mathematical models based on past states of affairs, an admission that models are inherently incomplete and subject to error in measurement, an assumption that the future will behave in ways very similar to the past, and an acknowledgement that predicted future states of affairs might change with human (or extraterrestrial) intervention. Are the models used to power predictive analytics in higher education as accurate as those we have to predict a solar eclipse? Certainly not. Is the data collected to produce predictive models of student success free from error? Hardly. But these are differences in degree rather than differences in the thing itself. By this logic, both predictive analytics and scientific prediction function in the exact same way. The only difference is that the social world is far more complex than the astronomical world.
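To make the parallel concrete, here is a minimal sketch of that process in Python. The data is synthetic and the features (LMS logins, assignments submitted) are hypothetical, so this illustrates the general pattern, not any vendor’s actual model: fit a model to past students, assume the future resembles the past, and emit a probability for a new student.

```python
# A minimal sketch of the 'predictive analytics' process described above:
# fit a model to past observations, assume the future resembles the past,
# and make a probabilistic claim about a new case. Data and features are
# synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Past states of affairs: hypothetical per-student counts of LMS logins
# and submitted assignments, with a known outcome (1 = completed course).
n = 500
logins = rng.poisson(lam=30, size=n)
submissions = rng.poisson(lam=8, size=n)
# Outcomes loosely driven by engagement, plus noise we cannot model away.
score = 0.08 * logins + 0.3 * submissions + rng.normal(0, 1.5, size=n)
completed = (score > np.median(score)).astype(int)

X = np.column_stack([logins, submissions])
model = LogisticRegression().fit(X, completed)

# The 'prediction' for a new student is a probability, conditional on the
# assumption that past patterns persist -- and on the hope that someone
# intervenes when that probability is low.
new_student = np.array([[12, 3]])  # few logins, few submissions
p = model.predict_proba(new_student)[0, 1]
print(f"Estimated probability of completion: {p:.2f}")
```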

So, if scientific predictions are predictive, then student risk predictions are predictive as well. The latter might not be as accurate as the former, but the process and assumptions are identical for both.

An admission

It is unfortunate that, even as he grumbles about how the term ‘predictive’ is used in higher education analytics, McKay doesn’t offer a better alternative.

I’ll admit at this point that, with McKay, I don’t love the term ‘predictive.’ I feel like it is either too strong (in that it assumes some kind of god-like vision into the future) or too weak (in that it is used so widely in common speech and across disciplines that it ceases to have a specific meaning). With Nate Silver, I much prefer the term ‘forecast,’ especially in higher education.

In The Signal and the Noise, Silver notes that the terms ‘prediction’ and ‘forecast’ are used differently in different fields of study, and often interchangeably. In seismology, however, the two terms have very specific meanings: “A prediction is a definitive and specific statement about when and where an earthquake will strike: a major earthquake will hit Kyoto, Japan on June 28…whereas a forecast is a probabilistic statement usually over a longer time scale: there is a 60 percent chance of an earthquake in Southern California over the next thirty years.”

There are two things to highlight in Silver’s discussion. First, the term ‘prediction’ is used differently and with varying degrees of rigor depending on the discipline. Second, if we really want to make a distinction, then what we call prediction in higher ed analytics should really be called forecasting. In principle, I like this a lot. When we produce a predictive model of student success, we are forecasting, because we are anticipating an outcome with a known degree of probability. When we take these forecasts and visualize them for the purpose of informing decisions, are we doing ‘forecasting analytics’? ‘forecastive analytics’? ‘forecast analytics’? I can’t actually think of a related term that I’d like to use on a regular basis. Acknowledging that no discipline owns the definition of ‘prediction,’ I’d far rather preserve the term ‘predictive analytics’ in higher education, since it both rolls off the tongue and already has significant momentum within the domain.
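As an aside, the conventions of common data science tooling already mirror Silver’s distinction. In scikit-learn, for example, a classifier’s predict method returns a definitive statement about a case, while predict_proba returns the probabilistic statement we would properly call a forecast. A small sketch with synthetic data:

```python
# Silver's distinction in code: a definitive statement about an outcome
# versus a probabilistic one. Synthetic data, purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
case = X[:1]  # a single observation

# 'Prediction': a definitive, specific claim about the outcome.
print("prediction:", model.predict(case)[0])
# 'Forecast': a probability attached to that same outcome.
print("forecast:", round(model.predict_proba(case)[0, 1], 2))
```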

Is ‘predictive analytics’ a marketing gimmick?

Those who have read my book will know that I like conceptual history. When we look at the history of the concept of prediction, we find that it has Latin roots and significantly predates the scientific revolution. Quoting Silver again:

The words predict and forecast are largely used interchangeably today, but in Shakespeare’s time, they meant different things.  A prediction was what a soothsayer told you […]

The term forecast came from English’s Germanic roots, unlike predict, which is from Latin. Forecasting reflected the new Protestant worldliness rather than the otherworldliness of the Holy Roman Empire. Making a forecast typically implied planning under conditions of uncertainty. It suggested having prudence, wisdom, and industriousness, more like the way we currently use the word foresight.

The term ‘prediction’ has a long and varied history. Its meaning is slippery. But what I like about Silver’s summary of the term’s origins is that it essentially takes it off the table for everyone except those who presume a kind of privileged access to the divine. In other words, using the language of prediction might actually be pretty arrogant, regardless of your field of study, since it presumes both complete information and an accurate understanding of the rules that govern the universe. Prediction is an activity reserved for gods, not men.

Digressions aside, the greatest issue that I have with McKay’s piece is that it uses the term ‘prediction’ as a site of antagonism between vendors and the academy. If we bracket all that has been said, and for a second accept McKay’s strong definition of ‘prediction,’ it is easy to demonstrate that vendors are not the only ones misusing the term ‘predictive analytics’ in higher education. Siemens and Baker deploy the term in their preface to the Cambridge Handbook of the Learning Sciences. Manuela Ekowo and Iris Palmer from New America comfortably make use of the term in their recent policy paper on The Promise and Peril of Predictive Analytics in Higher Education. EDUCAUSE actively encourages the adoption of the term ‘predictive analytics’ through a large number of publications, including the Sept/Oct 2016 edition of the EDUCAUSE Review, which was dedicated entirely to the topic. The term appears in the Journal of Learning Analytics, and is used in the first edition of the Handbook of Learning Analytics published by the Society for Learning Analytics Research (SoLAR). University administrators use the term. Government officials use the term. The examples are too numerous to cite (a search for “predictive analytics in higher education” in Google Scholar yields about 58,700 results). If we want to establish the true definition of ‘prediction’ and judge every use by this gold standard, then it is not simply educational technology vendors who should be charged with misuse. If there is a problem with how people are using the term, it is not a vendor problem: it is a problem of language, and of culture.

I began this essay by stating that we need to keep two things in mind when we enter into conversations about conceptual distinctions: (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world. By now, I hope that I have demonstrated that the term ‘prediction’ is used in a wide variety of ways depending on context and intention. That’s not a bad thing. That’s just language. A serious consequence of McKay’s discussion of how ed tech vendors use the term ‘predictive analytics’ is that it tacitly pits vendors against the interests of higher education — and of students — more generally. Not only is such a sweeping implication unfair, but it is also unproductive. It is the shared task of colleges, universities, vendors, government, not-for-profits, and others to work together in support of the success of students in the 21st century. The language of student success is coalescing in such a way as to make possible a common vision and concerted action around a set of shared goals. The term ‘predictive analytics’ is just one of many evolving terms that make up our contemporary student success vocabulary, and is evidence of an important paradigm shift in how we view higher education in the US. Instead of quibbling about the ‘right’ use of language, we should instead recognize that language is shaped by values, and work together to ensure that the words we use reflect the kinds of outcomes we collectively wish to bring about.

How ed tech marketers are bad for higher education

A lot of ed tech marketers are really bad. They are probably not bad at their ‘jobs’ — they may or may not be good at generating leads, creating well-designed sales material, and building brand visibility. But they are bad for higher education and student success.

Bad ed tech marketers are noisy. They use the same message as the ‘competition.’ They hollow out language through the use and abuse of buzzwords. They praise product features as if they were innovative when everyone else is selling products that are basically the same. They take credit for the success of ‘mutant’ customers who — because they have the right people and processes in place — would have been successful regardless of their technology investments. Bad marketers make purchasing decisions complex, and they obscure the fact that no product is a magic bullet. They pretend that their tool will catalyze and align the people and processes necessary to make an impact. Bad marketers encourage institutions to think about product first, and to defer important conversations about institutional goals, priorities, values, governance, and process. Bad marketers are bad for institutions of higher education. Bad marketers are bad for students.

Good marketing in educational technology is about telling stories worth spreading. A familiar mantra. But what is a story worth spreading? It is a story that is honest, and told with the desire to make higher education better. It is NOT about selling product. I strongly subscribe to the stoic view that if you do the right thing, rewards will naturally follow. If you focus on short-term rewards, you will not be successful, especially not in the long run.

Here are three characteristics of educational technology stories worth telling:

  1. Giving credit where credit is due – it is wrong for an educational technology company (or funder, or association, or government) to take credit for the success of an institution. Case studies should always be created with a view to accurately documenting the steps taken by an institution to see results. This story might feature a particular product as a necessary condition of success, but it should also highlight those high impact practices that could be replicated, adapted, and scaled in other contexts regardless of the technology used. It is the task of the marketer to make higher education better by acting as a servant in promoting the people and institutions that are making a real impact.
  2. Refusing to lie with numbers – there was a time in the not-so-distant past when educational technology companies suffered from the irony of selling analytics products without any evidence of their impact. Today, those same companies suffer from another terrible irony: using bad data science to sell data products. Good data science doesn’t always result in the sexiest stories, even if its results are significant. It is a lazy marketer who twists the numbers to make headlines. It is the task of a good marketer to understand and communicate the significance of small victories, and to popularize the insights that make data scientists excited but that might sound trivial and obscure to the general public without the right perspective.
  3. Expressing the possible – A good marketer should know their products, and they should know their users. They should be empathetic in appreciating the challenges facing students, instructors, and administrators, and work tirelessly as a partner in change. A good marketer does not stand at the periphery. They get involved because they ARE involved. A good marketer moves beyond product features and competitive positioning, and toward the articulation of concrete and specific ways of using a technology to meet the needs of students, teachers, and administrators in a constantly changing world.

Suffice it to say, good marketing is hard to do. It requires domain expertise and empathy. It is not formulaic. Good educational technology marketing involves telling authentic stories that make education better. It is about telling stories that NEED to be told.

If a marketer can’t say something IMPORTANT, they shouldn’t say anything at all.

The Politics of Oversharing: The Circle by Dave Eggers

The Circle by Dave Eggers tells the story of a young woman working to navigate a fictional Google-type corporation with its sights set on achieving universal surveillance. What the company hopes to achieve is a panopticon vision of society in which no one has any secrets from anyone else. Everything that everyone does is recorded, streamed, archived, and made available for anyone and everyone to see.

The ‘Three Wise Men’ who founded this fictional company — called ‘The Circle’ — represent three perspectives that we see guiding Big Data investments today:

(1) The gleeful possibilist – unconcerned with the consequences of the technologies that are created, this kind of person is simply interested in exploring what is possible. They wash their hands of ethical or long-term implications, since those hinge on a kind of widespread adoption that has nothing to do with innovation in itself.

(2) The businessman – like the gleeful possibilist, the businessman washes their hands of ethical consequences since, driven by a desire to grow the business, the success of the products and services created hinges on the desires of the masses.

(3) The utopian – this is the most thoughtful of the three perspectives, in that it is the only one to accept responsibility for the future. With respect to issues of big data and surveillance, it sees privacy as a problem to be solved. With privacy come secrets and the possibility of lies. With secrets and lies comes conflict. Universal surveillance and absolute transparency mean complete accountability, technology-mediated empathy, and freedom from fear.

The bulk of Eggers’s work is spent describing a variety of social surveillance technologies and the burdens they place on users like the protagonist, Mae. In many ways, this world is described in compelling and favorable terms. The reader is not disturbed, but actually drawn into agreement with the dominant utopian ideology into which Mae is progressively indoctrinated. Of course, there are naysayers, dissenters, and outsiders, but in this world they are the minority. In a world where everyone’s opinion matters, democracy is absolute. If democracy is a good thing, absolute democracy is the best thing.

As one would expect from a book like this, as the circle nears completion, Eggers uses the opportunity to explore several dystopian themes, including the possibility of a future tyrannical leader making use of truth-telling technology to systematically manipulate public perception. But Eggers is really good at ambiguity. He does an excellent job of using his narrative to explore important themes and possibilities while at the same time withholding judgement. One does not get from Eggers the sense that the trajectory of our surveillance technologies and big data policies is good or bad. What is strongly affirmed, however, is the fact that we are responsible. The purpose of The Circle is to force its readers to reflect on the consequences of their behavior, and to consider their own complicity in shaping the future. The Circle is effective in underscoring the importance of making thoughtful decisions about how we use technology instead of being passive users in a world made of us rather than by us and for us.

Five strategies for succeeding with data in higher education

What important steps can you take to increase the success of your analytics project on campus?

At the 2017 Blackboard Analytics Symposium, A. Michael Berman, VP for Technology & Innovation at CSU Channel Islands and Chief Innovation Officer for California State University, took a different approach to answering this question. Instead of asking about success, he asked about failure. What would project management look like if we set out to fail from the very beginning? As it turns out, the result looks pretty similar to a lot of well-meaning data projects we see today.


What important lessons can we learn from failure?

#1. Set clear goals

Setting clear goals is not easy, but it is an important first step to successfully completing any project. If you don’t know what you are setting out to do, you won’t know when you are done, and you won’t know if you succeeded. Setting clear goals is hard work, not only because it requires careful thinking, but also because it involves communication and consensus. Clear communication of well-defined goals creates alignment, but it also invites disagreement as different stakeholders want to achieve different things. Goal setting is a group exercise that involves bringing key stakeholders together to agree on a set of shared outcomes so you can all succeed together.

#2. Gain executive support

Garnering the support of executive champions is a crucial and often overlooked step. All too often, academic technology units are prevented from scaling otherwise innovative practices simply because no one in leadership knows about them. Support from leadership means access to resources. It means advocacy. It also means accountability.

#3. Think beyond the tech

IT projects are never about technology. They are always about solving specific problems for particular groups of people. For the most part, the people that are served by an analytics project have no interest in what “ETL” means, or what a “star schema” is. All they know is that they lack access to important information. What many IT professionals fail to appreciate is the fact that their language is foreign to a lot of people, and that using overly technical language often serves to compound the very problems they are trying to solve. Access to information without understanding is worse than no access at all.

#4. Maximize communication

Communication is important in two respects. It is important to the health of your analytics project because it ensures alignment and fosters momentum around the clearly defined goals that justified the project in the first place. But it is also important once the project is complete. The completion of an analytics project marks the beginning, not the end. If you wait until the project is complete before engaging your end users, you have an uphill battle ahead of you that is fraught with squandered opportunity. With a goal of ensuring widespread adoption once the analytics project is completed, it’s important to share information, raise awareness, and start training well in advance so that your users are ready and excited to dig in and start seeing results as soon as possible.

#5. Celebrate success

It’s easy to think of celebration as a waste of company resources. People come to work to do a job. They get paid for doing their job. What other reward do people need? But IT projects, and analytics projects in particular, are never ‘done.’ And they are never about IT or analytics. They are about people. Celebration needs to be built into a project in order to punctuate a change in state, and propel the project from implementation into adoption. In the absence of this kind of punctuation, projects never really feel complete, and a lack of closure inhibits the exact kind of excitement that is crucial to achieve widespread adoption.


Originally posted to blog.blackboard.com

3 strategies for dealing with public speaking anxiety: Lessons from a pro athlete

Fans often ask my wife, pro equestrian Elisa Wallace, if she still gets nervous. Her answer is always: yes.

Even at the highest levels of equestrian competition, it is not uncommon for athletes to involuntarily evacuate the contents of their stomach before an event. With pressure coming from large numbers of spectators (in person and on television), the hopes and dreams of fans and country, and — most importantly — the internal desire to do justice to the potential of her equine partner, a spike in adrenaline is impossible to avoid.

That involuntary physiological response is completely natural. It is a function of the fact that Elisa cares. If it didn’t happen, something would be wrong.

I’ve been thinking a lot about this adrenaline response in my own professional life. Although still relatively new to speaking in front of very large audiences, I’ve been public speaking for a long time now, both as a teacher and as a scholar. It seems that no matter how much experience I get, I can’t overcome the experience of an involuntary adrenaline response prior to taking a stage.

This is something that I worry about, since I know that this response has an impact on my ability to think clearly, and to recall even the most well-practiced talk tracks. I worry about a quivering voice. I worry about fumbling about on stage, dropping things, and losing my train of thought.

Recently, I asked my wife how she deals with this kind of pre-performance stress response. She gave me three pieces of advice, based on her own experience as a professional athlete:

1. Embrace it.

The only reason you experience an adrenaline response prior to engaging in a public activity (or any activity for that matter) is that you care. That’s a good thing. The worst thing you can do is to stress out about stressing out. Instead, expect your adrenaline to spike and embrace it as an important part of your process. By simply reinterpreting this physiological response as working for you instead of against you, you can transform a hindrance into a helper.

2. You deserve to be there.

A lot of our anxiety comes from insecurity. For anyone with a realistic self-concept, it can be difficult to overcome ‘impostor syndrome.’ Whether you are a professional athlete or a public speaker, remember that you have worked hard, and the only reason you are there is because others want to see you there. You are there because you are already respected, and others already value your opinion. You have nothing to prove. Just do what you came to do.

3. Get pumped.

(1) and (2) are about mindset. This point is about how to get there. Many professional athletes have mastered the art of creating portable fortresses of solitude. They put their headphones on, listen to music, and tune out. Elisa has a ‘pump up’ playlist on her phone. Prior to going out on cross-country (the most thrilling and dangerous of her three phases), listening to music is helpful in two ways. It simultaneously (and paradoxically) helps you to tune out extraneous information so you can focus on the task at hand, and distracts you (in a productive way) from the importance of what you are about to do. Her preference, of course, is music with driving bass lines, which research suggests has the added benefit of boosting confidence.


Originally published to horsehubby.com

Ethics and Predictive Analytics in Higher Education

In March 2017, Manuela Ekowo and Iris Palmer co-authored a report for New America that offered five guiding practices for the ethical use of predictive analytics in higher education. This kind of work is really important. It acknowledges that, to the extent that analytics in higher education is meant to have an impact on human behavior, it is a fundamentally ethical enterprise.

Work like the recent New America report is not merely about educational data science.  It is an important facet of educational data science itself.

Are we doing ethics well?

But ethics is hard.  Ethics is not about generating a list of commandments.  It is not about cataloging common opinion.  It is about carefully establishing a set of principles on the basis of which it becomes possible to create a coherent system of knowledge and make consistent judgements in specific situations.

Unfortunately, most work on the ethics of analytics in higher education lacks this kind of rigor.  Instead, ethical frameworks are the result of a process of pooling opinions in such a way as to strike a balance between the needs of a large number of stakeholders including students, institutions, the economy, the law, and public opinion.  To call this approach ethics is to confuse the good with the expedient.

Where should we begin?

An ethical system worthy of the name needs to begin with a strong conception of the Good.  Whether stated or implied, the most common paradigm is essentially utilitarian, concerned with maximizing benefit for the greatest number of people.  The problem with this approach, however, is that it can only ever concern perceived benefit.  People are famously bad at knowing what is good for them.

A benefit of this utilitarian approach, of course, is that it allows us to avoid huge epistemological and metaphysical minefields.  In the absence of true knowledge of the good, we can lean on the wisdom of crowds.  By pooling information about perceived utility, so the theory goes, we can approximate something like the good, or at least achieve enough consensus to mitigate conflict as much as possible.

But what if we were more audacious?  What if our starting point was not the pragmatic desire to reduce conflict, but rather an interest in fostering the fullest expression of our potential as humans?  As it specifically pertains to the domain of educational data analytics, what if we abandoned that instrumental view of student success as degree completion?  What if we began with the question of what it means to be human, and wrestled with the ways in which the role of ‘student’ is compatible and incompatible with that humanity?

Humane data ethics in action

Let’s consider one example of how taking human nature seriously affects how we think about analytics technologies. As the Italian humanist Pier Paolo Vergerio observed, all education is autodidactic. When we think about teaching and learning, the teacher has zero ability to confer knowledge. It is always the learner’s task to acquire it. True, it is possible to train humans just as we can train all manner of other creatures (operant and classical forms of conditioning are incredibly powerful), but this is not education. Education is a uniquely human capability whereby we acquire knowledge (with the aim of living life in accord with the Good). Teachers do not educate. Teachers do not ‘teach.’ Rather, it is the goal of the teacher to establish the context in which students might become actively engaged as learners.

What does this mean for Education?  Viewed from this perspective, it is incumbent on us as educators to create contexts that bring students to an awareness of themselves as learners in the fullest sense of the word.  It is crucial that we develop technologies that highlight the student’s role as autodidact.  Our technologies need to help bring students to self-knowledge at the same time as they create robust contexts for knowledge acquisition (in addition to providing opportunities for exploration, discovery, experimentation, imagination and other humane attributes).

It is in large part this humanistic perspective that has informed my excitement about student-facing dashboards.  As folks like John Fritz have talked about, one of the great things about putting data in the hands of students is that it furthers institutional goals like graduation and retention as a function of promoting personal responsibility and self-regulated learning.  In other words, by using analytics first and foremost with an interest in helping students to understand and embrace themselves as learners in the fullest sense of the term, we cultivate virtues that translate into degree completion, but also career success and life satisfaction.

In my opinion, analytics (predictive or otherwise) are most powerful when employed with a view to maximizing self-knowledge and the fullest expression of human capability, rather than as a way to constrain human behavior to achieve institutional goals. I am confident that such a virtuous and humanistic approach to educational data analytics will also lead to institutional gains (as indeed we have seen at places like Georgia State University), but I worry that where values and technologies are not aligned, both human nature and institutional outcomes are bound to suffer.