Liquid modernity & learning analytics: On educational data in the 21st century

I was recently interviewed for a (forthcoming) piece in eLearn Magazine.  Below are my responses to a couple of key questions, reproduced here in their entirety.


eLearn: You have a Ph.D. in Philosophy. Could you share with us a little about your history and your work with learning analytics?

TH: What drives me in my capacity as a philosopher and social theorist is an interest in how changes in information technology affect how we think about society, and in the implications that our changing conceptions of society have for the role of education.

I think about how the rapid increase in our access to information as a result of the internet has led to the advent of what Zygmunt Bauman has called ‘liquid modernity.’ In contrast to the world as recently as a half century ago — a world defined by hard and fast divisions of labor, career tracks, class distinctions, power hierarchies, and relationships — the world we live in now is far more fluid: relationships are unstable, changes in job and career are rapid, and the rate of technological change is increasing exponentially. The kind of training that made sense in the 1950s not only no longer works, but renders students ill-prepared to survive, let alone thrive, in the 21st century.

When I think about our liquid modern world, I am comforted to know that this is not the first time we have lived in a world of constant change.  We experienced it in Ancient Greece, and we experienced it during the Renaissance.  In both of these periods, the role of the teacher was incredibly important.  The Sophists were teachers.  So were the Humanists.  For both of these groups, the task of education was to train citizens to survive and thrive under conditions of constant change by cultivating ingenuity, or the ability to mobilize a variety of disparate elements to solve specific problems in the here and now.  For them, education was less about training than it was about cultivating the imagination, and encouraging the development of a kind of practical wisdom that could only be gained through experience.

It is common in analytics circles to use a quote apocryphally attributed to Peter Drucker: “What gets measured gets managed.” Indeed, when we look at the history of analytics, we can find its origins in the modern period immediately following industrialization, when the concern was with optimizing efficiency through standardization and specialization.  Something that has worried me is whether there is a mismatch between analytics – an approach to measurement with roots in early modernity – and the demands of education in the 21st century, when students don’t need to be managed so much as prepared to adapt.

Is learning analytics compatible with 21st century education?

I believe the answer is yes, but it requires us to think carefully about what data mean, and the ways in which data are exposed.  In essence, it means appreciating that analytics do not represent an objective source of truth.  They are not a replacement for human judgment.  Rather, they represent important artifacts that need to be considered along with a variety of other sources of knowledge (including the wisdom that comes from experience) in order to solve particular problems here and now.  In this regard, I am really excited about the kind of reflective approaches to learning analytics being explored and championed by people like John Fritz, Alyssa Wise, Bodong Chen, Simon Buckingham Shum, Andrew Gibson, and others.

eLearn: You wrote in an article for Blackboard Blog that “analytics take place at the intersection of information and human wisdom”. What does it mean to consider humanistic values when dealing with data? Why is it important?

TH: I mean this in two ways.  On the one hand, analytics is nothing more and nothing less than the visual display of quantitative information.  The movement from activity, to capturing that activity in the form of data, to transforming that data into information, to its visual display in the form of tables, charts, and graphs involves human judgment at every stage.  As an interpretive activity, the visual display of quantitative information involves decisions about what is important.  But it is also a rhetorical activity, designed to support particular kinds of decisions in particular kinds of ways.  Analytics is a form of communication.  It is not neutral, and it always embeds particular sets of values.  Hence, it is incumbent upon researchers, practitioners, and educational technology vendors to be thoughtful about the values that they bring to bear on their analytics, and also to be transparent about those values so that they can inform the interpretation of analytics by others.

On the other hand, to the extent that analytics are designed to support human decision-making, they are not a replacement for human judgment.  They are an important form of information, but they still need to be interpreted.  The most effective institutions are those with experienced and prudent practitioners who can carefully consider the data in the context of deep knowledge of, and experience with, students, institutional practices, cultural factors, and other things.

As an artifact, analytics is the result of meaning-making, and it informs meaning-making in turn.

eLearn: Do you think that institutions are already taking advantage of all the benefits that learning analytics can offer? What are their main challenges?

TH: No.  The field of learning analytics is really only six years old. We began with access to data and a sense of inflated expectation.

The initial excitement and sense of inflated expectation actually represents a significant challenge.  In those early days, institutions, organizations, and vendors alike promised and expected a lot.  But no one really knew what they had, or what was reasonable to expect.

Mike Sharkey and I recently wrote a series of pieces for EDUCAUSE and Next Generation Learning on the analytics hype cycle, in which we argued that we have entered the trough of disillusionment and have begun to ascend the slope of enlightenment (see HERE & HERE).  Many early-adopter institutions got excited, invested, and were hurt. We are at an exciting moment right now because institutions, media, and vendors are beginning to develop far more realistic expectations. We know more, and can now start getting stuff done.

Another major challenge is adoption.  It’s easy to buy a technology.  It’s harder to get people to use it, and even harder to get people to use it effectively.  Overcoming the adoption challenge involves strong leadership, good marketing, and excellent faculty development.  It also requires courage.  Change is hard, and initially even the most successful institutions encountered significant flak.  But what we see time and time again is that a well-executed adoption plan that emphasizes value while assuring safety (analytics should never be punitive) very quickly overcomes negativity and sees broad-based success.

Lastly, a major challenge that institutions have is being overwhelmed by the data, and losing sight of the questions and challenges they want to address.  It is important to invest in data access so that you have the material you need to understand and address barriers when they arise, but questions should come first.

The difference between IT and Ed Tech

In a recent interview with John Jantsch for the Duct Tape Marketing podcast, Danny Iny argued that the difference between information and education essentially comes down to responsibility. Information is simply about presentation: here are some things you might want to know; whether, and the extent to which, you come to know them is entirely up to you.

In contrast, education implies that the one presenting information also takes on a degree of responsibility for ensuring that it is learned. Education is a relationship in which teachers and learners agree to share in the responsibility for the success of the learning experience.

This distinction, argues Iny, accounts for why books are so cheap and university is so expensive. Books merely present information, while universities take on a non-trivial amount of responsibility for what is learned, and how well.

(It is a shame that many teachers don’t appreciate this distinction, and their role as educators. I will admit that, when I was teaching, I didn’t fully grasp the extent of my responsibility for the success of my students. I wish I could go back and reteach those courses as an educator instead of as a mere informer.)

If we accept Iny’s distinction between information and education, what are the implications for what we today call educational technologies, or ‘Ed Tech’? As we look to the future of technology designed to meet specific needs of teachers and learners, is educational technology something that we wish to aspire to, or avoid?

Accepting Iny’s definition, I would contend that what we call educational technologies today are not really educational technologies at all. The reason is that neither they nor the vendors that maintain them take specific responsibility for the success or failure of the individual students they touch. Although vendors are quick to take credit for increased rates of student success, taking credit is not the same as taking responsibility. In higher education, the contract is between the student and the institution. If the student does not succeed, responsibility is shared between the two. No technology or ed tech vendor wants to be held accountable for the success of an individual student. In the absence of such a willingness or desire to accept a significant degree of responsibility for the success of particular individuals, what we have are not educational technologies, but rather information technologies designed for use in educational contexts. Like books…but more expensive.

With the advent of AI, however, we are beginning to see an increasing shift as technologies appear to take more and more responsibility for the learning process itself. Adaptive tutoring. Automated nudging. These approaches are designed to do more than present information. Instead, they are designed to promote learning itself. Should we consider these educational technologies? I think so. And yet they are not treated as such, because vendors in these areas are still unwilling (accountability is tricky) or unable (because of resistance from government and institutions) to accept responsibility for individual student outcomes. There is no culpability. That’s what teachers are for. In the absence of a willingness to carry the burden of responsibility for a student’s success, even these sophisticated approaches are still treated as information technologies, when they should actually be considered far more seriously.

As we look to the future, it does seem possible that the information technology platforms deployed in the context of education will, indeed, increasingly become and be considered full educational technologies. But this can only happen if vendors are willing to accept the kind of responsibility that comes with such a designation, and teachers are willing to share responsibility with technologies capable of automating them out of a job. This possible future state of educational technology may or may not be inevitable. It also may or may not be desirable.


RESOURCES

Why the National Student Clearinghouse matters, and why it should matter more

In analytics circles, it is common to quote Peter Drucker: “What gets measured gets managed.” By quantifying our activities, it becomes possible to measure the impact of decisions on important outcomes, and to optimize processes with a view to continual improvement.  With analytics comes a tremendous opportunity to make evidence-based decisions where before there was only anecdote.

But there is a flip side to all this.  Where measurement and management go hand in hand, the measurable can easily limit the kinds of things we think of as important.  Indeed, this is what we have seen in recent years around the term ‘student success.’  As institutions have gained more access to their own institutional data, they have gained tremendous insight into the factors contributing to things like graduation and retention rates.  Graduation and retention rates are easy to measure, because they don’t require access to data outside of institutions, and so retention and graduation have become the de facto metrics for student success.  Because colleges and universities can easily report on these things, they are also easy to incorporate into rankings of educational quality, accreditation standards, and government statistics.

But are institutional retention and graduation rates actually the best measures of student success? Or are they simply the most expedient given limitations on data collection?  What if we had greater visibility into how students flow into and out of institutions?  What if we could reward institutions for effectively preparing their students for success at other institutions, even when they fail to retain large numbers of them through to graduation?  In many ways, limited data access between institutions has led to conceptions of student success and a system of incentives that foster competition rather than cooperation, and that may in fact create obstacles to the success of non-traditional students.  These are the kinds of questions that have recently motivated a bipartisan group of senators to introduce a bill that would lift a ban on the federal collection of employment and graduation outcomes data.

More than 98% of US institutions provide data to, and have access to, the National Student Clearinghouse (NSC).  For years, the NSC has provided a rich source of information about the flow of students between institutions in the U.S., but colleges and universities often struggle to make this information available for easy analysis.  Institutions see the greatest benefit from access to NSC data when they combine it with other institutional data sources, especially the demographic and performance information stored in their student information systems.  This kind of integration is helpful not only for understanding and mitigating barriers to enrollment and progression, but also as institutions work together to understand the kinds of data that are important to them.  As argued in a recent article in Politico, external rating systems have a significant impact on setting institutional priorities and, in so doing, may have the effect of promoting systematic inequity on the basis of class and other factors.  As we see at places like Georgia State University, the more data an institution has at its disposal, and the more power it has to combine multiple data sources, the more it can align its measurement practices with its own values and do what’s best for its students.
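As a rough illustration of what this kind of integration can look like in practice, here is a minimal sketch in Python (pandas). It assumes two hypothetical extracts: NSC subsequent-enrollment records and a demographic/outcome extract from the campus student information system. All file names and column names (student_id, college_code, graduated_here, first_generation, and so on) are illustrative stand-ins, not the actual layout of any NSC or SIS file.

```python
import pandas as pd

# Hypothetical extracts; file and column names are illustrative only.
#   nsc_detail.csv   -- subsequent-enrollment records returned by the NSC
#   sis_students.csv -- demographic/outcome extract from the campus SIS
nsc = pd.read_csv("nsc_detail.csv", parse_dates=["enrollment_begin"])
sis = pd.read_csv("sis_students.csv")

# Keep only enrollments at *other* institutions, and each student's first one.
transfers = nsc[nsc["college_code"] != "OUR_CODE"]
first_transfer = (
    transfers.sort_values("enrollment_begin")
    .drop_duplicates("student_id")[["student_id", "college_name", "enrollment_begin"]]
)

# Join on the student identifier shared by both extracts.
merged = sis.merge(first_transfer, on="student_id", how="left")

# One question the combined data can answer that retention rates alone hide:
# of the students we did not retain to graduation, what share re-enrolled
# somewhere else, broken out by demographic group?
merged["transferred_out"] = merged["college_name"].notna()
not_graduated = merged[~merged["graduated_here"].astype(bool)]
print(not_graduated.groupby("first_generation")["transferred_out"].mean())
```

The point is less the particular code than the shape of the question: once NSC records sit alongside SIS data, ‘students we lost’ can be re-described as ‘students we prepared for success elsewhere,’ which is exactly the kind of reframing that institution-bound graduation metrics make invisible.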

 

Is Facebook making us more adventurous?

When was the last time you heard someone say “get off of Facebook (or Instagram, or Twitter, or …) and DO something!”?

I have a favorite passage from Jean-Paul Sartre’s Nausea:

This is what I thought: for the most banal event to become an adventure, you must (and this is enough) begin to recount it. This is what fools people: a man is always a teller of tales, he sees everything that happens to him through them; and he tries to live his own life as if he were telling a story. But you have to choose: live or tell.

We experience life through the stories we tell, and through the stories of others. It has always been the case. Even before the internet.

So does that mean that social media, which demands the persistent sharing of ‘adventures,’ actually makes our lives richer? Does the compulsion to share more moments as if they were significant events render our lives more event-ful?

I eat the same breakfast every day and never remember it. I take a single picture of my meal, and oatmeal becomes an event.

Research suggests that kids today are doing less. And that is probably right. But as they have more opportunities to narrate their lives, perhaps they are more adventurous.

‘Good marketing’ isn’t good marketing: Why marketers should get out of the language game

Marketers like to make up new terminology and distinctions. This is easy to do because, in the absence of particular domain expertise, they don’t know the ‘right’ language. Reading is hard. Knowledge acquisition takes time. And marketers need to produce. So they create novel constellations of terms. Instead of exploring the world as it is, they invent the world for themselves. They do all this under the banner of ‘branding.’ When potential customers see this, they smirk and call it ‘good marketing.’ ‘Good marketing’ isn’t good marketing.

Should edtech vendors stop selling ‘predictive analytics’? A response to Tim McKay

Pedantic rants about the use and misuse of language are a lot of fun. We all have our soap boxes, and I strongly encourage everyone to hop on theirs from time to time. But when we enter into conversations around the use and misuse of jargon, we must always keep two things in mind: (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world.

Tim McKay recently wrote a blog post called Hey vendors! Stop calling what you’re selling colleges and universities “Predictive Analytics”. In this piece, McKay does two things. First, he tries to strongly distinguish the kind of ‘predictive analytics’ work done by vendors from the kind of ‘real’ prediction that is done within his own native discipline, which is astronomy. Second, on the basis of this distinction, he asserts that what analytics companies call ‘predictive analytics’ is actually not predictive at all. All of this is to imply what he later says explicitly in a tweet to Mike Sharkey: the language of prediction in higher ed analytics is less about helpfully describing the function of a particular tool, and more about marketing.

What I’d like to do here is unpack Tim’s claims and, in so doing, soften the strong antagonism that he erects between vendors and the rest of the academy, an antagonism that is not particularly productive as vendors, higher education institutions, government, and others seek to work together to promote student success, both in the US and abroad.

What is predictive analytics?

A hermeneutic approach

Let’s begin by defining analytics. Analytics is simply the visual display of quantitative information in support of human decision-making. That’s it. In practice, we see the broad category of analytics sub-divided in a wide variety of ways: by domain (e.g., website analytics), by content area (e.g., learning analytics, supply chain analytics), and by intent (e.g., the common distinction between descriptive, predictive, and prescriptive analytics).

Looking specifically at predictive analytics, it is important not to take the term out of context. In the world of analytics, the term ‘predictive’ always refers to intent. Since analytics is always in the service of human decision-making, it always involves factors that are subject to change on the basis of human activity. Hence, ‘predictive analytics’ involves the desire to anticipate and represent some likely future outcome that is subject to change on the basis of human intervention. When considering the term ‘predictive analytics,’ then, it is important not to consider ‘predictive’ in a vacuum, separate from related terms (descriptive and prescriptive) and from the concept of analytics, of which predictive analytics is a type. Pulling a specialized term out of one domain and evaluating it on the terms of another is unfair, and it is only possible under the presumption that language is static and ontologically bound to specific things.

So, when Tim McKay talks about scientific prediction and complains that predictive analytics do not live up to the rigorous standards of the former, he is absolutely right. But he is right because the language of prediction is deployed in two very different ways. In McKay’s view, scientific prediction involves applying one’s knowledge of governing rules to determine some future state of affairs with a high degree of confidence. In contrast, predictive analytics involves creating a mathematical model that anticipates a likely state of affairs based on observable quantitative patterns in a way that makes no claim to understanding how the world works. Scientific prediction, in McKay’s view, involves an effort to anticipate events that cannot be changed. Predictive analytics involves events that can be changed, and in many cases should be changed.

The distinction that McKay notes is indeed incredibly important. But, unlike McKay, I’m not particularly bothered by the existence of this kind of ambiguity in language. I’m also not particularly prone to lay blame for this kind of ambiguity at the feet of marketers, but I’ll address this later.

An epistemological approach

One approach to dealing with the disconnect between scientific prediction and predictive analytics is to admit that there is a degree of ambiguity in the term ‘prediction,’ to adopt a hermeneutic approach, and to be clear that the term is simply being deployed relative to a different set of assumptions. In other words, science and analytics are both right.

Another approach, however, might involve looking more carefully at the term ‘prediction’ itself and reconciling science and analytics by acknowledging that the difference is a matter of degree, and that they are both equally legitimate (and illegitimate) in their respective claims to the term.

McKay is actually really careful in the way that he describes scientific prediction. To paraphrase, scientific prediction involves (1) accurate information about a state of affairs (e.g., the solar system), and (2) an understanding of the rules that govern changes in that state of affairs (e.g., the laws of gravity). As McKay acknowledges, both our measurements and our understanding of the rules of the universe are imperfect and subject to error, but when it comes to something like predicting an eclipse, the information we have is good enough that he is willing to “bet you literally anything in my control that this will happen – my car, my house, my life savings, even my cat. Really. And I’m prepared to settle up on August 22nd.”

Scientific prediction is inductive. It involves the creation of models that adequately describe past states of affairs, an assumption that the future will behave in very much the same way as the past, and some claim about a future event. It’s a systematic way of learning from experience.  McKay implies that explanatory scientific models are the same as the ‘rules that govern,’ but his admission that ‘Newton’s law of gravity is imperfect but quite adequate’ suggests that they are not in fact the same. Our models might adequately approximate the rules, but the rules themselves are eternally out of our reach (a philosophical point that has been borne out time and time again in the history of science).

Scientific prediction involves the creation of a good enough model that, in spite of errors in measurement and assuming that the patterns of the past will persist into the future, we are able to predict something like a solar eclipse with an incredibly high degree of probability. What if I hated eclipses? What if they really ground my gears? If I had enough time, money, and expertise, might it not be possible for me to…

…wait for it…

…build a GIANT LASER and DESTROY THE MOON?!

Based on my experience as an arm-chair science fiction movie buff, I think the answer is yes.

How is this fundamentally different from how predictive analytics works? Predictive analytics involves the creation of mathematical models based on past states of affairs, an admission that models are inherently incomplete and subject to error in measurement, an assumption that the future will behave in ways very similar to the past, and an acknowledgement that predicted future states of affairs might change with human (or extraterrestrial) intervention. Are the models used to power predictive analytics in higher education as accurate as those we have to predict an eclipse? Certainly not. Is the data collected to produce predictive models of student success free from error? Hardly. But these are differences in degree rather than differences in kind. By this logic, both predictive analytics and scientific prediction function in exactly the same way. The only difference is that the social world is far more complex than the astronomical world.

So, if scientific predictions are predictive, then student risk predictions are predictive as well. The latter might not be as accurate as the former, but the process and assumptions are identical for both.
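To make the comparison concrete, here is a minimal sketch (Python with scikit-learn) of the inductive move that typically sits behind a ‘predictive analytics’ product in higher education: fit a model on past student records, assume the near future resembles the past, and output a probability for an outcome that intervention can still change. The file names, feature names, and the retained_next_term outcome are hypothetical, not drawn from any particular vendor’s product.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical extract: one row per past student.
history = pd.read_csv("past_students.csv")
features = ["gpa", "credits_attempted", "lms_logins_per_week", "pell_eligible"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["retained_next_term"], test_size=0.2, random_state=0
)

# The model learns patterns from past terms on the assumption that the near
# future will resemble the past -- the same inductive move described above.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# The output is a probability, not a verdict: a forecast that advisors can act
# on precisely because the outcome remains subject to human intervention.
current = pd.read_csv("current_students.csv")
current["risk_of_not_returning"] = 1 - model.predict_proba(current[features])[:, 1]
print(current[["student_id", "risk_of_not_returning"]].head())
```

Nothing in this sketch claims knowledge of the rules that govern student behavior; like the astronomer’s model, it simply learns from the past, and unlike the eclipse, the forecasted outcome is one we hope to change.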

An admission

It is unfortunate that, even as he grumbles about how the term ‘predictive’ is used in higher education analytics, McKay doesn’t offer a better alternative.

I’ll admit at this point that, with McKay, I don’t love the term ‘predictive.’ I feel like it is either too strong (in that it assumes some kind of god-like vision into the future) or too weak (in that it is used so widely in common speech and across disciplines that it ceases to have a specific meaning). With Nate Silver, I much prefer the term ‘forecast,’ especially in higher education.

In The Signal and the Noise, Silver notes that the terms ‘prediction’ and ‘forecast’ are used differently in different fields of study, and often interchangeably. In seismology, however, the two terms have very specific meanings: “A prediction is a definitive and specific statement about when and where an earthquake will strike: a major earthquake will hit Kyoto, Japan on June 28…whereas a forecast is a probabilistic statement usually over a longer time scale: there is a 60 percent chance of an earthquake in Southern California over the next thirty years.”

There are two things to highlight in Silver’s discussion. First, the term ‘prediction’ is used differently and with varying degrees of rigor depending on the discipline. Second, if we really want to make a distinction, then what we call prediction in higher ed analytics should really be called forecasting. In principle, I like this a lot. When we produce a predictive model of student success, we are forecasting, because we are anticipating an outcome with a known degree of probability. When we take these forecasts and visualize them for the purpose of informing decisions, are we doing ‘forecasting analytics’? ‘forecastive analytics’? ‘forecast analytics’? I can’t actually think of a related term that I’d like to use on a regular basis. Acknowledging that no discipline owns the definition of ‘prediction,’ I’d far rather preserve the term ‘predictive analytics’ in higher education since it both rolls off the tongue, and already has significant momentum within the domain.

Is ‘predictive analytics’ a marketing gimmick?

Those who have read my book will know that I like conceptual history. When we look at the history of the concept of prediction, we find that it has Latin roots and significantly predates the scientific revolution. Quoting Silver again:

The words predict and forecast are largely used interchangeably today, but in Shakespeare’s time, they meant different things.  A prediction was what a soothsayer told you […]

The term forecast came from English’s Germanic roots, unlike predict which is from Latin. Forecasting reflected the new Protestant worldliness rather than the otherworldliness of the Holy Roman Empire. Making a forecast typically implied planning under conditions of uncertainty. It suggested having prudence, wisdom, and industriousness, more like the way we currently use the word foresight.

The term ‘prediction’ has a long and varied history. Its meaning is slippery. But what I like about Silver’s summary of the term’s origins is that it essentially takes it off the table for everyone except those who presume a kind of privileged access to the divine. In other words, using the language of prediction might actually be pretty arrogant, regardless of your field of study, since it presumes both complete information and an accurate understanding of the rules that govern the universe. Prediction is an activity reserved for gods, not men.

Digressions aside, the greatest issue that I have with McKay’s piece is that it uses the term ‘prediction’ as a site of antagonism between vendors and the academy. If we bracket all that has been said and, for a second, accept McKay’s strong definition of ‘prediction,’ it is easy to demonstrate that vendors are not the only ones misusing the term ‘predictive analytics’ in higher education. Siemens and Baker deploy the term in their preface to the Cambridge Handbook of the Learning Sciences. Manuela Ekowo and Iris Palmer from New America comfortably make use of the term in their recent policy paper on The Promise and Peril of Predictive Analytics in Higher Education. EDUCAUSE actively encourages the adoption of the term ‘predictive analytics’ through a large number of publications, including the Sept/Oct 2016 edition of the EDUCAUSE Review, which was dedicated entirely to the topic. The term appears in the ‘Journal of Learning Analytics,’ and is used in the first edition of the Handbook of Learning Analytics published by the Society for Learning Analytics Research (SoLAR). University administrators use the term. Government officials use the term. The examples are too numerous to cite (a search for “predictive analytics in higher education” in Google Scholar yields about 58,700 results). If we want to establish the true definition of ‘prediction’ and judge every use by this gold standard, then it is not simply educational technology vendors who should be charged with misuse. If there is a problem with how people are using the term, it is not a vendor problem: it is a problem of language, and of culture.

I began this essay by stating that we need to keep two things in mind when we enter into conversations about conceptual distinctions:  (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world.  By now, I hope that I have demonstrated that the term ‘prediction’ is used in a wide variety of ways depending on context and intention.  That’s not a bad thing.  That’s just language.  A serious consequence of McKay’s discussion of how ed tech vendors use the term ‘predictive analytics’ is that it tacitly pits vendors against the interests of higher education — and of students — more generally.  Not only is such a sweeping implication unfair, but it is also unproductive.  It is the shared task of colleges, universities, vendors, government, not-for-profits, and others to work together in support of the success of students in the 21st century.  The language of student success is coalescing in such a way as to make possible a common vision and concerted action around a set of shared goals.  The term ‘predictive analytics’ is just one of many evolving terms that make up our contemporary student success vocabulary, and it is evidence of an important paradigm shift in how we view higher education in the US.  Instead of quibbling about the ‘right’ use of language, we should recognize that language is shaped by values, and work together to ensure that the words we use reflect the kinds of outcomes we collectively wish to bring about.

How ed tech marketers are bad for higher education

A lot of ed tech marketers are really bad. They are probably not bad at their ‘jobs’ — they may or may not be bad at generating leads, creating well-designed sales material, creating brand visibility. But they are bad for higher education and student success.

Bad ed tech marketers are noisy. They use the same message as the ‘competition.’ They hollow out language through the use and abuse of buzz words. They praise product features as if they were innovative when everyone else is selling products that are basically the same. They take credit for the success of ‘mutant’ customers who — because they have the right people and processes in place — would have been successful regardless of their technology investments. Bad marketers make purchasing decisions complex, and they obscure the fact that no product is a magic bullet. They pretend that their tool will catalyze and align the people and processes necessary to make an impact. Bad marketers encourage institutions to think about product first, and to defer important conversations about institutional goals, priorities, values, governance, and process. Bad marketers are bad for institutions of higher education. Bad marketers are bad for students.

Good marketing in educational technology is about telling stories worth spreading. A familiar mantra. But what is a story worth spreading? It is a story that is honest, and told with the desire to make higher education better. It is NOT about selling product. I strongly subscribe to the Stoic view that if you do the right thing, rewards will naturally follow. If you focus on short-term rewards, you will not be successful, especially not in the long run.

Here are three characteristics of educational technology stories worth telling:

  1. Giving credit where credit is due – it is wrong for an educational technology company (or funder, or association, or government) to take credit for the success of an institution. Case studies should always be created with a view to accurately documenting the steps taken by an institution to see results. This story might feature a particular product as a necessary condition of success, but it should also highlight those high impact practices that could be replicated, adapted, and scaled in other contexts regardless of the technology used. It is the task of the marketer to make higher education better by acting as a servant in promoting the people and institutions that are making a real impact.
  2. Refusing to lie with numbers – there was a time in the not-so-distant past when educational technology companies suffered from the irony of selling analytics products without any evidence of their impact. Today, those same companies suffer from another terrible irony: using bad data science to sell data products. Good data science doesn’t always result in the sexiest stories, even if its results are significant. It is a lazy marketer who twists the numbers to make headlines. It is the task of a good marketer to understand and communicate the significance of small victories, and to popularize the insights that make data scientists excited but that might sound trivial and obscure to the general public without the right perspective.
  3. Expressing the possible – a good marketer should know their products, and they should know their users. They should be empathetic in appreciating the challenges facing students, instructors, and administrators, and work tirelessly as a partner in change. A good marketer does not stand at the periphery.  They get involved because they ARE involved.  A good marketer moves beyond product features and competitive positioning, and toward the articulation of concrete and specific ways of using a technology to meet the needs of students, teachers, and administrators in a constantly changing world.

Suffice it to say, good marketing is hard to do. It requires domain expertise and empathy. It is not formulaic. Good educational technology marketing involves telling authentic stories that make education better. It is about telling stories that NEED to be told.

If a marketer can’t say something IMPORTANT, they shouldn’t say anything at all.