On Leadership and the Liberal Arts

Among the greatest contributions of Plato was his recognition that the training of true leaders requires a broad education rather than a narrow and strictly practical one.

Plato’s ‘Academy,’ which some consider the first European University, was founded with the understanding that a narrow course of study with a focus on politics alone creates opportunists and demagogues rather than statesmen who act courageously in accordance with true human virtue.

I am formally educated in the liberal arts. But as I have aged, gotten married, embarked on a career, and bought a farm, the amount of time I have dedicated to the liberal arts has all but disappeared.

A farm represents a constant reminder that you can’t just think about improvement. You can’t be narrowly focused on things like progress and innovation. Each day, the things that were once new are getting older and falling apart (like fencing). And if you neglect the old for the sake of the new, at some point you’ll end up with ornament in the absence of foundation, and everything will crumble.

A liberal arts education is never finished.

As I work to cultivate a spirit of true leadership, I am reminded of Plato and how a narrow focus on leadership for its own sake will, ironically, not produce the qualities of a leader. “All leaders are readers,” as they say.

So I’m going back to my roots, and reacquainting myself with the philosophical and literary traditions that were so important to me for such a long time. I have forgotten a lot since I first read Plato. But I have also gained much experience and perspective.

Where I once studied the tradition as a thing in itself, I now find that I am reading it through the lens of virtue. Where I previously asked questions like “what does it mean,” and “how does it relate to the broader history of ideas,” I am now asking “how can this make me a better person.”

I have changed my routine. Each morning, instead of going to the gym (that’s what the farm is for), I now read from the history of philosophy. The result? Felt increases in empathy, happiness, and creativity. All important attributes of true leadership.

  1. EMPATHY – reading from the philosophical tradition (especially the ancients) reminds one that they are not alone. Perhaps the most important theme in philosophy involves grappling with the reconciliation of unity and multiplicity, or the proper relationship between self and others. In reading, I am reminded at the start of each day that I am not just with others, but for others as well.
  2. HAPPINESS – you can’t achieve something if you can’t define it. For all the disagreement that we see among ancient philosophers, they all agree on this point. In reading about happiness, I am forced to reflect on what happiness is and ask how I can change my life to be more intentional in how I pursue it.
  3. CREATIVITY – creativity involves the ability to make new connections between diverse sets of ideas. The more ideas one is exposed to, the greater the opportunity for creativity. That is just a fact. The more I read, the more ideas I am exposed to, and the more motivated I am to seek out more ideas in unlikely places.

Painting the Sky

This is why ‘Digital Transformation’ is so difficult to define

One of the biggest problems with ‘digital transformation’ is that everyone uses the term differently.

At times, ‘digital transformation’ is used to describe a set of social conditions. At other times, it refers to something we have to do. At still other times (and more commonly) it is something we must consume. Simon Chan has lamented that the term has ‘morphed into a bit of a beast. A “catch all” banner for the marketing of any IT related products and services.’ But this ambiguity is not something that evolved over time.

It was there from the start.

According to Chan, the term ‘digital transformation’ was first coined by the Capgemini Consulting group in the first edition of its Digital Transformation Review.  I read it. Truth be told, I wasn’t expecting anything of substance from a rag like this, but it turns out to be a truly remarkable collection.

Seriously.

As a collection of essays, it illustrates the novelty of the concept of ‘digital transformation.’ It demonstrates how, in 2011, people from various perspectives were just starting to grapple with it. It is also remarkable because each author, in their own way, describes digital transformation as a kind of force that is driving change, and to which people and businesses alike are forced to respond.

But there was equivocation around the term even from the very beginning. In “Transform to the Power of Digital: Digital Transformation as a Driver of Corporate Performance” Bonnet and Ferraris of Capgemini (clearly pitching their firm’s consulting practice) also think about digital transformation as an activity … as something that businesses must either undergo (passive) or do (active). For them, digital transformation is a journey that involves adapting a business to meet the challenges and opportunities of a rapidly transforming world: “The journey toward digital transformation entails harnessing its benefits – such as productivity improvement, cost reduction, and innovation – while navigating through the complexity and ambiguity brought about by the changes in the digital economy.” Importantly, Bonnet and Ferraris note that digital transformation should not be an end in itself, and that digital transformation is as much about people as it is about technology.

In addition to describing (1) a force to contend with and (2) a set of activities to be performed, this same collection of essays also frames ‘digital transformation’ as a set of capabilities that businesses can acquire through the consumption of specific technologies. According to Andrew McAfee, for example, transformative technologies fall into three categories: (1) tools to promote data-driven decision-making (e.g., analytics), (2) tools to increase self-organization (e.g., communication tools and social media), and (3) tools for orchestration (e.g., ERP systems).

What can we learn from all this?

‘Digital transformation’ doesn’t mean any one thing. It can be, and is, used to describe a lot of different things. I worry when a term permeates business jargon so quickly while also lacking clear definition. Such terms are primed for hype, and are easy fodder for ‘marketers.’

(yes, I realize that I’m a marketer…it’s complicated)

At its most benign, ‘digital transformation’ is a kind of throw-away word that either gets in the way of, or excuses people from, talking about real issues like the specific ways that particular technologies may help or hinder the achievement of well-defined business objectives.

Terms like ‘digital transformation’ can also be super handy when people are looking for ways to appear knowledgeable when they are actually looking to avoid a more meaningful conversation.

At its most damaging, however, ‘digital transformation’ is a term that can be used by marketers, consultants, and industry analysts to generate a sense of dread on the part of prospective customers in order to construct their products or themselves as heroes. I expand on this in another post.

I worry when ‘digital transformation’ is thought about as either an activity that businesses must perform, or as a thing that businesses must consume, because in both cases the result is the commodification of a solution to a problem that is poorly defined.

But I don’t want to throw out the baby with the bathwater.

The concept of ‘digital transformation’ DOES have value if it is used to refer to some of the ways that technology has shaped the social and economic world. It has value because it highlights systematic changes in the purchasing decisions of businesses as they increasingly mirror the kinds of expectations that we have as consumers.

The concept of ‘digital transformation’ has value because it at least gestures toward a set of emerging challenges that businesses must address.

‘Digital transformation’ is a call, and businesses must respond if they are going to survive. The response will differ from business to business, and it will involve a hybrid strategy that incorporates both digital and analogue solutions.

But there MUST be a response.

Nobody ACTUALLY needs ‘Digital Transformation’

I’m really interested in how ideas become things with the power to shape reality. My interest is not idle. It’s also not strictly academic (despite the fact that I have written a book on the subject). It comes from a desire to explode hype cycles by working with businesses to understand and address real issues instead of being distracted by secondary anxieties created by marketers and industry ‘experts.’ 

So let’s talk about ‘digital transformation.’

The language that is most commonly used to describe ‘digital transformation’ makes a crucial mistake. It treats ‘digital transformation’ as a thing. More than a thing, ‘digital transformation’ is talked about as a thing that businesses need and can consume. The result is a framing of business problems along the following lines:

  1. Businesses need ‘digital transformation’ to survive
  2. Your business does not have ‘digital transformation’ 
  3. Therefore, unless your business invests in ‘digital transformation’ now, it will not survive.

That’s super scary.

Framed in this way, the concept of ‘digital transformation’ allows technology vendors, consultants, and industry analysts to define a problem that businesses didn’t know they had, in order to sell them products and services they might not need.

But the problem is NEVER that a business lacks ‘digital transformation.’ Digital transformation is never an end in itself.  True, some kinds of technology and certain types of transformation may be required to solve particular business problems, but until those actual problems are defined, it is impossible for a business to know whether ‘digital transformation’ is necessary, or to even know what it means. 

So let’s stop talking about ‘digital transformation.’  Instead, let’s put in the hard work necessary to understand our business challenges, and to seek out the right solutions.  Let’s stop talking about ‘digital transformation,’ and instead talk about problem-solving using any and all resources we have available.  

If all you have is a hammer, everything looks like a nail.  If you think of all your problems like they require digital solutions, those are the only ‘solutions’ you will see.  Let’s not limit ourselves. Instead, let’s adopt a more holistic perspective that looks to solve well-understood problems using any and all available resources. This includes the digital, of course, but in a way that intentionally complements more analogue solutions like people and processes as well.

Should edtech vendors stop selling ‘predictive analytics’? A response to Tim McKay

Pedantic rants about the use and misuse of language are a lot of fun. We all have our soapboxes, and I strongly encourage everyone to hop on theirs from time to time. But when we enter into conversations around the use and misuse of jargon, we must always keep two things in mind: (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world.

Tim McKay recently wrote a blog post called Hey vendors! Stop calling what you’re selling colleges and universities “Predictive Analytics”. In this piece, McKay does two things. First, he tries to strongly distinguish the kind of ‘predictive analytics’ work done by vendors from the kind of ‘real’ prediction that is done within his own native discipline, which is astronomy. Second, on the basis of this distinction, he asserts that what analytics companies are calling ‘predictive analytics’ are actually not predictive at all. All of this is to imply what he later says explicitly in a tweet to Mike Sharkey: the language of prediction in higher ed analytics is less about helpfully describing the function of a particular tool, and more about marketing.

What I’d like to do here is unpack Tim’s claims and, in so doing, soften the strong antagonism that he erects between vendors and the rest of the academy, an antagonism that is not particularly productive as vendors, higher education institutions, government, and others seek to work together to promote student success, both in the US and abroad.

What is predictive analytics?

A Hermeneutic Approach

Let’s begin by defining analytics. Analytics is simply the visual display of quantitative information in support of human decision-making. That’s it. In practice, we see the broad category of analytics sub-divided in a wide variety of ways: by domain (e.g., website analytics), by content area (e.g., learning analytics, supply chain analytics), and by intent (e.g., the common distinction between descriptive, predictive, and prescriptive analytics).
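To make the intent-based distinction concrete, here is a toy sketch in Python. The numbers and the threshold are entirely hypothetical; the point is only that the same data can serve descriptive, predictive, or prescriptive ends depending on the question being asked.

```python
# Toy illustration (hypothetical data): one student's weekly LMS logins.
weekly_logins = [14, 12, 9, 7, 5]

# Descriptive intent: what happened?
average = sum(weekly_logins) / len(weekly_logins)
print(f"Average logins per week: {average:.1f}")

# Predictive intent: what is likely to happen next?
# (A naive linear extrapolation stands in for a real model.)
trend = weekly_logins[-1] - weekly_logins[-2]
expected_next = weekly_logins[-1] + trend
print(f"Expected logins next week: {expected_next}")

# Prescriptive intent: what should be done about it?
if expected_next < 5:  # threshold is arbitrary, for illustration only
    print("Suggested action: advisor outreach")
```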

Looking specifically at predictive analytics, it is important not to take the term out of context. In the world of analytics, the term ‘predictive’ always refers to intent. Since analytics is always in the service of human decision-making, it always involves factors that are subject to change on the basis of human activity. Hence, ‘predictive analytics’ involves the desire to anticipate and represent some likely future outcome that is subject to change on the basis of human intervention. When considering the term ‘predictive analytics,’ then, it is important not to consider ‘predictive’ in a vacuum, separate from related terms (descriptive and prescriptive) and the concept of analytics, of which predictive analytics is a type. Pulling a specialized term out of one domain and evaluating it on the terms of another is unfair, and is only possible under the presumption that language is static and ontologically bound to specific things.

So, when Tim McKay talks about scientific prediction and complains that predictive analytics do not live up to the rigorous standards of the former, he is absolutely right. But he is right because the language of prediction is deployed in two very different ways. In McKay’s view, scientific prediction involves applying one’s knowledge of governing rules to determine some future state of affairs with a high degree of confidence. In contrast, predictive analytics involves creating a mathematical model that anticipates a likely state of affairs based on observable quantitative patterns in a way that makes no claim to understanding how the world works. Scientific prediction, in McKay’s view, involves an effort to anticipate events that cannot be changed. Predictive analytics involves events that can be changed, and in many cases should be changed.

The distinction that McKay notes is indeed incredibly important. But, unlike McKay, I’m not particularly bothered by the existence of this kind of ambiguity in language. I’m also not particularly prone to lay blame for this kind of ambiguity at the feet of marketers, but I’ll address this later.

An Epistemological Approach

One approach to dealing with the disconnect between scientific prediction and predictive analytics is to admit that there is a degree of ambiguity in the term ‘prediction,’ to adopt a hermeneutic approach, and to be clear that the term is simply being deployed relative to a different set of assumptions. In other words, science and analytics are both right.

Another approach, however, might involve looking more carefully at the term ‘prediction’ itself and reconciling science and analytics by acknowledging that the difference is a matter of degree, and that they are both equally legitimate (and illegitimate) in their respective claims to the term.

McKay is actually really careful in the way that he describes scientific prediction. To paraphrase, scientific prediction involves (1) accurate information about a state of affairs (e.g., the solar system), and (2) an understanding of the rules that govern changes in that state of affairs (e.g., the laws of gravity). As McKay acknowledges, both our measurements and our understanding of the rules of the universe are imperfect and subject to error, but when it comes to something like predicting an eclipse, the information we have is good enough that he is willing to “bet you literally anything in my control that this will happen – my car, my house, my life savings, even my cat. Really. And I’m prepared to settle up on August 22nd.”

Scientific prediction is inductive. It involves the creation of models that adequately describe past states of affairs, an assumption that the future will behave in very much the same way as the past, and some claim about a future event. It’s a systematic way of learning from experience. McKay implies that explanatory scientific models are the same as the ‘rules that govern,’ but his concession that ‘Newton’s law of gravity is imperfect but quite adequate’ suggests that they are not in fact the same. Our models might adequately approximate the rules, but the rules themselves are eternally out of our reach (a philosophical point that has been borne out time and time again in the history of science).

Scientific prediction involves the creation of a good enough model that, in spite of errors in measurement and assuming that the patterns of the past will persist into the future, we are able to predict something like a solar eclipse with an incredibly high degree of probability. What if I hated eclipses? What if they really ground my gears? If I had enough time, money, and expertise, might it not be possible for me to…

…wait for it…

…build a GIANT LASER and DESTROY THE MOON?!

Based on my experience as an arm-chair science fiction movie buff, I think the answer is yes.

How is this fundamentally different from how predictive analytics works? Predictive analytics involves the creation of mathematical models based on past states of affairs, an admission that models are inherently incomplete and subject to error in measurement, an assumption that the future will behave in ways very similar to the past, and an acknowledgement that predicted future states of affairs might change with human (or extraterrestrial) intervention. Are the models used to power predictive analytics in higher education as accurate as those we have to predict a lunar eclipse? Certainly not. Is the data collected to produce predictive models of student success free from error? Hardly. But these are differences in degree rather than differences in the thing itself. By this logic, both predictive analytics and scientific prediction function in the exact same way. The only difference is that the social world is far more complex than the astronomical world.
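As a minimal sketch of this inductive process, consider the following Python example. Everything here is hypothetical: the data is synthetic, the feature names are invented, and it assumes NumPy and scikit-learn. Real student-success models are far richer, but the logic of learning patterns from the past and projecting them forward is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Past states of affairs: per-student features from completed terms
# (say, LMS logins per week, assignments submitted, midterm GPA).
X_past = rng.normal(size=(500, 3))
# Observed outcomes: 1 = did not persist, 0 = persisted. The outcome
# here is generated artificially, purely for illustration.
y_past = (X_past @ np.array([-0.8, -0.5, -1.2])
          + rng.normal(size=500) > 0).astype(int)

# The model captures patterns in past data; it makes no claim to
# knowing the "rules that govern" student behavior.
model = LogisticRegression().fit(X_past, y_past)

# Assume the future resembles the past: score current students.
X_current = rng.normal(size=(5, 3))
risk = model.predict_proba(X_current)[:, 1]

# The output is a probability, not a certainty, and, unlike an
# eclipse, it describes an outcome an advisor can still change.
for i, p in enumerate(risk):
    print(f"student {i}: {p:.0%} estimated risk of not persisting")
```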

So, if scientific predictions are predictive, then student risk predictions are predictive as well. The latter might not be as accurate as the former, but the process and assumptions are identical for both.

An admission

It is unfortunate that, even as he grumbles about how the term ‘predictive’ is used in higher education analytics, McKay doesn’t offer a better alternative.

I’ll admit at this point that, with McKay, I don’t love the term ‘predictive.’ I feel like it is either too strong (in that it assumes some kind of god-like vision into the future) or too weak (in that it is used so widely in common speech and across disciplines that it ceases to have a specific meaning). With Nate Silver, I much prefer the term ‘forecast,’ especially in higher education.

In The Signal and the Noise, Silver notes that the terms ‘prediction’ and ‘forecast’ are used differently in different fields of study, and often interchangeably. In seismology, however, the two terms have very specific meanings: “A prediction is a definitive and specific statement about when and where an earthquake will strike: a major earthquake will hit Kyoto, Japan on June 28…whereas a forecast is a probabilistic statement usually over a longer time scale: there is a 60 percent chance of an earthquake in Southern California over the next thirty years.”

There are two things to highlight in Silver’s discussion. First, the term ‘prediction’ is used differently and with varying degrees of rigor depending on the discipline. Second, if we really want to make a distinction, then what we call prediction in higher ed analytics should really be called forecasting. In principle, I like this a lot. When we produce a predictive model of student success, we are forecasting, because we are anticipating an outcome with a known degree of probability. When we take these forecasts and visualize them for the purpose of informing decisions, are we doing ‘forecasting analytics’? ‘forecastive analytics’? ‘forecast analytics’? I can’t actually think of a related term that I’d like to use on a regular basis. Acknowledging that no discipline owns the definition of ‘prediction,’ I’d far rather preserve the term ‘predictive analytics’ in higher education since it both rolls off the tongue, and already has significant momentum within the domain.
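To see why ‘forecast’ fits, compare two phrasings of the same hypothetical model output (continuing the sketch above; the probability and wording are invented for illustration):

```python
# A prediction collapses a probability into a definitive claim;
# a forecast (in Silver's seismological sense) reports the
# probability itself.
def as_prediction(p: float, threshold: float = 0.5) -> str:
    return ("This student will not persist." if p >= threshold
            else "This student will persist.")

def as_forecast(p: float) -> str:
    return f"There is a {p:.0%} chance that this student does not persist."

print(as_prediction(0.62))  # definitive, eclipse-style
print(as_forecast(0.62))    # probabilistic, earthquake-style
```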

Is ‘predictive analytics’ a marketing gimmick?

Those who have read my book will know that I like conceptual history. When we look at the history of the concept of prediction, we find that it has Latin roots and significantly predates the scientific revolution. Quoting Silver again:

The words predict and forecast are largely used interchangeably today, but in Shakespeare’s time, they meant different things.  A prediction was what a soothsayer told you […]

The term forecast came from English’s Germanic roots, unlike predict, which is from Latin. Forecasting reflected the new Protestant worldliness rather than the otherworldliness of the Holy Roman Empire. Making a forecast typically implied planning under conditions of uncertainty. It suggested having prudence, wisdom, and industriousness, more like the way we currently use the word foresight.

The term ‘prediction’ has a long and varied history. Its meaning is slippery. But what I like about Silver’s summary of the term’s origins is that it essentially takes it off the table for everyone except those who presume a kind of privileged access to the divine. In other words, using the language of prediction might actually be pretty arrogant, regardless of your field of study, since it presumes both complete information and an accurate understanding of the rules that govern the universe. Prediction is an activity reserved for gods, not men.

Digressions aside, the greatest issue that I have with McKay’s piece is that it uses the term ‘prediction’ as a site of antagonism between vendors and the academy. If we bracket all that has been said, and for a second accept McKay’s strong definition of ‘prediction,’ it is easy to demonstrate that vendors are not the only ones misusing the term ‘predictive analytics’ in higher education. Siemens and Baker deploy the term in their preface to the Cambridge Handbook of the Learning Sciences. Manuela Ekowo and Iris Palmer from New America comfortably make use of the term in their recent policy paper on The Promise and Peril of Predictive Analytics in Higher Education. EDUCAUSE actively encourages the adoption of the term ‘predictive analytics’ through a large number of publications, including the Sept/Oct 2016 edition of the EDUCAUSE Review, which was dedicated entirely to the topic. The term appears in the Journal of Learning Analytics, and is used in the first edition of the Handbook of Learning Analytics published by the Society for Learning Analytics Research (SoLAR). University administrators use the term. Government officials use the term. The examples are too numerous to cite (a search for “predictive analytics in higher education” on Google Scholar yields about 58,700 results). If we want to establish the true definition of ‘prediction’ and judge every use by this gold standard, then it is not simply educational technology vendors who should be charged with misuse. If there is a problem with how people are using the term, it is not a vendor problem: it is a problem of language, and of culture.

I began this essay by stating that we need to keep two things in mind when we enter into conversations about conceptual distinctions: (1) conceptual boundaries are fuzzy, particularly when common terms are used across different disciplines, and (2) our conceptual commitments have serious consequences for how we perceive the world. By now, I hope that I have demonstrated that the term ‘prediction’ is used in a wide variety of ways depending on context and intention. That’s not a bad thing. That’s just language. A serious consequence of McKay’s discussion of how ed tech vendors use the term ‘predictive analytics’ is that it tacitly pits vendors against the interests of higher education — and of students — more generally. Not only is such a sweeping implication unfair, but it is also unproductive. It is the shared task of colleges, universities, vendors, government, not-for-profits, and others to work together in support of the success of students in the 21st century. The language of student success is coalescing in such a way as to make possible a common vision and concerted action around a set of shared goals. The term ‘predictive analytics’ is just one of many evolving terms that make up our contemporary student success vocabulary, and is evidence of an important paradigm shift in how we view higher education in the US. Instead of quibbling about the ‘right’ use of language, we should recognize that language is shaped by values, and work together to ensure that the words we use reflect the kinds of outcomes we collectively wish to bring about.

The Trouble with ‘Student Success’

I’m increasingly troubled by ‘student success,’ and am even somewhat inclined to stop using the term entirely.

The trouble with ‘student success,’ it seems to me, is that it actually has very little to do with people. It’s not about humans, but rather about a set of conditions required for humans to successfully fill a particular role: that of a student.

So, what is a student?

A student (within the context of higher education, and as the term is employed within the student success literature) is someone who is admitted to an institution of higher education, is at least minimally retained by that institution (many colleges and universities require at least 60 non-transferred credit hours in order to grant a degree), and graduates with some kind of credential (at least an Associate’s degree, but preferably a Bachelor’s). The student is the product of higher education. It is the task of colleges and universities to convert non-students into students (through the admissions process), only to convert them into a better kind of non-student (through the graduation process). The whole thing is not entirely different from that religious process whereby an individual must first be converted from an a-sinner (someone who doesn’t grasp what sin is) into a sinner (someone who learns what sin is, and that they have committed it) in order to be transformed into a non-sinner through a process of redemption.

The language of ‘student success’ assumes that ‘being a student’ is an unmitigated good. But being a student is not a good in itself. The good of being a student is a direct consequence of the fact that being a student is requisite for attaining other higher goods. Having been a successful student is necessary in order to become a good worker. From the perspective of the individual, having been a successful student translates into being able to get a better job and earn a higher salary. From the perspective of a nation, a well-educated populace translates into an ability to meet labor demands in the service of economic growth. If this is the end of being a student, then, shouldn’t we talk about ‘Worker Success’? Replacing ‘student-‘ with ‘worker-‘ would retain every feature of ‘student success,’ but with the advantage of acknowledging that a post-secondary degree is not an end in itself, but is rather in the service of something greater. It would be more honest. It might also have the effect of increasing graduation rates by extending the horizon of students beyond the shoreline of their college experience and out toward the open sea of what will become something between a job and a vocation.

But I find the idea of ‘worker success’ still troubling, in the same way as ‘student success.’ As with ‘student success,’ ‘worker success’ speaks to a role that humans occupy. It refers to something that a person does, rather than what a person is. As with being a successful student, being a successful worker implies having satisfactorily met the demands of a particular role: a set of criteria that come from outside of you, and that it is incumbent upon you to achieve. A successful student is someone who is admitted, retained, and graduates, and so it is unsurprising that these are the measures against which colleges and universities are evaluated. A successful institution is one that creates successful students. Pressure is increasingly being put on institutions to ensure that students find success in their careers, but this is far more difficult to track (great minds are working on it). A successful worker is one who earns a high-paying job (high salary serving as a proxy for the amount of value that a particular individual contributes to the overall economy).

What if we were to shift the way that we think about student success, away from focusing on conditional and instrumental goods, and instead toward goods that are unconditional and intrinsic? What if we viewed student success, not as an end in itself, but rather as something that may or may not help human beings realize their full potential as human beings? Would it mean eliminating higher education as it is today? I don’t think so. I’m not a utopian. I readily understand the historical, social, cultural, and material conditions that make school and work important. On the contrary, shifting our perspective toward what makes us human may in fact serve to underline the importance of an undergraduate education, and even of that piece of paper they call a degree. To the extent that an undergraduate education exposes minds to a world of knowledge while also providing an opportunity to earn a good wage, it frees people from the conditions of bare life (i.e. living paycheck to paycheck) so that they can commit their energies to higher-order pursuits. Considered in this way, the importance of eliminating achievement gaps on the basis of race, ethnicity, gender, income, etc. is also increased. For groups that have been traditionally underserved by higher education, what is at stake in NOT having a post-secondary credential is not just a wage, but also perhaps their potential as human beings. At the same time as it makes higher education more important, considering the student journey from the perspective of human success also opens up legitimate alternative pathways to formal education through which it is also possible to flourish. Higher education might be a way, but it might not be the way. And that should be okay.

I don’t know what this shift in perspective would mean for evaluating institutions. As long as colleges and universities are aimed at producing student-graduates, their reason for being is to solve a tactical problem — “how do we admit and graduate more students” — and they can be evaluated empirically and quantitatively by the extent to which they have solved the problem. The minute that colleges and universities start to reconceive their mission, not in terms of students, but in terms of humans, their success becomes far more difficult to measure, because the success of students-as-humans is difficult to measure. By thinking of education as a way of serving humans as opposed to serving students, our task becomes far more important, and also far more challenging.

But since when were the Good and the Easy the same thing?

This Week in Learning Analytics: Privacy and Ethics


Another set of ethical issues raised this week involves the intersection of analytics and the humanities. Joshua Kim sparked a conversation about the place of analytics in the liberal arts. In the discussion following Kim’s post, the greatest attention was paid to issues of definition: What is ‘assessment’? What are the ‘Liberal Arts’? (Mike Sharkey, for example, suggests that the liberal arts simply imply “small classes and a high-touch environment,” and argues that analytics offers very little value in such contexts. Timothy Harfield argues that the liberal arts provide a critical perspective on analytics, and are crucial to ensuring that educational institutions are learning-driven rather than data-driven.) Lastly, in an article for EDUCAUSE Review Online, James E. Willis discusses the failure of ethical discussions in learning analytics, and offers an ethical framework that highlights some of the complexities involved in the debate. He categorizes ethical questions in terms of three distinct philosophical perspectives, what he calls “Moral Utopianism,” “Moral Ambiguity,” and “Moral Nihilism.” The framework itself is at once overly pedantic and lacking in the clarity and sophistication that one would expect from a piece with tacit claims to a foundation in the history of philosophy, but it nevertheless represents an interesting attempt to push the debate outside of the more comfortable legal questions that most often frame conversations about data and privacy.


Featured Articles

Privacy, Anonymity, and Big Data in the Social Sciences
Jon P. Daries, Justin Reich, Jim Waldo, Elise M. Young, Jonathan Whittinghill, Daniel Thomas Seaton, Andrew Dean Ho, Isaac Chuang

Open data has tremendous potential for science, but, in human subjects research, there is a tension between privacy and releasing high-quality open data. Federal law governing student privacy and the release of student records suggests that anonymizing student data protects student privacy. Guided by this standard, we de-identified and released a data set from 16 MOOCs (massive open online courses) from MITx and HarvardX on the edX platform. In this article, we show that these and other de-identification procedures necessitate changes to data sets that threaten replication and extension of baseline analyses. To balance student privacy and the benefits of open data, we suggest focusing on protecting privacy without anonymizing data by instead expanding policies that compel researchers to uphold the privacy of the subjects in open data sets. If we want to have high-quality social science research and also protect the privacy of human subjects, we must eventually have trust in researchers. Otherwise, we’ll always have the strict tradeoff between anonymity and science illustrated here.

Using Learning Analytics to Analyze Writing Skills of Students: A Case Study in a Technological Common Core Curriculum Course
Chi-Un Lei, Ka Lok Man, and T. O. Ting

Pedagogy with learning analytics is shown to facilitate the teaching-learning process through analyzing student’s behaviours. In this paper, we explored the possibility of using learning analytics tools Coh-Metrix and Lightside for analyzing and improving writing skills of students in a technological common core curriculum course. In this study, we i) investigated linguistic characteristics of student’s essays, and ii) applied a machine learning algorithm for giving instant sketch feedback to students. Results illustrated the necessity of improving student’s writing skills in their university learning through e-learning technologies, so that students can effectively circulate their ideas to the public in the future.

Calls for Papers

CALL FOR CHAPTERS: Developing Effective Educational Experiences through Learning Analytics
Edge Hill University Press (ABSTRACT SUBMISSION DEADLINE: 15 September 2014)

CALL FOR PAPERS: 5th International Learning Analytics and Knowledge (LAK) Conference
Marist College (Poughkeepsie, NY) | 16-20 March 2015 (SUBMISSION DEADLINE: 14 October 2014)

Recommended Resources

swirl: Learn R, in R
swirl teaches you R programming and data science interactively, at your own pace, and right in the R console!

Upcoming Events

6-9 October 2014
Learning Analytics Week
École polytechnique fédérale de Lausanne

15 October 2014
ALE Speaker Series: Charles Dziuban on Engaging Students in an Engaging Educational Environment

Emory University (Streaming Available)

20 October 2014
Data, Analytics and Learning: An introduction to the logic and methods of analysis of data to improve teaching and learning

University of Texas Arlington | EdX