In Praise of Turmeric

The more I age, the more I think about aging. And with age I am changing my view of health. When I was younger, I thought about health in terms of how good I looked and how much I could lift. Now I am thinking more about health in terms of how well I can manage my energy, optimize productivity, and increase longevity. A change in mindset involves a lot of retraining. It’s really easy to fall back on old ‘health’ habits because they’re familiar even though they are actually counter-productive. Change begins with education. I’m interrogating my current habits and working to develop new ones. In this, I don’t want to blindly follow the advice of ‘experts.’ I want to ask and answer the question ‘why?’

So Turmeric.

I’ve been more intentional about consuming turmeric over the last few months. The health benefits are well-documented, but I don’t want to stop at hearing ‘turmeric is good for you,’ throw a sprinkle or two into my food from time to time, and call it a day. I want to know why turmeric is ‘good for you,’ and how I need to consume it to actually benefit.

Although the benefits of turmeric’s most active medicinal compound, curcumin, are many, I want to focus on two areas, because they are the ones I am personally most interested in optimizing for: anti-oxidation and anti-inflammation.

Turmeric as Anti-Oxidant

Oxidation is a normal and beneficial part of the body’s metabolic processes. It involves the splitting of oxygen molecules into single atoms with unpaired electrons, which then go about harvesting electrons from other places in the body: cells, proteins, and DNA. These oxygen atoms with unpaired electrons are called free radicals.

I feel like free radicals have gotten a bad rap. Free radicals are not bad. Floating around attacking foreign invaders, they’re actually a super important part of the immune system. What IS bad is oxidative stress, which is an imbalance of free radicals and anti-oxidants (molecules that keep free radicals in check by lending them their own spare electrons). Without anti-oxidants, free radicals go beyond their job as a part of the immune system and attack otherwise healthy cells. Too much oxidative stress over time can lead to all manner of dysfunction because … humans are made of cells.

The human body creates its own anti-oxidants (most notably glutathione) just like it creates free radicals. But the body isn’t a closed system. We are constantly introducing more free radicals through our diet (fried foods, alcohol, pesticides) and through environmental pollutants.

Smoking is of course a terrible idea.

In order to mitigate the harm of oxidative stress on muscles, joints, skin, DNA, glands, organs, and the brain, it’s important for us to do things to support the body’s natural production of anti-oxidants while supplementing through our diet.

That’s where turmeric comes in. The curcumin in turmeric works as both a free radical scavenger and an anti-oxidant that supports the body’s own natural ability to produce glutathione.

Turmeric as Anti-Inflammatory

Acute inflammation is a good thing. When we’re injured, for example, inflammation takes place as the body increases the presence of red and white blood cells along with additional hormones and nutrients to help with healing.

Chronic or low-level inflammation is not a good thing. It happens when a mild inflammatory response is triggered despite the absence of a significant threat. Although it may not produce symptoms, the effect of chronic inflammation is that the immune system begins attacking otherwise healthy cells. In the long term, chronic inflammation can lead to cancer, heart disease, and a range of auto-immune disorders.

The curcumin in turmeric works as a powerful anti-inflammatory, comparable in effectiveness to some anti-inflammatory drugs but without the side-effects. It does this by inhibiting the activation of NF-kB, a molecule that switches on inflammation-related genes.

How to benefit from turmeric

To benefit the most from the curcumin in turmeric, there are a few things you need to know:

  1. The amount of curcumin in ground or fresh turmeric is only about 3% by weight.
  2. Most studies that find significant benefits from curcumin use doses between 500mg and 1000mg per day.
  3. Curcumin on its own is poorly absorbed into the bloodstream, but absorption can be significantly enhanced by consuming it with black pepper at a ratio of 1:100 pepper to turmeric.

What this all means is that, assuming you want more out of your turmeric than the taste, you need to consume about 7 teaspoons (a bit over 2 tablespoons) of it each day to achieve a minimum effective dose.
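To make the arithmetic concrete, here is a minimal Python sketch of the dose calculation. The 3% curcumin content and the 500–1000mg study range come from the list above; the assumption that a teaspoon of ground turmeric weighs about 2.5 grams is mine, so treat the output as a ballpark.

```python
# Back-of-the-envelope turmeric dose calculator.
# ASSUMPTION (mine, not from the post): 1 tsp of ground turmeric weighs ~2.5 g.

CURCUMIN_FRACTION = 0.03   # ~3% curcumin by weight (point 1 above)
GRAMS_PER_TSP = 2.5        # assumed weight of one teaspoon of ground turmeric

def teaspoons_for_dose(curcumin_mg: float) -> float:
    """Teaspoons of ground turmeric needed to deliver a given curcumin dose."""
    turmeric_grams = (curcumin_mg / 1000) / CURCUMIN_FRACTION
    return turmeric_grams / GRAMS_PER_TSP

for dose_mg in (500, 1000):  # the study range from point 2 above
    tsp = teaspoons_for_dose(dose_mg)
    print(f"{dose_mg} mg curcumin ≈ {tsp:.1f} tsp ({tsp / 3:.1f} tbsp) of turmeric")
```

For the 500mg minimum dose this works out to roughly 7 teaspoons, which is where the figure above comes from.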

That’s a lot of turmeric. But it’s not so much that it’s impossible.

In addition to using turmeric more in my cooking, here are a couple of other things I am doing to add more curcumin to my diet.

My special blend

My wife recently bought me one of these half-gallon motivational water bottles. Each day I fill it with water and add the following:

    about 7 teaspoons of ground turmeric, the minimum effective dose discussed above
    2 tablespoons of apple cider vinegar, which when diluted gives the water a nice kombucha-like flavor while also helping to regulate blood sugar and curb hunger pangs throughout the day
    1 tsp of ground ginger, which is also great as an anti-inflammatory and metabolic aid
    1/2 tsp of black pepper, to assist with curcumin absorption

I like to keep my concoction in the fridge at home or at work, and visit it for a drink as a way to reset between meetings and other activities. In addition to meeting my daily curcumin requirement, it ensures that I am hydrated (vitally important for maintaining mental sharpness).

Turmeric Tea

A big jug of yellow sludge may be okay at home and at work, but it can be challenging to wield whilst traveling. For a while now I have made it a habit to pack a brick of Numi Organic Aged Pu-Erh, which is great for gut health and for the blood sugar regulation that sustains energy throughout the day. I pop a square in my infuser and refill it as I go.

Numi has recently launched a line of turmeric teas. Numi really does it right, using only high-quality, ethically sourced, organic ingredients. I am a huge fan of rooibos, and so I really enjoy the Amber Sun blend for winding down before bed.

Turmeric is not a magic bullet. Truth be told, it’s kind of weird, and it’s super weird that I’ve written an entire blog post about it. But it is also pretty neat. Many of us in the West (myself included) aren’t particularly good about our use of spices, either for flavor or for health. We would arguably be far better off if we were. We all need to start somewhere. I’m starting with turmeric.

This is why ‘Digital Transformation’ is so difficult to define

One of the biggest problems with ‘digital transformation’ is that everyone uses the term differently.

At times, ‘digital transformation’ is used to describe a set of social conditions. At other times, it refers to something we have to do. At still others (and more commonly), it is something we must consume. Simon Chan has lamented that the term has ‘morphed into a bit of a beast. A “catch all” banner for the marketing of any IT related products and services.’ But this ambiguity is not something that evolved over time.

It was there from the start.

According to Chan, the term ‘digital transformation’ was first coined by the Capgemini Consulting group in the first edition of its Digital Transformation Review. I read it. Truth be told, I wasn’t expecting anything of substance from a rag like this, but it turned out to be a truly remarkable collection.

Seriously.

As a collection of essays, it illustrates the novelty of the concept of ‘digital transformation.’ It demonstrates how, in 2011, people from various perspectives were just starting to grapple with it. It is also remarkable because each author, in their own way, describes digital transformation as a kind of force that is driving change, and to which people and businesses alike are forced to respond.

But there was equivocation around the term even from the very beginning. In “Transform to the Power of Digital: Digital Transformation as a Driver of Corporate Performance” Bonnet and Ferraris of Capgemini (clearly pitching their firm’s consulting practice) also think about digital transformation as an activity … as something that businesses must either undergo (passive) or do (active). For them, digital transformation is a journey that involves adapting a business to meet the challenges and opportunities of a rapidly transforming world: “The journey toward digital transformation entails harnessing its benefits – such as productivity improvement, cost reduction, and innovation – while navigating through the complexity and ambiguity brought about by the changes in the digital economy.” Importantly, Bonnet and Ferraris note that digital transformation should not be an end in itself, and that digital transformation is as much about people as it is about technology.

In addition to describing (1) a force to contend with and (2) a set of activities to be performed, this same collection of essays also frames ‘digital transformation’ as a set of capabilities that businesses can acquire through the consumption of specific technologies. According to Andrew McAfee, for example, transformative technologies fall into three categories: (1) tools to promote data-driven decision-making (i.e. analytics), (2) tools to increase self-organization (i.e. communication tools and social media), and (3) tools for orchestration (i.e. ERP systems).

What can we learn from all this?

‘Digital transformation’ doesn’t mean any one thing. It can be, and is, used to describe a lot of different things. I worry when a term permeates business jargon so quickly while lacking a clear definition. Such terms are primed for hype, and are easy fodder for ‘marketers.’

(yes, I realize that I’m a marketer…it’s complicated)

At its most benign, ‘digital transformation’ is a kind of throw-away term that either gets in the way of, or excuses people from, talking about real issues, like the specific ways that particular technologies may help or hinder the achievement of well-defined business objectives.

Terms like ‘digital transformation’ can also be super handy when people are looking for ways to appear knowledgeable when they are actually looking to avoid a more meaningful conversation.

At its most damaging, however, ‘digital transformation’ is a term that can be used by marketers, consultants, and industry analysts to generate a sense of dread on the part of prospective customers in order to construct their products or themselves as heroes. I expand on this in another post.

I worry when ‘digital transformation’ is thought about as either an activity that businesses must perform, or as a thing that businesses must consume, because in both cases the result is the commodification of a solution to a problem that is poorly defined.

But I don’t want to throw out the baby with the bathwater.

The concept of ‘digital transformation’ DOES have value if it is used to refer to some of the ways that technology has shaped the social and economic world. It has value because it highlights systematic changes in the purchasing decisions of businesses as they increasingly mirror the kinds of expectations that we have as consumers.

The concept of ‘digital transformation’ has value because it at least gestures toward a set of emerging challenges that businesses must address.

‘Digital transformation’ is a call, and businesses must respond if they are going to survive. The response will differ from business to business, and it will involve a hybrid strategy that incorporates both digital and analogue solutions.

But there MUST be a response.

Nobody ACTUALLY needs ‘Digital Transformation’

I’m really interested in how ideas become things with the power to shape reality. My interest is not idle. It’s also not strictly academic (despite the fact that I have written a book on the subject). It comes from a desire to explode hype cycles by working with businesses to understand and address real issues instead of being distracted by secondary anxieties created by marketers and industry ‘experts.’ 

So let’s talk about ‘digital transformation.’

The language that is most commonly used to describe ‘digital transformation’ makes a crucial mistake. It treats ‘digital transformation’ as a thing. More than a thing, ‘digital transformation’ is talked about as a thing that businesses need and can consume. The result is a framing of business problems along the following lines:

  1. Businesses need ‘digital transformation’ to survive.
  2. Your business does not have ‘digital transformation.’
  3. Therefore, unless your business invests in ‘digital transformation’ now, it will not survive.

That’s super scary.

When the problem is framed in this way, technology vendors, consultants, and industry analysts can use the concept of ‘digital transformation’ to define a problem that businesses didn’t know they had, in order to sell them products and services they might not need.

But the problem is NEVER that a business lacks ‘digital transformation.’ Digital transformation is never an end in itself.  True, some kinds of technology and certain types of transformation may be required to solve particular business problems, but until those actual problems are defined, it is impossible for a business to know whether ‘digital transformation’ is necessary, or to even know what it means. 

So let’s stop talking about ‘digital transformation.’  Instead, let’s put in the hard work necessary to understand our business challenges, and to seek out the right solutions.  Let’s stop talking about ‘digital transformation,’ and instead talk about problem-solving using any and all resources we have available.  

If all you have is a hammer, everything looks like a nail.  If you think of all your problems like they require digital solutions, those are the only ‘solutions’ you will see.  Let’s not limit ourselves. Instead, let’s adopt a more holistic perspective that looks to solve well-understood problems using any and all available resources. This includes the digital, of course, but in a way that intentionally complements more analogue solutions like people and processes as well.

AI, Higher Education, and Standardizing Values in Human Decision-Making

Our current use of AI in higher education involves automating parts (and at times the whole) of the human decision-making process. Where there is automation, there is standardization. Where there are decisions, there are values. As a consequence, we can think of one of the functions of AI as the standardization of values. Depending on what your values are, and the extent to which deployed algorithms reflect them, this may be a good thing or a bad thing.

Augmenting Human Decision-Making

An example of how AI is being used to automate parts of the decision-making process is through nudging. According to Thaler and Sunstein, the concept of nudging is rooted in an ethical perspective that they term ‘libertarian paternalism.’ Wanting to encourage people to behave in ways that are likely to benefit them, but not also wanting to undermine human freedom of choice (which Thaler, Sunstein, and many others view as an unequivocal good), nudging aims to structure environments so as to increase the chances that human beings will freely make the ‘right decisions.’ In higher education, a nudge could be something as simple as an automated alert reminding a student to register for the next semester or begin the next assignment. It could be an approach to instructional design meant to increase a student’s level of engagement in an online course. It could be student-facing analytics meant to promote increased reflection about one’s level of interaction in a discussion board. Nudges don’t have to involve AI (a grading rubric is a great example of a formative assessment practice designed to increase the salience of certain values at the expense of others), but what AI allows us to do is to scale and standardize nudges in a way that was, until recently, unimaginable.

Putting aside the obvious ‘having one’s cake and eating it too’ tension at the heart of the idea of libertarian paternalism, the fact of the matter is that a nudge functions by making decisions easier through the (at least partial) automation of the decision-making process. It makes choices easier by making some factors more salient than others, reducing an otherwise large and complex set of factors to a set that is much more manageable. In other words, a nudge works by universalizing a set of values, using them as criteria for pre-selecting which factors enter the decision-making process.
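To make this concrete, here is a hypothetical Python sketch of a nudge rule. The signals, thresholds, and messages are all invented for illustration; the point is that whichever factors the rules inspect, and whatever cutoffs they use, are themselves the values being standardized.

```python
# Hypothetical nudge rules: the chosen signals and thresholds ARE the values
# being standardized -- a different value set would inspect different factors.

from dataclasses import dataclass

@dataclass
class StudentSignals:
    days_since_login: int
    discussion_posts_this_week: int
    registered_next_term: bool

def nudges_for(s: StudentSignals) -> list[str]:
    """Return reminder messages; each rule privileges one factor over all others."""
    messages = []
    if not s.registered_next_term:
        messages.append("Registration for next semester is open -- don't miss out!")
    if s.days_since_login > 7:             # value judgment: logins == engagement
        messages.append("It's been a week since you logged in. Jump back in?")
    if s.discussion_posts_this_week == 0:  # value judgment: posting == participation
        messages.append("Your classmates are discussing this week's topic. Join them?")
    return messages

print(nudges_for(StudentSignals(days_since_login=9,
                                discussion_posts_this_week=0,
                                registered_next_term=True)))
```

Swap in a different value set (say, wellness over engagement metrics) and the same machinery nudges toward entirely different behavior.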

I don’t want to say whether this is a good or a bad thing. It is happening, and it certainly brings with it the possibility of promoting a range of social goods. But it is important for us to recognize that values are involved. We need to be aware of, and responsible for, the values that we choose to standardize in a given nudge. And we need to constantly revisit those values to ensure that they remain consistent with our views, especially in light of the impact on human behavior that they are designed to have.

Automating Human Decision-Making

An example of where AI is being used to automate the entire decision process is the chat bot. Chat bots make a lot of sense for institutions looking to increase efficiency. During the admissions process, for example, university call centers are bombarded with phone calls from students seeking answers to common questions. Call centers are expensive, and so universities are looking for ways to reduce cost. But lower cost has traditionally meant decreased capacity, and if capacity isn’t sufficient to handle demand from students, institutions run the risk of losing the very students they are looking to admit. AI is helping institutions scale their ability to respond to common student questions by, in essence, personalizing a student’s experience with a knowledge base. A chat bot is an interface. In contrast to automated nudges, which augment human decision-making, chat bots automate the entire process: they (1) define a situation and (2) formulate a response, (3) without the need for human intervention.
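As a minimal sketch (not any vendor’s implementation), consider what ‘personalizing a student’s experience with a knowledge base’ might look like in code. The questions, answers, and naive keyword matching here are all hypothetical:

```python
import re

# Hypothetical FAQ bot: it (1) defines the situation by matching keywords
# against a knowledge base, then (2) formulates a response, (3) with no
# human in the loop.

KNOWLEDGE_BASE = {
    ("deadline", "apply", "application"): "Applications are due March 1.",
    ("transcript", "send"): "Send transcripts to admissions@university.edu.",
    ("fee", "waiver", "cost"): "Fee waivers are available on the admissions portal.",
}

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Pick the knowledge-base entry with the largest keyword overlap.
    keywords, response = max(KNOWLEDGE_BASE.items(),
                             key=lambda item: len(words & set(item[0])))
    if words & set(keywords):
        return response
    # Everything the bot can't classify gets the same canned deflection.
    return "I'm not sure - would you like to speak with an admissions counselor?"

print(answer("When is the application deadline?"))    # matched
print(answer("I'm not sure I belong in college..."))  # falls through
```

Notice what the second question exposes: anything that isn’t an information request simply falls outside the bot’s model of why a student might reach out.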

What kinds of assumptions do chat bots like this make about the humans they serve? First, they assume that the only reason a student is reaching out to the university is for information. While this may be the case for some, or even most, it is not the case for all. In addition to information, a student may also be in need of reassurance (whether they realize it or not). First-generation students especially may not know what questions to ask in the first place, and may need to be encouraged to think about factors they would not otherwise have considered. There is a huge amount to gain from one-on-one contact with a human being, and these benefits are lost when an interaction is reduced to a single function. Subtlety and lateral thinking are not virtues of AI (at least not today).

This is not to say that chat bots are bad. The increased efficiency they bring to an institution means that an institution can invest in other ways that enhance the student experience. The increased satisfaction from students who no longer have to sit on hold for hours is also highly beneficial, not to mention that some students simply feel more comfortable asking what they think are ‘dumb questions’ when they know they are talking to a robot. But we also need to be aware of the specific values we assume through the use of these technologies, and the opportunities that we are giving up, including a diversity of perspective, inter-personal support, narrative/biographical interjection, personalized nudging based on the experience and intuition of an advisor, and the ability to co-create meaning.

Is AI in higher education a good thing? It carries with it an array of goods, but the good it brings is not unequivocal. Because it automates at least parts of the decision-making process, it involves the standardization of values in a way, and at a scale, that until now has not been possible.

AI is here to stay. It is a bell that we can’t unring. Knowing that AI functions through the automation of at least some parts of human decision-making, then, it is incumbent upon us to think carefully about our values, and to take responsibility for the ways (both expected and unanticipated) that the standardization of values through information technology will affect how we think about ourselves, others, and the world we cohabit.

It’s time to get over the accuracy of predictive models in higher education

How should we approach the evaluation of predictive models in higher education?

It is easy to fall into the trap of thinking that the goal of a predictive algorithm is to be as accurate as possible. But, as I have explained previously, the desire to increase the accuracy of a model for its own sake fundamentally misunderstands the purpose of predictive analytics. The goal of predictive analytics in identifying at-risk students is not to ‘get it right,’ but rather to inform action. Accuracy is definitely important, but it is not the most important thing, and getting hung up on academic conversations about a model can actually obscure its purpose and impede the progress we are able to make in support of student success.

Let’s take a hypothetical example. Consider a model with an accuracy of 70% in predicting a student’s chances of completing a course with a grade of C or higher. For a cohort of 100 students, a confusion matrix representing this might look something like this:

                        Predicted to fail    Predicted to pass
    Actually failed            30                   15
    Actually passed            15                   40

The model is right about 70 of the 100 students (30 + 40), and it flags 45 of them (30 + 15) as being at risk.

Too much emphasis on model accuracy can lead to a kind of paralysis, or a hesitation to reach out to students for fear that the model has misclassified them. Institutions might worry about students falling through the cracks because the model predicted they would pass when they actually failed. But what is worse? Acting wrongly? Or not acting at all?

Let’s consider this from the perspective of the academic advisor. In the absence of a predictive model, advisors may approach their task from one of the following two perspectives.

  • No proactive outreach – this is the traditional walk-in model of academic advising. We know that the students who are most likely to seek out an academic advisor are also among the most likely to succeed anyway. What this means is that an academic advisor will probably only see some portion of the 40 students in the above scenario who were never at risk, and make very little impact, since those students would probably do just fine without them.
  • Proactively reach out to everyone – we know that proactive advising works, so why not try to reach everyone? This would obviously be amazing! But institutions simply do not have the capacity to do this very well. With average advising loads of 450 students or more, it is impossible for advisors to reach out to all of their students in time to ensure that they get on track and remain on track each semester. If an advisor only had the capacity to see 50 students before week six of the semester, selected at random, fewer than half of the students seen would actually have been in need of academic support.

Compare the results of each of these scenarios with the results of an advisor who only reaches out to the students that the algorithm has identified as being at risk of failure. In this case, an advisor would only need to see 45 students, which means that they have more time available to meet with each of them. True, only 30 of these students would truly be at risk of failing, but this is significantly more than the number of at-risk students they would otherwise be able to meet with, and there is, of course, no harm in meeting with students who are not actually at risk. Complemented by additional information about student performance and engagement, a trained academic advisor could further triage students flagged as being at risk, and communicate with instructors to increase the accuracy and impact of outreach attempts.

What about the students who fall through the cracks, the students the model predicts will be successful but who actually fail the course? This is obviously something we’d like to avoid, but 15% is far lower than the roughly 45% who fall through under the traditional walk-in model, or the roughly 25% who fall through using the scattershot approach. Of course, this is an artificial example, describing an advisor who makes outreach decisions solely on the basis of the recommendations produced by the predictive model. In actual fact, through a system like Blackboard Predict, advisors and faculty have access to additional information and communications tools that help them fine-tune their judgments and increase the accuracy and impact of outreach efforts even further.
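For anyone who wants to check the arithmetic, here is a short Python sketch of the three scenarios. It uses the confusion-matrix figures from the hypothetical example above; the 50-student capacity comes from the scattershot scenario, and, like the example itself, it is an artificial simplification.

```python
# Checking the arithmetic behind the three outreach scenarios, using the
# confusion matrix above (an artificial cohort of 100 students).

TP, FP = 30, 15        # flagged at risk: truly at risk / misclassified
FN, TN = 15, 40        # not flagged:     truly at risk / correctly cleared

at_risk = TP + FN                   # 45 students headed for failure
accuracy = (TP + TN) / 100          # 0.70, as stated

# 1. Walk-in model: at-risk students rarely come in, so nearly all fall through.
walk_in_missed = at_risk            # ~45 of 100

# 2. Scattershot: 50 students seen at random; reach is proportional.
seen_at_random = 50
randomly_reached = at_risk * seen_at_random / 100   # expected value: 22.5
scattershot_missed = at_risk - randomly_reached     # ~22.5, "roughly 25%"

# 3. Model-guided: see the 45 flagged students; only the FN group is missed.
model_guided_seen = TP + FP         # 45 appointments
model_guided_missed = FN            # 15 of 100 fall through

print(f"accuracy={accuracy:.0%}, walk-in missed={walk_in_missed}, "
      f"scattershot missed~{scattershot_missed:.0f}, "
      f"model-guided missed={model_guided_missed}")
```

The expected values differ slightly from the rounded figures in the prose, but the ordering is the point: model-guided outreach misses the fewest at-risk students while requiring the fewest appointments.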

What I hope this example underscores is that predictive analytics should be viewed as simply a tool. Prediction is not prophecy. It is an opportunity to have a conversation. Accuracy is important, but not to the point that modeling efforts get in the way of the actual interventions that drive student success. It is understandable that institutions might worry that a perceived lack of model accuracy will erode the confidence of faculty and advisors and prevent them from taking action. It is therefore incredibly important that misunderstandings about the nature of prediction, predictive modeling, and action be addressed from the outset, so that time and resources can be committed where they will make the greatest impact: in the development and implementation of high-impact advising practices that use predictive analytics as a valuable source of information alongside others, including the kind of wisdom that comes through experience.


This is the third in a series of blog posts on misunderstandings about predictive analytics that get in the way of high impact campus adoption. For the first two posts in this series, check out What are predictive analytics? And why does the idea create so much confusion? and Predictive analytics are not social science: A common misunderstanding with major consequences for higher education

Predictive analytics are not social science: A common misunderstanding with major consequences for higher education

This is the second in my series on common misunderstandings about predictive analytics that hinder their adoption in higher education. Last week I talked about the language of predictive analytics. This week, I want to comment on another common misconception: that predictive analytics (and educational data mining more generally) is a social science.

What are predictive analytics? And why does the idea create so much confusion?

The greatest barrier to the widespread impact of predictive analytics in higher education is adoption. No matter how great the technology is, if people don’t use it effectively, any potential value is lost.

In the early stages of predictive analytics implementations at colleges and universities, a common obstacle comes in the form of questions that arise from some essential misunderstandings about data science and predictive analytics.  Without a clear understanding of what predictive analytics are, how they work, and what they do, it is easy to establish false expectations.  When predictive analytics fail to live up to these expectations, the result is disappointment, frustration, poor adoption, and a failure to fully actualize their potential value for student success.

This post is the first in a series addressing common misunderstandings about data science that can have serious consequences for the success of an educational data or learning analytics initiative in higher education.  The most basic misunderstanding that people have is about the language of prediction. What do we mean by ‘predictive’ analytics, anyway?

Why is the concept of ‘Predictive Analytics’ so confusing?

The term ‘predictive analytics’ is used widely, not just in education, but across all knowledge domains. We use the term because everyone else uses it, but it is actually pretty misleading.

I have written about this at length elsewhere, but in a nutshell, the term ‘prediction’ has a long history of being associated with a kind of mystical access to true knowledge about future events in a deterministic universe.  The history of the term is important, because it explains why many people get hung up on issues of accuracy, as if the gold standard for predictive analytics were the crystal ball.  It also explains why others are immediately creeped out by conversations about predictive analytics in higher education: the term ‘prediction’ carries with it a set of pretty heavy metaphysical and epistemological connotations.  It is not uncommon in discussions of ethics and AI in higher education to hear comparisons between predictive analytics and the world of the film Minority Report (which is awesome), in which government agents are able to intervene and arrest people for crimes before they are committed.  In these conversations, however, it is rarely remembered that the predictions in Minority Report were quasi-magical in origin, whereas predictive analytics involve computational power applied to incomplete information.

Predictive analytics are not magic, even if the language of prediction sets us up to think of them in this way.  In The Signal and the Noise, Nate Silver suggests that we can begin to overcome this confusion by using the language of forecasting instead.  Where the goal of a prediction is to be correct, the goal of a forecast is to be prepared.  I watch the Weather Channel, not because I want to know what the weather is going to be like, but because I want to know whether I need to pack an umbrella.

In higher education, it is unlikely that we will stop talking about predictive analytics any time soon.  But it is important to shift our thinking and set our expectations along the lines of forecasting.  When it comes to the early identification of at-risk students, our aim is not to be 100% accurate, and we are not making deterministic claims about a particular student’s future behavior.  What we are doing is providing a forecast based on incomplete information about groups of students in the past so that instructors and professional advisors can take action. The goal of predictive analytics in higher education is to offer  students an umbrella when the sky turns grey and there is a strong chance of rain.