Student Success and Liberal Democracy

The political environment in the United States has increasingly highlighted huge problems in our education system. These problems, I would argue, are not unrelated to how we as a country conceptualize student success. From the perspective of the student, success is about finding a high-paying job that provides a strong sense of personal fulfillment. From the perspective of colleges and universities, student success is about retention and graduation. From the perspective of government, it’s about making sure that we have a trained workforce capable of meeting labor market demands. For all the recent and growing attention paid to student success, however, what is woefully absent is any talk about the importance of education to producing a liberal democratic citizenry. In the age of ‘big data,’ part of this absence may be due to the fact that the success of a liberal education is difficult to measure. From this perspective, the success of a country’s education system cannot be measured directly. Instead, it is measured by the extent to which its citizens demonstrate things like active engagement, an interest in and ability to adjudicate truth claims, and a desire to promote social and societal goods.

Now, more than at any time in recent history, we are witnessing the failure of American education. In the US, the topic of education has been largely absent from the platforms of individual presidential candidates. This is, perhaps, a testament to the perception that education makes for bad politics. Where it has been discussed, we hear Trump talk about cutting funding to the Department of Education, if not eliminating it entirely. We hear Clinton talk about early childhood education, free/debt-free college, and more computer science training in K-12, but in each of these cases, the tenor tends to be about work and jobs rather than promoting societal goods more generally.

But I don’t want to make this post about politics. Our political climate is merely a reflection of the values that inform our conceptions of student success. These values — work, personal fulfillment, etc — inform policy decisions and university programs, but they also inform the development of educational technologies. The values that make up our nation’s conception of ‘student success’ produce the market demand that educational technology companies then try to meet. It is for this reason that we see a recent surge (some would say glut) of student retention products on the market, and relatively few that are meant to support liberal democratic values. It’s easy to forget that our technologies are not value-neutral. It’s easy to forget that, especially when it comes to communication technologies, the ‘medium is the message.’

What can educational technology companies do to meet market demands (something necessary to survival) while at the same time being attuned to the larger needs of society? I would suggest three things:

  1. Struggle. Keeping ethical considerations and the needs of society top of mind is hard. For educational technology companies to acknowledge the extent to which they both shape and are shaped by cultural movements is to take on a heavy burden of responsibility. The easy thing to do is to abdicate that responsibility, insisting that ‘we are just a technology company.’ But technologies always promote particular sets of values. Accepting the need to meet market demand alongside the need to support liberal democratic education is hard, because these values WILL and DO come into conflict. But that’s not a reason to abandon either one. It means constantly struggling in the knowledge that educational technologies have a real impact on people’s lives. Educational technology development is an inherently ethical enterprise. Ethics are hard.
  2. Augment human judgment. Educational technologies should not create opportunities for human beings to avoid taking responsibility for their decisions. With more data, more analytics, and more artificial intelligence, it is tempting to lean on technology to make decisions for us. But liberal democracy is not about eliminating human responsibility, and it is not about making critical thinking unnecessary. To the contrary, personal responsibility and critical thinking are hallmarks of a liberal democratic citizen — and are essential to what it means to be human. As tempting as it may be to create technologies that make decisions for us simply because they can, I believe it is vitally important that we design technologies that increase our ability to participate in those activities that are the most human.
  3. Focus on community and critical thinking.  Creating technologies that foster engagement with complex ideas is hard.  Very much in line with the ‘augmented’ approach to educational technology development, I look to people like Alyssa Wise and Bodong Chen, who are looking at ways that a combination of embedded analytics and thoughtful teaching practices can produce reflective moments for students, and foster critical thinking in the context of community.  And it is for this reason that I am excited about tools like X-Ray Learning Analytics, a product for Moodle that makes use of social network analysis and natural language processing in a way that empowers teachers to promote critical thinking and community engagement.

Number Games: Data Literacy When You Need It

My wife’s coach once told her that “experience is what you get the moment after you needed it.” Too often, the same can be said for data literacy. Colleges and universities looking to invest wisely in analytics to support the success of their students and to optimize operational efficiency face the daunting task of evaluating a growing number of options before selecting the products and approaches that are right for them. Which products and services are likely to yield the greatest return on investment? Which approaches have other institutions taken that have already seen high rates of success? On the one hand, institutions that are just now getting started with analytics have the great advantage of being able to look to the many who have gone before and who are beginning to see promising results. On the other hand, the analytics space is still immature, and there is little long-term, high-quality evidence to support the effectiveness of many products and interventions.

Institutions and vendors who have invested heavily in analytics have a vested interest in representing promising results (and they ARE promising!) in the best light possible.  This makes sense.  This is a good thing.  The marketing tactics that both institutions of higher education and educational technology vendors employ as they represent their results are typically honest and in good faith as they earnestly work in support of student success.  But the representation of information is always a rhetorical act.  Consequently, the ways in which results are presented too often obscure the actual impact of technologies and interventions.  The way that results are promoted can make it difficult for less mature institutions to adjudicate the quality of claims and make well-informed decisions about the products, services, and practices that will be best for them.

Perhaps the most common tactic that is used to make results appear more impressive than they are involves changing the scale used on the y-axis of bar and line charts.  A relatively small difference can famously be made to appear dramatic if the range is small enough.  But there are other common tactics that are not as easily spotted that are nonetheless just as important when it comes to evaluating the impact of interventions.  Here are three:

There is a difference between a percentage increase and an increase in percentage points. For example, an increase in retention from 50% to 55% may be represented either as an increase of 5 points or as an increase of 10%. It is also important to note that the same number of points translates into a different percentage increase depending on the starting rate. A 5-point increase from a retention rate of 25% represents an increase of 20%; a 5-point increase from a starting retention rate of 75%, on the other hand, is an increase of only about 7%. Marketing literature will tend to choose whichever metric sounds most impressive, even if it obscures the real impact.
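The arithmetic is easy to check for yourself. A minimal sketch in Python, using the hypothetical retention figures from the paragraph above:

```python
def point_increase(before, after):
    """Gain in percentage points (rates expressed as percentages)."""
    return after - before

def percent_increase(before, after):
    """Relative gain, expressed as a percent of the starting rate."""
    return (after - before) / before * 100

# The same 5-point gain reads very differently depending on the baseline:
assert point_increase(25, 30) == point_increase(75, 80) == 5
assert round(percent_increase(25, 30), 1) == 20.0  # "a 20% increase!"
assert round(percent_increase(75, 80), 1) == 6.7   # "a 6.7% increase"
```

When reading a vendor’s chart, it is worth computing both numbers and asking which one the marketing copy chose to report.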

A single data point does not equal a trend. Context and history are important. When a vendor or institution claims a significant increase in retention or graduation after only a year, it is possible that the increase was due to chance, a pre-existing trend, other initiatives, or shifts in student demographics. For example, one college recently reported a 10% increase in its retention rate after only one year of using a student retention product. Looking back at historical retention rates, however, one finds that the year prior to tool adoption marked a significant and uncharacteristic drop in retention, which means that the increase could just as easily have been due to chance or other factors. In the same case, close inspection shows that the retention rate following tool adoption was still low by historical standards, and part of an emerging downward trend rather than the reverse.
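A minimal sketch of the same reasoning, using hypothetical retention figures (not the college’s actual numbers): a gain reported relative to a dip year can still leave the rate below the longer-run baseline.

```python
from statistics import mean

# Hypothetical retention rates (%), with an uncharacteristic dip the
# year before the tool was adopted:
historical = [72.0, 71.0, 70.5, 69.5]  # four years pre-dip
dip_year = 63.0                        # the anomalous drop
post_tool = 69.3                       # first year with the product

headline_gain = (post_tool - dip_year) / dip_year * 100  # the marketed number
baseline = mean(historical)                              # the fuller context

print(round(headline_gain, 1), baseline)  # → 10.0 70.75
# A "10% increase" that still sits below the four-year baseline looks
# less like a product effect and more like regression to the mean.
```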

It’s not the tool. It’s the intervention. One will often hear vendors take credit for significant increases in retention or graduation rates when there are actually other, far more significant causal factors. One school, for example, is praised for using a particular analytics system to double its graduation rate. What tends not to be mentioned, however, is that the same school also radically reduced its student-to-advisor ratio, centralized its administration, and made additional significant programmatic changes that contributed to the school’s success over and above any impact the analytics system might have made by itself. The effective use of an analytics solution can definitely play a major role in facilitating efforts to increase retention and graduation rates. In fact, all things being equal, it is reasonable to expect a 1 to 3 point increase in student retention as a result of using early alerts powered by predictive analytics. Significant gains above this, however, are only possible as a result of significant cultural change, strategic policy decisions, and well-designed interventions. It can be especially tempting for a vendor to take credit, at least implicitly, for more than is due, but doing so is misleading and obscures the tireless efforts of the institutions and people who are working to support their students. More than this, overemphasizing products over institutional change can impede progress: it can lead institutions to believe, falsely, that a product will do all the work, and encourage them to embark naively on analytics projects and initiatives without fully understanding the changes in culture, policy, and practice needed to make them fully successful.

Vlogging my way through BbWorld16

EPISODE I: Going to Vegas

Headed to Las Vegas for DevCon and BbWorld 2016. Having attended twice before as a customer, I am very excited to have played a part in organizing this year’s event.

In this vlog episode, I check in with Scott Hurrey (Code Poet at Blackboard) and ask him about what excites him the most about DevCon. Dan Rinzel (Product Manager, Blackboard Analytics) and John Whitmer (Director of Analytics and Research at Blackboard) tackle some extreme food portions.

EPISODE II: Teamwork makes the Dream Work

A day of rehearsal for the BbWorld16 opening general session leads to an air of playful excitement in anticipation of the main event. ‘Dr John’ talks about why data science isn’t scary, and why everyone should be interested and involved.

EPISODE III: Making Magic Happen

Want to go behind the scenes and get a sense of all of the work that goes into the opening main stage keynote presentation each year? Michelle Williams takes us on a tour!

EPISODE IV: Yoga and Analytics

Meet the Predictive Analytics ‘booth babes,’ and learn from Michael Berman that yoga and analytics DO mix. Bridget Burns, Executive Director of the University Innovation Alliance, explains why there is a need for more empathy between institutions of higher education and educational technology companies, and in higher education in general.

EPISODE V: We Are Family

Rachel Seranno from Appalachian State University talks about power poses and memes. Eric Silva praises the power of Twitter. Casey Nugent and Shelley White from the University of Nebraska – Lincoln describe how they are working with Blackboard consultants to understand and optimize instruction.

Next Generation Learning Analytics: Or, How Learning Analytics is Passé

‘Learning Analytics,’ as so many know it, is already passé.

There is almost always a disconnect between research innovation and the popular imagination. By the time a new concept or approach achieves widespread acceptance, its popular identity and applications too often lag behind the state of the field.

The biggest problem with learning analytics in its most popular incarnations is that — particularly as it is applied at scale by colleges and universities and in vendor-driven solutions — it sits on top of existing learning management architectures which, in turn, rely on outdated assumptions about what higher education looks like. At most universities, investing in learning analytics means buying or building a ‘nudging’ engine: a predictive model, based on data from an institution’s student information system (SIS) and learning management system (LMS), that is used to drive alerts about at-risk students. Such investments are costly, and so institutions have a vested interest in maximizing their return on investment (ROI). Where models make use of LMS data, their accuracy is a function of the institution’s rate of LMS utilization: the more data, the better. If a university is serious about its investment in learning analytics, then, it also needs to be serious about its investment in a single learning management system to the exclusion of other alternatives.
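As a rough illustration of what such a nudging engine computes, here is a minimal sketch in Python. The feature names and weights are entirely hypothetical; a real model would be trained on an institution’s own SIS and LMS data rather than hand-set like this.

```python
import math

# Hypothetical feature weights, for illustration only:
WEIGHTS = {
    "logins_per_week":    -0.40,  # more LMS activity, lower risk
    "assignments_missed":  0.80,
    "gpa":                -0.90,
}
BIAS = 2.0

def risk_score(student):
    """Attrition risk as a logistic function of weighted features."""
    z = BIAS + sum(WEIGHTS[k] * student[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def needs_alert(student, threshold=0.5):
    """Drive an early alert when predicted risk crosses a threshold."""
    return risk_score(student) >= threshold

engaged = {"logins_per_week": 10, "assignments_missed": 0, "gpa": 3.5}
at_risk = {"logins_per_week": 1, "assignments_missed": 4, "gpa": 2.0}
```

The point about utilization follows directly: if students rarely log in, a feature like `logins_per_week` carries little signal, and the model’s accuracy suffers accordingly.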


George Tooker, “Landscape with Figures,” 1965-66.

But learning management systems are famously at odds with 21st-century pedagogies. Popular solutions like Blackboard, D2L (now Brightspace), and Canvas continue to operate according to the assumption that university teaching involves the packaging of information for distribution by a single expert. Even with the addition of more social elements like blogs, wikis, and discussion boards, the fact that all of these elements are contained within a single ‘course shell’ betrays the fact that LMS-based teaching is centralized and, even if not tightly controlled by the instructor, at least curated (what is a syllabus, after all, but a set of rules set out by an instructor that determine what does and does not belong in their course?). The 20th-century course is the intellectual property of the instructor (the question of course ownership has been raised with particular vigor recently, as schools push to deliver MOOCs). It is the instructor’s creation. It is the teacher’s course. It may be for students, but it is not theirs.

Learning analytics is very easy to do in the context of highly centralized teaching environments: where the institution offers instructors a limited and mandated range of educational technologies, and where students agree to limit their learning activity to the teacher’s course. But learning analytics is first and foremost about learning, and learning in the 21st century is not centralized.

In a new report for the EDUCAUSE Learning Initiative (ELI), Malcolm Brown, Joanne Dehoney, and Nancy Millichap observe that the changing needs of higher education demand a major shift in thinking away from the Learning Management System and towards a Digital Learning Environment.

What is clear is that the LMS has been highly successful in enabling the administration of learning but less so in enabling learning itself. Tools such as the grade book and mechanisms for distributing materials such as the syllabus are invaluable for the management of a course, but these resources contribute, at best, only indirectly to learning success. Initial LMS designs have been both course- and instructor-centric, which is consonant with the way higher education viewed teaching and learning through the 1990s.

In contrast to centralized learning management systems, the report’s authors recommend a Next Generation Digital Learning Environment (NGDLE) that acknowledges the distributed nature of learning and that empowers pedagogical innovation through a “Lego” approach allowing for decentralization, interoperability, and personalization. Analytics in this context would be challenging, since the NGDLE would not lend itself to a single learning analytics layer. Open data, however, would facilitate the creation of modules addressing the needs of instructors, students, and researchers in particular learning environments. In this, we see a shift in the aim of analytics away from management and administration, and toward learning as an aim in itself.

Where the lag between learning analytics research and popular imagination is being overcome is in efforts like GlassLab (Games, Learning and Assessment Lab), an experimental nonprofit that aims at the creation of analytics-enabled educational versions of commercial video games. A joint initiative of the Educational Testing Service (ETS) and Electronic Arts, GlassLab seeks to rethink standardized testing in an age of gamified learning.

The future of learning analytics, a future in which it is not passé, is one in which learning comes before management, and analytics are intensive rather than extensive. In this, the field of learning analytics can actually function as an important agent of change. Critics have expressed concern over the datafication of education, observing that the needs of big data require standardized systems that tend to conserve outdated (and counterproductive) pedagogical models. But the most innovative approaches to learning analytics take learning seriously. They are not interested in reducing learning to something that can be grasped, but rather in understanding it in all its complexity, and in all of its particular contexts. A common theme among the talks hosted by Emory University this past year as part of its QuanTM Learning Analytics Speaker Series (which featured Chuck Dziuban, Alyssa Wise, Ryan Baker, Tim McKay, and Dragan Gašević) was that learning is complicated, and learning analytics is hard to do. Gluttons for punishment, driven by a strong sense of vocation, and exceptionally humble, researchers and innovators in the learning analytics space are anything but reductive in their views of learning and are among the greatest advocates of distributed approaches to education.

I worry that learning analytics is indeed passé, a buzzword picked up by well-meaning administrators to give the impression of innovation while at the same time serving to support otherwise tired and irrelevant approaches to educational management. When I look at the work of learning analytics, however, what I see is something that is not only relevant and responsive to the needs of learners here and now, but also reflectively oriented toward shaping a rich and robust (possibly post-administrative) educational future.

Piloting Blackboard Analytics using the Learning Analytics Acceptance Model (LAAM)

In the summer of 2013, I designed a year-long pilot of Blackboard Analytics for Learn™ (A4L), a data warehousing solution that is meant to be accessed via one of three different GUI reporting environments: an instructor dashboard, a student dashboard, and a BI reporting tool. The overall approach to tool assessment was multimodal, but the value of the instructor and student dashboards was evaluated using a survey instrument that I developed based on the Learning Analytics Acceptance Model (LAAM) described and validated by Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013). What follows is a brief description of our pilot methodology and survey instrument.

The Learning Analytics Acceptance Model

The Learning Analytics Acceptance Model (LAAM) is based on the research of Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013), who provisionally found a positive correlation between a tool’s (1) usefulness, (2) ease of use, and (3) perceived relative value and the likelihood of adoption. The authors acknowledge that the specific ways in which these variables are operationalized will depend on the nature of the tool being assessed. Because Blackboard Analytics for Learn™ is a learning analytics tool with both an instructor dimension (i.e. the instructor dashboard) and a learner dimension (i.e. the student dashboard), it was important to adapt the basic model to include questions that addressed the particularities of each tool and perspective. Regardless of a tool’s primary user (instructor or student), however, the decision to adopt a tool or tool-set within a particular classroom environment is made by the instructor. Since it is the instructor’s perceptions of usefulness, ease of use, and relative value that ultimately inform the decision to adopt, even the usefulness, ease of use, and relative value of the student dashboard were assessed in terms of the instructor’s perceptions, in addition to being assessed directly with the students themselves. Our survey of student perceptions was conducted in such a way as to be directly comparable to instructor perceptions, in order to gather additional information about the usefulness of the product and to inform the larger decision of whether to invest in it at the enterprise level. Although student experience does not typically have a direct or immediate influence on an instructor’s likelihood of adopting a tool, such information is helpful in understanding the extent to which instructors are ‘in tune’ with their students’ experience of the class.

Survey Design


Instructor Course Reports


With respect to the instructor reports, the instructor’s perception of the usefulness of the tool was operationalized from the teaching perspective, in terms of six core instructional values: (1) engagement, (2) responsibility, (3) course design, (4) performance, (5) satisfaction, and (6) relevance. Although Ali et al (2013) found that only the perceived ability to identify learning contents in need of improvement (responsibility) was significantly correlated with the behavioral intention to adopt a tool (p < 0.01), we predicted that self-reporting about the overall perceived usefulness of the instructor dashboard would be more highly correlated with likelihood of adoption than any other instructor usefulness item. It was my hope, however, that including questions about the extent to which the tool addressed these six core values would provide insight into the specific values that contributed to an instructor’s perception of the overall usefulness of a class monitoring tool, and allow for possible segmentation in the future.

  1. Engagement – will the tool facilitate effective interaction between the instructor and students, both online and in class?
  2. Responsibility – will the tool assist the instructor in identifying aspects of the course with which their class is having difficulty, and in making timely interventions, both with individual students and with respect to the delivery of the course as a whole?
  3. Course Design – will insights from the tool help the instructor to identify potential improvements to the course content and learning environment, and motivate them to make positive improvements in subsequent iterations of the course?
  4. Performance – will the instructor’s use of the tool have a significant and positive effect on student grades in the class?
  5. Satisfaction – will the instructor’s use of the tool have a significant positive effect on satisfaction in the course, both for the instructor and their students?
  6. Relevance – Does the tool give instructors the right kinds of information? Is the use of the tool compatible with existing teaching practices and course objectives? Is the use of the tool compatible with general teaching practices within the instructor’s discipline?
  7. Overall Usefulness – What is the instructor’s overall impression of the tool?
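The segmentation idea amounts to comparing item-level correlations with adoption intent. Here is a sketch of that comparison, assuming Pearson correlation as the measure and using invented Likert-scale responses (our actual pilot sample was far too small for this kind of analysis):

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of item scores."""
    xbar, ybar = mean(xs), mean(ys)
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented 5-point Likert responses from six instructors:
overall_usefulness = [4, 5, 2, 3, 5, 1]
responsibility     = [4, 4, 3, 3, 5, 2]
intent_to_adopt    = [5, 5, 2, 3, 4, 1]

# Our prediction: overall usefulness tracks adoption intent more closely
# than any single item does.
r_overall = pearson(overall_usefulness, intent_to_adopt)
r_item = pearson(responsibility, intent_to_adopt)
assert r_overall > r_item
```

With a real sample, the same item-by-item comparison would show which of the six values drives an instructor’s overall impression of the tool.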

Student Course Report


With respect to the student dashboard, the instructor’s perception of the usefulness of the tool was operationalized from a learning perspective, in terms of seven values that instructors commonly hold regarding student behavior: (1) engagement, (2) responsibility, (3) content, (4) collaboration, (5) performance, (6) satisfaction, and (7) relevance. We predicted that self-reporting about the overall perceived usefulness of the student dashboard would be more highly correlated with likelihood of adoption than any other student usefulness item. We hoped that including questions about the extent to which a tool addresses these seven core values would provide insight into the specific values that contribute to an instructor’s perception of the overall usefulness of a student learning tool, and allow for possible segmentation in the future.

  1. Engagement – will use of the tool increase student interaction with their learning environment online and in class?
  2. Content – will the tool assist students in identifying topics and skills with which they are having difficulty?
  3. Responsibility – will the use of the tool increase the likelihood that students will actively seek out timely assistance or remediation for topics and skills with which they are having difficulty?
  4. Collaboration – will the tool encourage collaborative and peer-to-peer activity within the online learning environment?
  5. Performance – will the tool increase students’ chances of success in the course (i.e. passing, achieving a high grade, etc.)?
  6. Satisfaction – Does the use of the tool increase the student’s satisfaction with the course?
  7. Relevance – Is the information provided to the student relevant and helpful in facilitating the student’s success in the course?
  8. Overall Usefulness – What is the instructor’s overall impression of the student tool?

Ease of Use

The basic criteria by which ease of use was evaluated are the same regardless of tool or perspective: (1) navigation, (2) understanding, (3) information quantity, and (4) appearance. As in the operationalization of usefulness, self-reporting of overall ease of use was predicted to have a stronger correlation with likelihood of adoption than any of the other ease-of-use measures. Again, however, we hoped that including questions about the extent to which the tool addressed various aspects of ease of use would provide insight into the specific values that contribute to an instructor’s perception of overall ease of use, and allow for possible segmentation in the future.

  1. Navigation – can the instructor easily find the information they are looking for?
  2. Understanding – is the information presented in a way that is accessible, comprehensible, and actionable?
  3. Information Quantity – does the tool present so much information that it overwhelms the instructor? Or so little that the instructor is left with more questions than answers?
  4. Appearance – does the instructor find the interface appealing and generally enjoyable to work with?
  5. Overall Ease – What is the instructor’s overall impression of the tool’s ease of use?

Relative Value

Ali et al (2013) found that prior exposure to the graphical user interface of a similar learning analytics tool was among the measures most highly correlated with the behavioral intention to adopt a tool, although this correlation was not significant at either the 0.01 or 0.05 level. It was therefore important to include a question about the value of Blackboard Analytics reports relative to other similar tools with which the respondent was familiar.

Survey Results

We piloted A4L in eight classes. Of these eight classes, only three made serious use of the tool, and only two faculty members responded to the survey. For the most part, response rates among students were far too low to be informative, except for two course sections in which the instructor incentivized participation by offering an across-the-board grade bonus if 90% of the class completed the survey. In this latter case, the response rate was nearly 70%, but the students were in a predominantly online post-graduate professional program, and so it was impossible to generalize the results to the Emory University community as a whole. As a consequence of poor response rates, the feedback we received (both quantitative and qualitative) was treated anecdotally, but it nevertheless provided several rich insights that informed our subsequent decision to license the product.

In spite of challenges associated with the nature and motivation level of our convenience sample (behaviors that were helpful, in a way, as indicators of a low likelihood to adopt), I have a lot of confidence in our implementation of the Learning Analytics Acceptance Model (LAAM), and am eager to put it to use again with a larger sample of participants and with practices in place to increase response rates.