Overcoming early analytics challenges at UMBC

In 2011, Long and Siemens famously announced that big data and analytics represented “the most dramatic factor shaping the future of higher education.” Now, five years later, conversations about the use of data and analytics in higher education are more mixed. In 2016, the Campus Computing Project released an annual report that used the language of “analytics angst.” In a recent blog series for the EDUCAUSE Review, Mike Sharkey and I argue that analytics has fallen into a “trough of disillusionment.” What makes some institutions successful in their use of analytics where others flounder? How can we work to scale, not technologies, but high-impact practices? Let’s examine one example.

The University of Maryland Baltimore County (UMBC) began working with Blackboard Analytics in 2006. At that time, the goal was simply to provide access to basic information so that the institution could operate effectively and efficiently. Once they gained access to their institutional data, however, they quickly began asking deeper questions about student success.

READ FULL STORY HERE >> http://blog.blackboard.com/overcoming-early-analytics-challenges-at-umbc/

Eliminating barriers to innovation at scale: Fostering community through a common language

The Latin word communitas refers to a collection of individuals who, motivated by a common goal, come together and act as one. Community is powerful.

Common approaches to college and university rankings can sometimes have the unfortunate effect of pitting institutions against each other in a battle for students and prestige. As the U.S. turns its attention to meeting the needs of 21st century students and 21st century labor demands, the power of traditional university ranking schemes is starting to erode.

Student success is not a zero-sum game. Rather than fostering competition, a commitment to student success encourages cooperation.

READ FULL STORY HERE >> http://blog.blackboard.com/fostering-community-through-a-common-language/

Using Analytics to Meet the Needs of Students in the 21st Century

The following is excerpted from a keynote address that I delivered on November 8, 2016, at Texas A&M at Texarkana for its National Distance Education Week Mini-Conference.


Right now in the US, nearly a quarter of all undergraduate students — 4.5 million — are both first-generation and low-income.

Of these students, only 11% earn a bachelor’s degree within six years, compared to a national graduation rate of 55% for the rest of the population. In other words, 89% of first-generation, low-income students stop out, perpetuating a widespread pattern of socio-economic inequality.

Since 1970, bachelor’s degree attainment among those in the top income quartile in the US has steadily increased, from 40.2% to 82.4% in 2009. By contrast, those in the bottom two income quartiles have seen only slight improvement: less than an 8-point increase for the two quartiles combined. In the US, a bachelor’s degree means a difference in lifetime earnings of more than 66% compared to those with only a high school diploma. Read more

Does Student Success start with Diversity in Higher Ed Administration?

Twitter has finally begun to add tools to mitigate harassment.

Harassment on Twitter has been a huge problem in recent years, and the amount of poor citizenship on the platform has only increased post-election. Why has it taken so long to respond? On the one hand, it is a very hard technical problem: how can users benefit from radical openness while also being protected from personal harm? In certain respects this is a problem with free speech in general, but it is even more acute for Twitter as it looks to grow its user base and prepare for sale. On the other hand, Twitter insiders have said that dealing with harassment has simply not been a priority for the mostly white, male leadership team. Twitter’s lack of diversity is well documented. A lack of diversity within an organization leads to a lack of empathy for the concerns of ‘others.’ It leads to gaps in an organization’s field of vision, since we as people naturally pursue goals that are important to us, and what is important to us is naturally a product of our own experience. Values create culture. And culture determines what is included and excluded (both people and perspectives). Read more

Piloting Blackboard Analytics using the Learning Analytics Acceptance Model (LAAM)

In the summer of 2013, I designed a year-long pilot of Blackboard Analytics for Learn™ (A4L), a data warehousing solution that is meant to be accessed via one of three different GUI reporting environments: an instructor dashboard, a student dashboard, and a BI reporting tool. The overall approach to tool assessment was multimodal, but the value of the instructor and student dashboards was evaluated using a survey instrument that I developed based on the Learning Analytics Acceptance Model (LAAM) described and validated by Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013). What follows is a brief description of our pilot methodology and survey instrument.

The Learning Analytics Acceptance Model

The Learning Analytics Acceptance Model (LAAM) is based on the research of Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013), who provisionally found a positive correlation between the likelihood of adopting a tool and its (1) usefulness, (2) ease of use, and (3) perceived relative value. The authors acknowledge that how these variables are operationalized will depend on the nature of the tool being assessed. Because Blackboard Analytics for Learn™ is a learning analytics tool with both an instructor dimension (the instructor dashboard) and a learner dimension (the student dashboard), it was important to adapt the basic model to include questions that addressed the particularities of each tool and perspective. Regardless of a tool’s primary user (instructor or student), however, the decision to adopt a tool or tool-set within a particular classroom environment is made by the instructor. Since it is the instructor’s perceptions of usefulness, ease of use, and relative value that ultimately inform that decision, even the usefulness, ease of use, and relative value of the student dashboard were assessed in terms of the instructor’s perceptions, in addition to being assessed directly by the students themselves. Our survey of student perceptions was designed to be directly comparable to the instructor survey, both to gather additional information about the usefulness of the product and to inform the larger decision of whether to invest in it at the enterprise level. Although student experience does not typically have a direct or immediate influence on an instructor’s likelihood of adopting a tool, such information is helpful in understanding the extent to which instructors are ‘in tune’ with their students’ experience of the class.
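
To make the hypothesized relationship concrete, here is a minimal sketch in Python. The data, variable names, and the use of a simple least-squares fit are all my own illustrative assumptions; this is not the pilot analysis or the method Ali et al. used.

```python
# Hypothetical sketch of the LAAM relationship: behavioral intention to adopt
# modeled on the three construct scores. All data and names are invented for
# illustration; this is not the pilot analysis or Ali et al.'s method.
import numpy as np

# Construct scores (means of 1-5 Likert items) for a few hypothetical respondents.
usefulness      = np.array([4.2, 3.1, 4.8, 2.5, 3.9])
ease_of_use     = np.array([4.0, 2.8, 4.5, 3.0, 3.5])
relative_value  = np.array([3.5, 2.5, 4.9, 2.0, 3.8])
intent_to_adopt = np.array([4.0, 3.0, 5.0, 2.0, 4.0])

# Ordinary least squares: intent ~ usefulness + ease_of_use + relative_value.
X = np.column_stack([usefulness, ease_of_use, relative_value,
                     np.ones_like(usefulness)])  # final column is the intercept
coefs, *_ = np.linalg.lstsq(X, intent_to_adopt, rcond=None)

for name, b in zip(["usefulness", "ease_of_use", "relative_value", "intercept"], coefs):
    print(f"{name:15s} {b:+.2f}")
```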

Survey Design

Usefulness

Instructor Course Reports

[Figure: LAAM instructor perceptions survey items]

With respect to the instructor reports, the instructor’s perception of the usefulness of the tool was operationalized from the teaching perspective, in terms of six core instructional values: (1) engagement, (2) responsibility, (3) course design, (4) performance, (5) satisfaction, and (6) relevance. Although Ali et al. (2013) found that only the perceived ability to identify learning content in need of improvement (responsibility) was significantly correlated with the behavioral intention to adopt a tool (p < 0.01), we predicted that self-reported overall perceived usefulness of the instructor dashboard would be more highly correlated with likelihood of adoption than any other instructor usefulness item (a sketch of how that prediction might be checked follows the list below). My hope, however, was that including questions about the extent to which the tool addressed these six core values would provide insight into the specific values that contribute to an instructor’s perception of the overall usefulness of a class monitoring tool, and allow for possible segmentation in the future.

  1. Engagement – Will the tool facilitate effective interaction between the instructor and students, both online and in class?
  2. Responsibility – Will the tool assist the instructor in identifying aspects of the course with which the class is having difficulty, and in making timely interventions, both with individual students and with respect to the delivery of the course as a whole?
  3. Course Design – Will insights from the tool help the instructor identify potential improvements to the course content and learning environment, and motivate them to make those improvements in subsequent iterations of the course?
  4. Performance – Will the instructor’s use of the tool have a significant and positive effect on student grades in the class?
  5. Satisfaction – Will the instructor’s use of the tool have a significant and positive effect on satisfaction in the course, both for the instructor and their students?
  6. Relevance – Does the tool give instructors the right kinds of information? Is the use of the tool compatible with existing teaching practices and course objectives? Is it compatible with general teaching practices within the instructor’s discipline?
  7. Overall Usefulness – What is the instructor’s overall impression of the tool?
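
As promised above, here is a minimal sketch of how that prediction might be checked. The Likert data are invented, and the item names merely echo the values listed above; they do not reproduce the actual instrument.

```python
# Hypothetical sketch: compare each usefulness item's correlation with stated
# likelihood of adoption against the overall-usefulness item. Invented data only.
import pandas as pd
from scipy.stats import spearmanr

items = ["engagement", "responsibility", "course_design", "performance",
         "satisfaction", "relevance", "overall_usefulness"]

# Each row is one instructor; values are 1-5 Likert responses plus intention to adopt.
df = pd.DataFrame(
    [[4, 5, 3, 4, 4, 5, 4, 5],
     [3, 4, 2, 3, 3, 4, 3, 3],
     [5, 5, 4, 4, 5, 5, 5, 5],
     [2, 3, 2, 2, 3, 3, 2, 2],
     [4, 4, 3, 5, 4, 4, 4, 4]],
    columns=items + ["intent_to_adopt"],
)

# Spearman's rho for each item against intention to adopt; if the prediction holds,
# 'overall_usefulness' should sit at the top of the ranking.
rhos = {}
for item in items:
    rho, _ = spearmanr(df[item], df["intent_to_adopt"])
    rhos[item] = rho

for item, rho in sorted(rhos.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:20s} rho = {rho:+.2f}")
```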

Student Course Report

[Figure: LAAM student perceptions survey items]

With respect to the student dashboard, the instructor’s perception of the usefulness of the tool was operationalized from a learning perspective, in terms of seven values that instructors commonly hold regarding student behavior: (1) engagement, (2) responsibility, (3) content, (4) collaboration, (5) performance, (6) satisfaction, and (7) relevance. We predicted that self-reported overall perceived usefulness of the student dashboard would be more highly correlated with likelihood of adoption than any other student usefulness item. We hoped that including questions about the extent to which the tool addresses these seven core values would provide insight into the specific values that contribute to an instructor’s perception of the overall usefulness of a student learning tool, and allow for possible segmentation in the future.

  1. Engagement – Will use of the tool increase student interaction with their learning environment, online and in class?
  2. Content – Will the tool assist students in identifying topics and skills with which they are having difficulty?
  3. Responsibility – Will use of the tool increase the likelihood that students actively seek out timely assistance or remediation for topics and skills with which they are having difficulty?
  4. Collaboration – Will the tool encourage collaborative and peer-to-peer activity within the online learning environment?
  5. Performance – Will the tool increase students’ chances of success in the course (i.e. passing, achieving a high grade, etc.)?
  6. Satisfaction – Does the use of the tool increase the student’s satisfaction with the course?
  7. Relevance – Is the information provided to the student relevant and helpful in facilitating the student’s success in the course?
  8. Overall Usefulness – What is the instructor’s overall impression of the student tool?

Ease of Use

The basic criteria by which ease of use is evaluated are the same regardless of tool or perspective: (1) navigation, (2) understanding, (3) information quantity, and (4) appearance. As in the operationalization of usefulness, self-reported overall ease of use was predicted to correlate more strongly with likelihood of adoption than any of the other ease-of-use measures. Again, however, we hoped that including questions about the extent to which the tool addressed various aspects of ease of use would provide insight into the specific values that contribute to an instructor’s perception of overall ease of use, and allow for possible segmentation in the future.

  1. Navigation – Can the instructor easily find the information they are looking for?
  2. Understanding – Is the information presented in a way that is accessible, comprehensible, and actionable?
  3. Information Quantity – Does the tool present so much information that it overwhelms the instructor? Or so little that the instructor is left with more questions than answers?
  4. Appearance – Does the instructor find the interface appealing and generally enjoyable to work with?
  5. Overall Ease – What is the instructor’s overall impression of the tool’s ease of use?

Relative Value

Ali et al. (2013) found that prior exposure to the graphical user interface of a similar learning analytics tool was among the measures most highly correlated with the behavioral intention to adopt a tool, although the correlation was not significant at either the 0.01 or the 0.05 level. It was therefore important to include a question about the value of the Blackboard Analytics reports relative to other, similar tools with which the respondent was familiar.

Survey Results

We piloted A4L in eight classes. Of these, only three made serious use of the tool, and only two faculty members responded to the survey. For the most part, response rates among students were far too low to be informative, except in two course sections in which the instructor incentivized participation by offering an across-the-board grade bonus if 90% of the class completed the survey. In that case the response rate was nearly 70%, but the students were in a predominantly online post-graduate professional program, so it was impossible to generalize the results to the Emory University community as a whole. As a consequence of the poor response rates, the feedback we received (both quantitative and qualitative) was treated anecdotally, but it nevertheless provided several rich insights that informed our eventual decision to license the product.

In spite of challenges associated with the nature and motivation level of our convenience sample (behaviors that were, in a way, themselves useful indicators of a low likelihood to adopt), I have a great deal of confidence in our implementation of the Learning Analytics Acceptance Model (LAAM), and am eager to see it put to use again with a larger sample of participants and with practices in place to increase response rates.

Rethinking Student Success: Analytics in Support of Teaching and Learning

Presented at the 2014 Blackboard Institutional Performance Conference (30 – 31 October 2014)

Passing grades and retention through to degree are essential to success in higher education, but these factors are too often mistaken for ends in themselves. A high-performing student environment has provided teachers and researchers at Emory University with a space to think critically about what success means, and about the extent to which data might inform the design of successful learning environments. This presentation will (1) discuss some of the unique challenges encountered by Emory University during its 2013-2014 Blackboard Analytics pilot, (2) describe several provisional insights gained from exploratory data mining, and (3) outline how Emory’s pilot experience has informed support of learning analytics on campus.

On Magic Beans: Or, Is Learning Analytics simply Educational Data Science, Poorly Done?

UPDATE 31 January 2017: This blog post was written during 2014. Since that time, Blackboard has made several very important and strategic hires, including Mike Sharkey, John Whitmer, and others who are not only well-regarded data scientists but also passionate educators. Since 2014, Blackboard has become a leader in educational data science, conducting generalizable research to arrive at insights with the potential to make a significant impact on how we understand teaching and learning in the 21st century. Blackboard has changed. Blackboard is now committed to high-quality research in support of rigorously defensible claims to efficacy. Blackboard is not in the business of selling magic beans. Nor is Blackboard the only company doing excellent work in this way. As this article continues to be read and shared, I still believe it has value. But it should be noted that the concerns I express here reflect the state of a field and an industry still in its infancy. The irony the post describes is still present, to be sure, and we should all work to increase our data literacy so that we can spot the magic beans where they exist, but it should also be noted that educational technology companies are not enemies. Teachers, researchers, and edtech companies alike are struggling together to understand the impact of their work on student success. Appreciating that fact, and working together in a spirit of honesty and openness, is crucial to the success of students and institutions of higher education in the 21st century.


The learning analytics space is currently dominated not by scholars, but by tool developers and educational technology vendors with a vested interest in getting their products to market as quickly as possible. The tremendous irony of these products is that, on the one hand, they claim to enable stakeholders (students, faculty, administrators) to overcome the limitations of anecdotal decision-making and achieve a more evidence-based approach to teaching and learning. On the other hand, the effectiveness of the vast majority of learning analytics tools is untested. In other words, vendors insist upon the importance of evidence-based (i.e. data-driven) decision-making, but rely upon anecdotal evidence to support claims about the value of their analytics products.

In the above presentation, Kent Chen (former Director of Market Development for Blackboard Analytics) offers a startlingly honest account of the key factors motivating the decision to invest in Learning Analytics:

Analytics, I believe, revolves around two key fundamental concepts. The first of these fundamental concepts is a simple question: is student activity a valid indicator of student success? And this question is really just asking, is the amount of work that a student puts in a good indicator of whether or not that student is learning? Now this is really going to be the leap of faith, the jumping off point for a lot of our clients

Is learning analytics based on a leap of faith? If this is actually the case, then the whole field of learning analytics is premised on a fallacy. Specifically, it begs the question by assuming its conclusion in its premises: “we can use student activity data to predict student success, because student activity data is predictive of student success.” Indeed, we can see this belief in ‘faith as first principle’ in the Blackboard Analytics product itself, which famously fails to report on its own use.

Fortunately for Chen (and for Blackboard Analytics), he’s wrong. During the course of Emory’s year-long pilot of Blackboard Analytics for Learn, we were indeed able to find small but statistically significant correlations between several measures of student activity and success (defined as a grade of C or higher). Our own findings provisionally support the (cautious) use of student course accesses and interactions as heuristics on the basis of which an instructor can identify at-risk students. When it comes to delivering workshops to faculty at Emory, our findings are crucial, not only to making a case in defense of the value of learning analytics for teaching and course design, but also as we discuss how those analytics might most effectively be employed. In fact, analytics is valuable as a way of identifying contexts in which embedded analytic strategies (i.e. student-facing dashboards) might have no, or even negative, effects, and it is incumbent upon institutional researchers and course designers to use the data they have in order to evaluate how to use that data most responsibly. Paradoxically, one of the greatest potential strengths of learning analytics is that it provides us with insight into the contexts and situations where analytics should not be employed.
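
For readers curious what such a check can look like, here is a minimal sketch of one common approach, a point-biserial correlation between a single activity measure and a binary success flag. The numbers and variable names are invented for illustration; they do not reproduce Emory’s pilot data, variable definitions, or results.

```python
# Hypothetical sketch of the kind of relationship described above: a point-biserial
# correlation between an LMS activity measure and success (grade of C or higher).
# The numbers are invented for illustration and are not Emory's pilot data.
import numpy as np
from scipy.stats import pointbiserialr

course_accesses = np.array([12, 45, 30, 5, 60, 22, 3, 41, 18, 55])  # accesses per student
succeeded       = np.array([ 0,  1,  1, 0,  1,  1, 0,  1,  0,  1])  # 1 = grade of C or higher

r, p = pointbiserialr(succeeded, course_accesses)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```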

I should be clear that I use Blackboard Analytics as an example here solely for the sake of convenience. In Blackboard’s case, the problem is not so much a function of the product itself (a data model that is often mistaken for a reporting platform) as of the fact that the company does not understand the product’s full potential, which leads to investment in the wrong areas of product development, clichéd marketing, and unsophisticated consulting practices. The same use of anecdotal evidence to justify data-driven approaches to decision-making is endemic to a learning analytics space dominated by educational technology vendors clamoring to make hay from learning analytics while the sun is shining.

I should also say that these criticisms do not necessarily apply to learning analytics researchers (like those involved with the Society for Learning Analytics Research and scholars working in educational data mining). This is certainly not to say that researchers do not have their own faith commitments (we all do, as a necessary condition of knowledge in general). Rather, freed from the pressure to sell a product, this group is far more reflective about how it understands its concepts. As a community, the fields of learning analytics and educational data mining are constantly grappling with questions about the nature of learning, the definition(s) of student success, how concepts are best operationalized, and how specific interventions might be developed and evaluated. To the extent that vendors are not engaged in this kind of reflective activity — that immediate sales trump understanding — it might be argued that vendors are giving ‘learning analytics’ a bad name, since they and the learning analytics research community are engaged in fundamentally different activities. Or perhaps the educational data science community has made the unfortunate decision to adopt a name for its activity that is already tainted by the tradition of ‘decision support’ in business, which is itself nothing if not dominated by a similar glut of vendors using faith in data to sell their magic beans.