Piloting Blackboard Analytics using the Learning Analytics Acceptance Model (LAAM)

In the summer of 2013, I designed a year-long pilot of Blackboard Analytics for Learn™ (A4L), a data warehousing solution that is meant to be accessed via one of three different GUI reporting environments: an instructor dashboard, a student dashboard, and a BI reporting tool. The overall approach to tool assessment was multimodal, but the value of the instructor and student dashboards was evaluated using a survey instrument that I developed based on the Learning Analytics Acceptance Model (LAAM) described and validated by Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013). What follows is a brief description of our pilot methodology and survey instrument.

The Learning Analytics Acceptance Model

The Learning Analytics Acceptance Model (LAAM) is based on the research of Ali, Asadi, Gasevic, Jovanovic, and Hatala (2013), who provisionally found a positive correlation between a tool's (1) usefulness, (2) ease of use, and (3) perceived relative value and the likelihood of its adoption. The authors acknowledge that the specific ways in which these variables are operationalized will depend on the nature of the tool being assessed. Because Blackboard Analytics for Learn™ is a learning analytics tool with both an instructor dimension (i.e. the instructor dashboard) and a learner dimension (i.e. the student dashboard), it was important to adapt the basic model to include questions that addressed the particularities of each tool and perspective. Regardless of the primary user of a tool (instructor or student), however, the decision to adopt a tool or tool-set within a particular classroom environment is made by the instructor. Because it is the instructor's perceptions of usefulness, ease of use, and relative value that ultimately inform that decision, even the usefulness, ease of use, and relative value of the student dashboard were assessed in terms of the instructor's perceptions, in addition to being assessed directly with the students themselves. Our survey of student perceptions was designed to be directly comparable to the survey of instructor perceptions, both to gather additional information about the usefulness of the product and to inform the larger decision of whether to invest in it at the enterprise level. Although student experience does not typically have a direct or immediate influence on an instructor's likelihood of adopting a tool, this information is helpful in understanding the extent to which instructors are 'in tune' with their students' experience of the class.
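To make the model concrete: each of these constructs is measured with Likert-scale survey items, and the core analysis comes down to correlating those scores with a respondent's stated likelihood of adopting the tool. The sketch below shows one way such an analysis might look, assuming responses have been exported to a CSV with one row per respondent; the file name and column names are illustrative placeholders, not the actual instrument.

```python
# Minimal sketch of a LAAM-style analysis: correlate each construct score with
# the stated likelihood of adoption. Assumes a hypothetical CSV export with one
# row per respondent and Likert-scale scores in the columns named below.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.read_csv("laam_responses.csv")  # hypothetical export

constructs = ["usefulness_overall", "ease_of_use_overall", "relative_value"]
for construct in constructs:
    # Spearman's rho is a reasonable choice for ordinal Likert-scale items.
    rho, p = spearmanr(responses[construct], responses["adoption_likelihood"])
    print(f"{construct}: rho={rho:.2f}, p={p:.3f}")
```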

Survey Design

Usefulness

Instructor Course Reports

(Figure: LAAM_InstructorPerceptions survey items)

With respect to the instructor reports, the instructor's perception of the usefulness of the tool was operationalized from the teaching perspective, and in terms of six core instructional values: (1) engagement, (2) responsibility, (3) course design, (4) performance, (5) satisfaction, and (6) relevance. Although Ali et al. (2013) found that only the perceived ability to identify learning contents that needed improvement (responsibility) was significantly correlated with the behavioral intention to adopt a tool (p < 0.01), we predicted that self-reporting about the overall perceived usefulness of the instructor dashboard would be more highly correlated with likelihood of adoption than any other instructor usefulness item (a sketch of this item-level comparison follows the list below). We hoped, however, that including questions about the extent to which the tool addressed these six core values would provide insight into the specific values that contributed to an instructor's perception of the overall usefulness of a class monitoring tool, and allow for possible segmentation in the future.

  1. Engagement – Will the tool facilitate effective interaction between the instructor and students, both online and in class?
  2. Responsibility – Will the tool assist the instructor in identifying aspects of the course with which their class is having difficulty, and in making timely interventions, both with individual students and with respect to the delivery of the course as a whole?
  3. Course Design – Will insights from the tool help the instructor to identify potential improvements to the course content and learning environment, and motivate them to make those improvements in subsequent iterations of the course?
  4. Performance – Will the instructor's use of the tool have a significant and positive effect on student grades in the class?
  5. Satisfaction – Will the instructor's use of the tool have a significant positive effect on satisfaction in the course, both for the instructor and their students?
  6. Relevance – Does the tool give instructors the right kinds of information? Is the use of the tool compatible with existing teaching practices and course objectives? Is the use of the tool compatible with general teaching practices within the instructor's discipline?
  7. Overall Usefulness – What is the instructor's overall impression of the tool?
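As noted above, we predicted that the overall-usefulness item would track likelihood of adoption more closely than any individual value item. A minimal sketch of that item-level comparison is below, again assuming a hypothetical CSV export; the item column names are placeholders for the actual survey items.

```python
# Sketch: correlate each instructor-usefulness item with the adoption-likelihood
# item and rank the items, to check whether "overall usefulness" comes out on top.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.read_csv("laam_instructor_responses.csv")  # hypothetical export

usefulness_items = [
    "engagement", "responsibility", "course_design", "performance",
    "satisfaction", "relevance", "overall_usefulness",
]

correlations = {}
for item in usefulness_items:
    rho, _ = spearmanr(responses[item], responses["adoption_likelihood"])
    correlations[item] = rho

# List the items from strongest to weakest association with intention to adopt.
for item, rho in sorted(correlations.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: rho={rho:.2f}")
```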

Student Course Report

(Figure: LAAM_StudentPerceptions survey items)

With respect to the student dashboard, the instructor's perception of the usefulness of the tool was operationalized from a learning perspective, and in terms of seven values that instructors commonly hold regarding student behavior: (1) engagement, (2) content, (3) responsibility, (4) collaboration, (5) performance, (6) satisfaction, and (7) relevance. We predicted that self-reporting about the overall perceived usefulness of the student dashboard would be more highly correlated with likelihood of adoption than any other student usefulness item. We hoped that including questions about the extent to which a tool addresses these seven core values would provide insight into the specific values that contribute to an instructor's perception of the overall usefulness of a student learning tool, and allow for possible segmentation in the future (a sketch of one such segmentation approach follows the list below).

  1. Engagement – Will use of the tool increase student interaction with their learning environment, both online and in class?
  2. Content – Will the tool assist students in identifying topics and skills with which they are having difficulty?
  3. Responsibility – Will the use of the tool increase the likelihood that students will actively seek out timely assistance or remediation for topics and skills with which they are having difficulty?
  4. Collaboration – Will the tool encourage collaborative and peer-to-peer activity within the online learning environment?
  5. Performance – Will the tool increase students' chances of success in the course (e.g. passing, achieving a high grade)?
  6. Satisfaction – Will the use of the tool increase students' satisfaction with the course?
  7. Relevance – Is the information provided to the student relevant and helpful in facilitating the student's success in the course?
  8. Overall Usefulness – What is the instructor’s overall impression of the student tool?
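The segmentation mentioned above could take many forms; one simple possibility, given a large enough sample, would be to cluster respondents by their item-level value profiles and then compare adoption likelihood across the resulting segments. The sketch below assumes scikit-learn is available; the file name, column names, and number of clusters are all assumptions made for illustration.

```python
# Sketch: cluster respondents by their item-level value profiles (k chosen ad hoc).
# File name, column names, and k are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

responses = pd.read_csv("laam_student_dashboard_responses.csv")  # hypothetical

value_items = ["engagement", "content", "responsibility", "collaboration",
               "performance", "satisfaction", "relevance"]
profiles = StandardScaler().fit_transform(responses[value_items])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
responses["segment"] = kmeans.labels_

# Characterize each segment by its mean item scores and mean adoption likelihood.
print(responses.groupby("segment")[value_items + ["adoption_likelihood"]].mean().round(2))
```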

Ease of Use

The basic criteria by which ease of use is evaluated are the same regardless of tool or perspective: (1) navigation, (2) understanding, (3) information quantity, and (4) appearance. As in the operationalization of usefulness, self-reporting of overall ease of use was predicted to correlate more strongly with likelihood of adoption than any of the other ease-of-use measures. Again, however, we hoped that including questions about the extent to which the tool addressed various aspects of ease of use would provide insight into the specific values that contribute to an instructor's perception of overall ease of use, and allow for possible segmentation in the future (a brief sketch of how the coherence of these items could be checked follows the list below).

  1. Navigation – Can the instructor easily find the information they are looking for?
  2. Understanding – Is the information presented in a way that is accessible, comprehensible, and actionable?
  3. Information Quantity – Does the tool present so much information that it overwhelms the instructor, or so little that the instructor is left with more questions than answers?
  4. Appearance – Does the instructor find the interface appealing and generally enjoyable to work with?
  5. Overall Ease – What is the instructor’s overall impression of the tool’s ease of use?
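Because the prediction treats overall ease of use as a single summary judgment, it would also be worth checking, on a larger sample, that the four component items actually hang together as one scale. The sketch below computes Cronbach's alpha for that purpose; this check is not part of the instrument described here, and the file and column names are assumptions.

```python
# Sketch: Cronbach's alpha for the four ease-of-use items, computed from the
# item variances and the variance of the summed scale score.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

responses = pd.read_csv("laam_ease_of_use_responses.csv")  # hypothetical export
ease_items = ["navigation", "understanding", "information_quantity", "appearance"]
print(f"Cronbach's alpha: {cronbach_alpha(responses[ease_items]):.2f}")
```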

Relative Value

Ali et al. (2013) found that prior exposure to the graphical user interface of a similar learning analytics tool was among the measures most highly correlated with behavioral intention to adopt a tool, although the correlation was not significant at either the 0.01 or 0.05 level. It was therefore important to include a question asking respondents to rate the value of the Blackboard Analytics reports relative to other similar tools with which they were familiar.
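Prior exposure is naturally a yes/no characteristic, so if a future administration of the survey records it alongside the relative-value item, a point-biserial correlation would be one straightforward way to relate it to adoption intention. The sketch below is illustrative only; the file and column names are assumptions and do not come from our instrument.

```python
# Sketch: point-biserial correlation between a binary "prior exposure to a
# similar tool" item (coded 0/1) and the adoption-likelihood item.
import pandas as pd
from scipy.stats import pointbiserialr

responses = pd.read_csv("laam_responses.csv")  # hypothetical export

r, p = pointbiserialr(responses["prior_exposure"], responses["adoption_likelihood"])
print(f"point-biserial r={r:.2f}, p={p:.3f}")  # compare against the 0.05 and 0.01 levels
```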

Survey Results

We piloted A4L in eight classes. Of these eight, only three made serious use of the tool, and only two faculty members responded to the survey. Response rates among students were, for the most part, far too low to be informative, except in two course sections in which the instructor incentivized participation by offering an across-the-board grade bonus if 90% of the class completed the survey. In that case the response rate was nearly 70%, but the students were in a predominantly online post-graduate professional program, so the results could not be generalized to the Emory University community as a whole. As a consequence of the poor response rates, the feedback we received (both quantitative and qualitative) was treated anecdotally, but it nevertheless provided several rich insights that informed our eventual decision about licensing the product.

In spite of the challenges associated with the nature and motivation level of our convenience sample (behaviors that were themselves helpful, in a way, as indicators of a low likelihood of adoption), I have a great deal of confidence in our implementation of the Learning Analytics Acceptance Model (LAAM), and am eager to put it to use again with a larger sample of participants and with practices in place that would increase response rates.