Using SNAPP in OS X

In recent months, many folks (myself included) have been frustrated by a sudden incompatibility between SNAPP and the latest versions of popular browsers and Java. SNAPP (Social Networks Adapting Pedagogical Practice) is a highly useful tool for visualizing discussion board activity in Blackboard, Moodle, and Sakai. I have wanted to use it, and to recommend it to others. With this workaround, I can do both.

I should note up front that I have only successfully implemented this workaround in my own desktop environment, and have not tested it on other platforms or with other versions. If this works for you under other conditions, or if you find other, more versatile workarounds, please post to the comments.

The environment in which I have been able to successfully launch SNAPP is as follows:

Under these conditions, getting SNAPP to work is actually…well, a snap!

1. In the Java Control Panel (System Preferences –> Java), go to the Security tab and edit the Exception Site List to include both the code source and the base URL from which you expect to launch the SNAPP applet. (If either of these is a plain http address, Java will yell at you for not using https. You’ll have to accept, or else host the code in some other, more secure location. So it goes.)

SNAPP Java Fix

2. Click OK to commit the changes.

3. Hurray!!!
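If you prefer to script the change (or roll it out to several machines), the Exception Site List appears to be just a plain text file with one URL per line. Here is a minimal sketch, assuming the standard OS X deployment path (verify it on your own machine) and using stand-in URLs rather than any real addresses:

```python
from pathlib import Path

# Assumed location of the Exception Site List on OS X; verify on your machine.
sites = Path.home() / "Library/Application Support/Oracle/Java/Deployment/security/exception.sites"
sites.parent.mkdir(parents=True, exist_ok=True)

# One URL per line: the applet's code source and the page that launches it.
# These are stand-in URLs; substitute your own.
urls = ["https://snapp.example.org", "https://blackboard.example.edu"]
with sites.open("a") as f:
    for url in urls:
        f.write(url + "\n")

print(sites.read_text())
```

The same edits can, of course, be made by hand through the Control Panel as described above; the file is only a convenience.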

In addition to the primary website for the SNAPP tool, Sandeep Jayaprakash from Marist College has provided some excellent documentation on installing SNAPP to work on a local machine. It is well worth checking out:

Teaching the Unteachable: On the Compatibility of Learning Analytics and Humane Education

This paper is an exploratory effort to find a place for learning analytics in humane education. After distinguishing humane education from training on the basis of the Aristotelian model of intellectual capabilities, and arguing that humane education is distinct by virtue of its interest in cultivating prudence, which is unteachable, an account of three key characteristics of humane education is provided. Appealing to thinkers of the Italian Renaissance, it is argued that ingenium, eloquence, and self-knowledge constitute the what, how, and why of humane education. Lastly, looking to several examples from recent learning analytics literature, it is demonstrated that learning analytics is not only helpful as a set of aids for ensuring success in scientific and technical disciplines, but in the humanities as well. In order to function effectively as an aid to humane education, however, learning analytics must be embedded within a context that encourages continuous reflection, responsiveness, and personal responsibility for learning.

Access Full Text Here

Learning Analytics for a World of Constant Change

Slides from a presentation delivered during a Digital Pedagogy Meetup in Atlanta (20 February 2014), discussing ways in which traditional analytics may stifle innovation, and identifying several ways in which embedded approaches to learning analytics may actually contribute to the development of personal responsibility, critical thinking, digital citizenship, and imagination — characteristics so vital to surviving and thriving in the 21st century.

“The Society for Learning Analytics Research defines learning analytics as the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. Universities are increasingly using analytics to increase student retention and performance. Yet, the assumptions that frequently underlie such initiatives are also inconsistent with pedagogies that would seek to cultivate creativity and innovation, capacities that are necessary in order to survive and thrive in a world that is constantly and increasingly changing. It will be argued that innovation and analytics are not incompatible, however, but rather that they are reconcilable through a shift in emphasis and priority. The presentation will sketch a provisional model of learning analytics that puts analytics in the service of (a humanist conception of) learning, rather than the reverse, and provide concrete examples of how this might be applied in practice.”

“Educational Data Mining and Learning Analytics”

This week, Ryan Baker posted a link to a piece, co-written with George Siemens, that is meant to function as an introduction to the fields of Educational Data Mining (EDM) and Learning Analytics (LA). “Educational Data Mining and Learning Analytics” is a book chapter primarily concerned with methods and tools, and it does an excellent job of summarizing some of the key similarities and differences between the two fields in this regard. However, in spite of the fact that the authors make a point of explicitly stating that EDM and LA are distinctly marked by an emphasis on making connections to educational theory and philosophy, the theoretical content of the piece is unfortunately quite sparse.

The tone of this work actually brings up some concerns that I have about EDM/LA as a whole. The authors observe that EDM and LA have been made possible, and have in fact been fueled, by (1) increases in technological capacity and (2) advances in business analytics that are readily adaptable to educational environments.

“The use of analytics in education has grown in recent years for four primary reasons: a substantial increase in data quantity, improved data formats, advances in computing, and increased sophistication of tools available for analytics”

The authors also make a point of highlighting the centrality of theory and philosophy in informing methods and interpretation.

“Both EDM and LA have a strong emphasis on connection to theory in the learning sciences and education philosophy…The theory-oriented perspective marks a departure of EDM and LA from technical approaches that use data as their sole guiding point”

My fear, however, which seems justified in light of the imbalance between theory and method in this chapter (a work meant to introduce, summarize, and so represent the two fields), is that the tools and methods that the fields have adopted, along with the technological- and business-oriented assumptions (and language) that those methods imply, have actually had a tendency to drive their educational philosophy.  From their past work, I get the sense that Baker and Siemens would both agree that the educational / learning space differs markedly from the kind of spaces we encounter in IT and business more generally. If this is the case, I would like to see more reflection on the nature of those differences, and then to see various statistical and machine learning methods evaluated in terms of their relevance to educational environments as educational environments.

Donkey Carried by the Cart

As a set of tools for “understanding and optimizing learning and the environments in which it occurs,” learning analytics should be driven, first and foremost, by an interest in learning. This means that each EDM/LA project should begin with a strong conception of what learning is, and of the types of learning that it wants to ‘optimize’ (a term that is, itself, imported from technical and business environments into the education/learning space, and which is not at all neutral). To my mind, however, basic ideas like ‘learning’ and ‘education’ have not been sufficiently theorized or conceptualized by the field. In the absence of such critical reflection on the nature of education, and on the extent to which learning can in fact be measured, it is impossible to say exactly what it is that EDM/LA are taking as their object. How can we measure something if we do not know what it is? How can we optimize something unless we know what it is for? In the absence of critical reflection, and of maintaining a constant eye on our object, it becomes all too easy to consider our object as if its contours are the same as the limits of our methods, when in actual fact we need to be vigilant in our appreciation of just how much of the learning space our methods leave untouched.

If it is true that the field of learning analytics has emerged as a result of, and is driven by, advancements in machine learning methods, computing power, and business intelligence, then I worry about the risk of mistaking the cart for the horse and, in so doing, becoming blind to the possibility that our horse might actually be a mule—an infertile combination of business and education, which is also neither.

Four (Bad) Questions about Big Data

A colleague recently sent me an email that included four questions that he suggested were the most concerning to both data management companies and customers: *

  • Big Data Tools – What’s working today? What’s next?
  • Big Data Storage – Do organizations have a manageable and scalable storage strategy?
  • Big Data Analytics – How are organizations using analytics to manage their large volume of data and put it to use?
  • Big Data Accessibility – How are organizations leveraging this data and making it more accessible?

These are bad questions.

I should be clear that the questions are not bad on account of the general concerns they are meant to address. Questions about tools, scalable storage, the ways in which data are analyzed (and visualized), and the availability of information are central to an organization’s long-term information strategy. Each of these four questions addresses a central concern with very significant consequences for the extent to which available data can be leveraged, not only to meet current informational requirements, but also to build future capacity. These concerns are good and important. The questions, however, are still bad.

The reason these questions are bad (okay, maybe they’re not bad…maybe I just don’t like them) is that they are unclear about their terms and definitions. In the first place, they imply that there is a separation between something called ‘Big Data’ and the tools, storage, analytics (here used very loosely), and accessibility necessary to manage it. In actual fact, however, there is no such ‘thing’ as Big Data in the absence of each of those four things. Transactional systems (in the most general sense, which also includes sensors) produce a wide variety of data, and it is an interest in identifying patterns in this data that has always motivated empirical scientific research. In other words, it is data, and not ‘Big Data’ that is our primary concern.

The problem with data as objects is that, until recently, we have been radically limited in our ability to capture and store them. A transactional system may produce data, but how much can we capture? How much can we store? For how long? Until recently, technological limitations have radically limited our ability to capture, store, and analyze the immense quantities of data that are generated, and have meant working with samples, and using inferential statistics to make probable judgements about a population. In the era of Big Data, these technological limitations are rapidly disappearing. As we increase our capacity to capture and store data, we increasingly have access to entire populations. A radical increase in available data, however, is not yet ‘Big Data.’ It doesn’t matter how much data you can store if you don’t also have the capacity to access it. Without massive processing power, sophisticated statistical techniques, and visualization aids, all of the data we collect is for naught, pure potentiality in need of actualization. It is only once we make population data meaningful in its entirety (not sampling from our population data) through the application of statistical techniques and sound judgement that we have something that can legitimately be called ‘Big Data.’ A datum is a thing given to experience. The collection and visualization of a population of data produces another thing given to experience, a meta-datum, perhaps.
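The sample-versus-population contrast above can be made concrete with a small illustrative sketch (all data here are synthetic and the numbers are hypothetical): with only a sample in hand, we must estimate a population parameter and attach a margin of error to that estimate; with the full population, we simply compute it.

```python
import random
import statistics

random.seed(42)

# A synthetic "population" of 100,000 transaction values
population = [random.gauss(mu=50.0, sigma=12.0) for _ in range(100_000)]

# The traditional situation: a small sample plus an inferential estimate
sample = random.sample(population, 200)
sample_mean = statistics.mean(sample)
# Rough 95% confidence interval for the mean (normal approximation)
margin = 1.96 * statistics.stdev(sample) / (len(sample) ** 0.5)
print(f"sample estimate: {sample_mean:.2f} +/- {margin:.2f}")

# The 'Big Data' situation: compute the population parameter directly,
# with no sampling and no margin of error
print(f"population mean: {statistics.mean(population):.2f}")
```

The second print is the whole point: once the entire population is captured, stored, and accessible, the inferential apparatus of the first computation becomes unnecessary for questions about that population.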

In light of these brief reflections, I would like to propose the following (VERY) provisional definition of Big Data (which resonates strongly, I think, with much of the other literature I have read):

Big Data is the set of capabilities (capture, storage, analysis) necessary to make meaningful judgements about populations of data.

By way of closing, I think it is also important to distinguish between ‘Big Data’ on the one hand, and ‘Analytics’ on the other. Although the two are often used in conjunction with each other, it is important to note that using Big Data is not the same as doing analytics. Just as the defining characteristic of Big Data above is increased access (access to data populations instead of samples), so too is increased access the defining characteristic of analytics. In the past, the ability to make data-driven judgements meant either having some level of sophisticated statistical knowledge oneself, or else (more commonly) relying upon a small number of ‘data gurus,’ hired expressly because of their statistical expertise. In contrast to more traditional approaches to institutional intelligence, which involve data collection, cleaning, analysis, and reporting (all of which took time), analytics toolkits quickly perform these operations in real-time, and make use of visual dashboards that allow stakeholders to make timely and informed decisions without also having the skills and expertise necessary to generate these insights ‘from scratch.’

Where Big Data gives individuals access to all the data, Analytics makes Big Data available to all

Big Data is REALLY REALLY exciting. Of course, there are some significant ethical issues that need to be addressed in this area, particularly as the data collected are coming from human actors, but from a methodological point of view, having direct access to populations of data is something akin to a holy grail. From a social scientific perspective, the ability to track and analyze actual behavior instead of relying on self-reporting about behavior on surveys can give us insight into human interactions that was, until now, completely impossible. Analytics, on the other hand, is something about which I am a little more ambivalent. There is definitely something to be said for encouraging data-driven decision-making, even by those with limited statistical expertise. Confronted by pretty dashboards that are primarily (if not exclusively) descriptive, without the statistical knowledge to ask even basic questions about significance (just because there appears to be a big difference between populations on a graph, it doesn’t necessarily mean that there is one), and with no knowledge about the ways in which data are being extracted, transformed, and loaded into proprietary data warehousing solutions, I wonder about the extent to which analytics do not, at least sometimes, just offer the possibility of a new kind of anecdotal evidence justified by appeal to the authority of data. Insights generated in this way are akin to undergraduate research papers that lean heavily upon Wikipedia because, if it’s on the internet, it’s got to be true.
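The worry about dashboards and significance can be made concrete with a toy example (synthetic data, invented numbers): two small groups drawn from the very same distribution, whose means may look different on a bar chart, but where a basic test gives no reason to believe the gap is real.

```python
import random
import statistics

random.seed(7)

# Two small synthetic groups drawn from the SAME underlying distribution
group_a = [random.gauss(70, 15) for _ in range(12)]
group_b = [random.gauss(70, 15) for _ in range(12)]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
print(f"dashboard view: A = {mean_a:.1f}, B = {mean_b:.1f}")  # may look like a gap

# Welch's t statistic: the difference in means relative to its standard error.
# As a rule of thumb, |t| well below ~2 means the gap is within sampling noise.
se = (statistics.variance(group_a) / len(group_a)
      + statistics.variance(group_b) / len(group_b)) ** 0.5
t = (mean_a - mean_b) / se
print(f"t statistic: {t:.2f}")
```

A purely descriptive dashboard shows only the first print; the second computation is exactly the kind of question a stakeholder without statistical training never thinks to ask.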

If it’s data-driven, it’s got to be true.

Analytics Four Square Diagram

I’m not really happy with this diagram. Definitely a work in progress, but hopefully it captures the gist of what I’m trying to sort out here.

* The source of these questions is an event that was recently put on by the POTOMAC Officer’s Club entitled “Big Data Analytics – Critical Support for the Agency Mission”, featuring Ely Kahn, Todd Myers, and Raymond Hensberger.

Learning Analytics as Teaching Practice

Too often, it seems, conversations about learning analytics focus too much on means, and not enough on ends. Learning analytics initiatives are always justified by the promise of using data as a way of identifying students at risk, in order to develop interventions that would increase their chances of success. In spite of the fact that the literature almost always holds such intervention out as a promise, a surprising lack of attention is paid to what these interventions might look like. A recent paper presented by Wise, Zhao, and Hausknecht at the 2013 Conference on Learning Analytics and Knowledge (LAK’13) goes a long way in putting learning analytics in perspective, taking some crucial first steps in the direction of a model of learning analytics as a pedagogical practice.

Analytics ABOUT Learning

Like so many, I often find myself being sucked into the trap of thinking of learning analytics as a set of tools for evaluating learning, as if learning and analytics inform one another as processes that are complementary, but nonetheless distinct. In other words, it is easy for me to think of learning analytics as analytics ABOUT learning. What this group of researchers from Simon Fraser University show, however, is that it is possible to think of learning analytics as a robust pedagogical practice in its own right. From analytics ABOUT learning, Wise, Zhao, and Hausknecht encourage us to think about analytics AS learning.

Analytics AS Learning

The paper is ostensibly interested in analytics for online discussions, and is insightful in its emphasis on dialogical factors, like the extent to which students not only actively contribute their own thoughts and ideas, but also engage in ‘listening’-type behaviors (i.e. thoughtful reading) that would engender engagement in community and a deeper level of discussion. More generally, however, two observations struck me as generally applicable to thinking of learning analytics as a pedagogical practice.

1. Embedded Analytics are also Interventions

Wise et al. make a distinction between embedded analytics, which are “embedded in the discussion interface and can be used by learners in real-time to guide their participation,” and extracted analytics, which involve the collection of traces from learning activity in order to interpret them apart from the learning activity itself. Now, the fact that student-facing activity dashboards are actually also (if not primarily) intervention strategies is perhaps fairly obvious, but I have never thought about them in this way before. #mindblown
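To make the idea of an embedded metric a little more tangible, here is a toy sketch of my own (the event format, names, and numbers are all invented, not the authors’): a ‘listening ratio’ that compares how many peer posts a student has read against how many posts they have written, computed from the discussion activity stream so that it could be shown back to the student inside the interface in real time.

```python
from collections import defaultdict

# Hypothetical event log from a discussion forum: (student, action, post_id).
# "read" events are the 'listening'-type behaviors discussed above.
events = [
    ("ana", "write", 1), ("ben", "read", 1), ("ben", "write", 2),
    ("ana", "read", 2), ("ana", "write", 3), ("ben", "read", 3),
    ("cam", "write", 4),
]

def listening_ratio(events):
    """Posts read divided by posts written, per student (an embedded metric)."""
    reads = defaultdict(int)
    writes = defaultdict(int)
    for student, action, _post in events:
        if action == "read":
            reads[student] += 1
        elif action == "write":
            writes[student] += 1
    students = set(reads) | set(writes)
    # max(..., 1) avoids division by zero for students who only read
    return {s: reads[s] / max(writes[s], 1) for s in students}

print(listening_ratio(events))  # e.g. ben: 2.0 (listens a lot), cam: 0.0
```

Because the figure updates as the student participates, it functions simultaneously as analytics and as an intervention, which is exactly the embedded/extracted distinction at work.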

2. Analytics are Valued, through and through

By now we all know that, whatever its form, research of any kind always involves values, no matter how much we might seek to be value neutral. The valued nature of learning analytics, however, is particularly salient as we blur the line between analysis (which concerns itself with objects) and learning (which concerns itself with subjects). Regardless of the extent to which we realize how our use of analytics reinforces values and behaviors beyond those explicitly articulated in a curriculum, THAT we are using analytics and HOW we are using them DO have an impact. Thinking carefully about this latent curriculum and actively identifying the core values and behaviors that we would like our teaching practices to reinforce allows us to ensure consistency across our practices and with the larger pedagogical aims that we are interested in pursuing.

Wise, Zhao, and Hausknecht identify six principles (with which I am generally sympathetic) that guide their use of analytics as, and for the sake of, pedagogical intervention:

  1. Integration – in order for analytics to be effectively used by students, the instructor must present those analytics as meaningfully connected to larger purposes and expectations for the course or activity. It is incumbent upon the ‘data-driven instructor’ to ensure that data are not presented merely as a set of numbers, but rather as meaningful information of immediate relevance to the context of learning.
  2. Diversity (of metrics) – if students are presented with too few sources of data, it becomes very easy for them to fixate upon optimizing those few data points to the exclusion of others. Sensitive also to the opposite extreme, which would be to overload students with too much data, it is important to present data in such a way as to encourage a holistic approach to learning and learning aims.
  3. Agency – students should be encouraged to use the analytics to set personal goals, and to use analytics as a way of monitoring their progress relative to these. Analytics should be used to cultivate autonomy and a strong sense of personal responsibility. The instructor must be careful to mitigate against a ‘big-brother’ approach to analytics that would measure all students against a common and rigid set of instructor-driven standards. The instructor must also act to mitigate against the impression that this is what is going on, which has the same effect.
  4. Reflection – encouraging agency involves cultivating habits of self-reflection. The instructor should, therefore, provide explicit time and space for reflection on analytics. The authors, for example, use an online reflective journal that is shared between students and instructor.
  5. Parity – activities should be designed to avoid an imbalance of power in which the instructor collects data on the students, and instead use data as a reflective and dialogic tool between the instructor and students. In other words, data should not be used for purposes of evaluation or ranking, but rather should be used as a critical tool for the purpose of identifying and correcting faults or reinforcing excellences.
  6. Dialogue – just as analytics are used as an occasion for students to cultivate agency through active reflection on their behavior, the instructor should “expose themselves to the same vulnerability as the students.” Not only should instructors attend to and reflect upon their own analytics, but they should do so in full view of the class and in such a way as to allow students to criticize them just as they critique the students.


How Big Data Is Taking Teachers Out of the Lecturing Business

A Summary and Response

In his Scientific American article, “How Big Data Is Taking Teachers Out of the Lecturing Business,” Seth Fletcher describes the power of data-driven adaptive learning for increasing the efficacy of education while also cutting the costs associated with hiring teachers. Looking specifically at the case of Arizona State University, where computer-assisted learning has been adopted as an efficient way to facilitate the completion of general education requirements (math in particular), Fletcher describes a situation in which student scores increase, teacher satisfaction improves (as teachers shift from lecturing to mediating), and profit is to be made by teams of data scientists for hire.

Plato and Aristotle (della Robbia, OPA, Florence)

There are, of course, concerns about computer-assisted adaptive learning, including those surrounding issues of privacy and the question of whether such a data-driven approach to education doesn’t tacitly favor STEM (training in which can be easily tested and performance quantified) over the humanities (which demands an artfulness not easily captured by even the most elaborate of algorithms). In spite of these concerns, however, Fletcher concludes with the claim that “sufficiently advanced testing is indistinguishable from instruction.” This may very well be the case, but his conception of ‘instruction’ needs to be clarified here. If by instruction Fletcher means to say teaching in general, then the implication of his statement is that teachers are becoming passé, and will at some point become entirely unnecessary. If, on the other hand, instruction refers only to a subset of activities that take place under the broader rubric of education, then there remains an unquantifiable space for teachers to practice pedagogy as an art, the space of criticism and imagination…the space of the humanities, perhaps?

As the title of Fletcher’s piece suggests, Big Data may very well be taking teachers out of the lecturing business, but it is not taking teachers out of the teaching business. In fact, one could argue that lecturing has NEVER been the business of teaching. In illustrating the aspects of traditional teaching that CAN be taken over by machines, big data initiatives are providing us with the impetus to return to questions about what teaching is, to clarify the space of teaching as distinct from instruction, and with respect to which instruction is of a lower-order even as it is necessary. Once a competence has been acquired and demonstrated, the next step is not only to put that competency to use in messy, real-world situations–situations in which it is WE who must swiftly adapt–but also to take a step back in order to criticize the assumptions of our training. Provisionally (ALWAYS provisionally), I would like to argue that it is here, where technê ends and phronesis begins, that the art of teaching begins as well.