Against ‘Dumbing Down’: A Response to Adam Cooper’s On the Question of Validity in Learning Analytics

The following is a response to Adam Cooper’s blog post, On the Question of Validity in Learning Analytics, and to a brief critique that I issued HERE. This is an interesting conversation, and one that I think deserves to be highlighted rather than buried in comments. I am fond of Adam’s writing and reflection on issues in the learning analytics space, and am grateful for the opportunity to enter into a discussion on this matter. As a point of reference, I am quoting Adam’s comment to my original response here:

I think I am paying the price of having written a somewhat compressed piece about a complex topic…

I did not intend to convey the idea that conflation of concerns is desirable; indeed, the first listed point in the conclusion section could be unpacked to include the idea that we (a community with expertise in different disciplines, but also a community that would benefit from drawing more scholars into it) need to better articulate the distinctions within a wider interperation [sic] of “validity” than exists in particular disciplinary areas. This is not a “loosening of conceptual clarity” but a call for widening of conceptual view.

It is precisely the risk of under-appreciating complexity that prompted me to write the article. For all non-expert practitioners may need a set of expert advisors, I believe the reality is that they are unlikely to have access to them, or simply not see the need to consult them. At present, it seems likely that these non-experts will make decisions based on seeing a visually-attractive dashboard, a quoted prediction rate, or a statement like “based on the system used by XXX”. We need to move this narrow/near view forwards, to widen the view, and yes, to raise awareness of the need to consult experts. In the process, we should be aware that specialised vocabularies can be a source of difficulty. The same applies across disciplines and vocational areas; not all teams involved in implementing learning analytics will be as diverse as would be ideal. There is, I think, a need to develop awareness of the many sides of “validity”, even within the community.

So… yes, I’m all for conceptual sophistication, but also for “dumbing-down”. The way forward I see is to develop a more socialised conceptual map as a basis for working out how best to simplify the message.

In what follows, I don’t believe that I am truly disagreeing with Adam in any significant way. If anything, I am piggy-backing on his original piece, riffing on some of its important themes, and looking to clarify some concerns that it raised for me.


I take words very seriously. Words define the contours of things, shape our perceptions of reality, and have real effects on our behavior. What concerns me about Adam Cooper’s advocacy of a ‘dumbed-down vocabulary’ (or a kind of two-tiered approach to methodology: rigor for data scientists and a ‘socialised’ conceptual map for the rest) is that a simplified message may serve not simply to facilitate analytical practice, but also to change it.

The primary virtue of analytics is that it permits and encourages an evidence-based approach to decision-making. Whether or not it delivers on that promise, analytics claims to overcome dependence upon anecdotal accounts of human behavior by allowing us to get at the behavior itself. The only way that analytics can do this is through methods that employ shared and clearly defined conceptions of validity and reliability.

Reliability is ‘easy,’ as it merely refers to the extent to which one’s approach yields the same result through repeated application. Validity, on the other hand, is hard. It is hard because it is a measure of the extent to which one’s claims about reality are true. It is not the case that there are many ‘sides’ of validity. To say this is to imply that all conceptions of validity ultimately get at the same thing, as in the parable of the blind men and the elephant. Instead, conceptions of validity differ according to the nature of the specific objects and domains with respect to which the concept is applied.
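To make the ‘easy’ half of that distinction concrete, here is a minimal sketch (in Python, with invented scores) of one common way reliability is operationalized: the test-retest correlation between two administrations of the same instrument. Note that nothing in it touches the question of whether the instrument measures anything true about reality.

```python
# A minimal sketch of test-retest reliability: the same instrument is
# administered twice to the same people, and reliability is estimated as
# the correlation between the two sets of scores. The numbers below are
# invented purely for illustration.
import numpy as np

first_administration = np.array([12, 15, 9, 20, 17, 11, 14, 18])
second_administration = np.array([13, 14, 10, 19, 18, 10, 15, 17])

# Pearson correlation between the two administrations: values near 1.0
# indicate that repeated application yields (nearly) the same result.
test_retest_r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"test-retest reliability estimate: {test_retest_r:.2f}")

# A high correlation says nothing about whether the instrument measures
# what it claims to measure; that is the (harder) question of validity.
```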

In logic, an argument is valid if it is impossible for its premises to be true and its conclusion false. The domain of logical validity (the standard against which validity is judged) is not reality, but rather a set of axioms, the known rules that define a system (e.g. the law of the excluded middle and the law of non-contradiction). In statistics, by contrast, we do not simply talk about validity. The term ‘validity’ is meaningless unless accompanied by more specific information about a particular domain, and a domain-specific concept of validity is useless unless it can be operationalized in terms of a concrete methodology.
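To illustrate what validity means inside such a closed system, here is a toy sketch (plain Python, nothing assumed beyond the standard library) that brute-forces the truth table for modus ponens. The argument form is valid precisely because no assignment of truth values makes its premises true and its conclusion false.

```python
# Logical validity as a fact about a closed system: check every truth
# assignment for the argument "P -> Q, P, therefore Q" (modus ponens).
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Look for an assignment where both premises (P -> Q and P) are true
# while the conclusion (Q) is false; validity means no such assignment exists.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q
]

print("valid" if not counterexamples else f"invalid: {counterexamples}")
```

The check never consults the world; it consults only the rules of classical propositional logic, which is exactly the sense in which logical validity has a system, not reality, as its domain.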

In the social sciences, certain kinds of validity are more challenging than others. Some general conceptions of validity look to reality as their domain, as in the case of construct validity (which evaluates the extent to which a test is capable of measuring a particular phenomenon) and tests of experimental validity (which evaluate causality). In the absence of direct access to reality, these general forms of validity are operationalized in terms of proxies and justified epistemologically through theory. For example, it is impossible for me to say whether there is such a thing as intelligence, and so it is impossible for me to know for sure whether a particular test is capable of grasping it. What I CAN say is the extent to which several measures that should correlate (according to my theory) actually do. The latter (convergent validity) is NOT a measure of construct validity, but rather a proxy for it. Just as logical validity is a measure of the extent to which a conclusion conforms to the known rules of a closed system, so too is something like convergent validity.

Understanding validity and its limits with respect to particular knowledge domains should give us pause at the moment of decision-making. No matter what our methodology, when working with data (an ancient term whose Latin root means ‘things given’: individual experiences of reality taken as given), we are never working with reality itself, even if our activity has real consequences. Between data science and action, there is always judgment, and it is in this space of judgment that we are wholly and infinitely responsible.
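Before moving on, a concrete illustration of the proxy relationship described above may help. The sketch below (again in Python, with invented scores and placeholder measure names) shows convergent validity operationalized as nothing more than a set of correlations among measures that a theory says should move together; the construct itself never appears anywhere in the computation.

```python
# A minimal sketch of convergent validity as a proxy for construct
# validity: several measures that theory says should tap the same
# construct are correlated with one another. The scores are invented,
# and the measure names are placeholders, not real instruments.
import numpy as np

scores = {
    "measure_a": np.array([12, 15, 9, 20, 17, 11, 14, 18]),
    "measure_b": np.array([30, 35, 25, 44, 40, 28, 33, 41]),
    "measure_c": np.array([7, 9, 5, 12, 11, 6, 8, 10]),
}

names = list(scores)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(scores[a], scores[b])[0, 1]
        print(f"{a} vs {b}: r = {r:.2f}")

# High correlations tell us that the measures move together, as the theory
# predicts; they do not tell us that the underlying construct exists or
# that any one measure has captured it.
```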

To return to more practical matters, my concern with a ‘dumbed down’ definition of validity is that it might lend itself to a failure on the part of practitioners to fully realize their own responsibility for decision-making. When clearly and expertly defined, validity emphasizes (1) the need to strive after accurate articulations of true states of affairs, (2) humility in the face of our epistemological limits, and (3) our responsibility for decision-making. If practitioners are allowed, or even encouraged, to adopt ‘dumbed-down’ conceptions of validity, I fear a loss of humility and a failure to take responsibility for decision-making.

My fear comes, in part, from my experience in market research. The function of the market research industry is not to accurately describe or predict reality. The function of market research is to mitigate the perception of risk and responsibility involved in decision-making. In business (and in education), the vast majority of decisions are still made on the basis of personal opinion and anecdotal evidence. Huge investments in market research are meant to ‘launder’ decision-making processes, giving the appearance that they were based on a more robust kind of evidence, so that responsibility for negative consequences sticks to the data rather than to individuals. In my experience, organizations often make strategic decisions before contracting market research, and it is rare to see market research make a significant impact on an organization’s trajectory.

I do not mean to say that any practitioner is looking to ‘get away’ with anything. I do not mean to imply that data are only used strategically, to manipulate an audience into thinking that an idea is better supported than it would be if justified by anecdote alone (although in many cases it is). Rather, I mean to say that a lack of knowledge about methodology and specialized methodological vocabulary can give practitioners a false sense of confidence in their analytical abilities and results. ‘Dumbing down’ language in order to make analytics easier also makes it easier for practitioners to avoid responsibility for either their research (there are tools for this) or their ‘evidence-based’ decisions (“just do what the data say, and you’ll be fine; if things go wrong, you were just following orders, er, the data”). In light of the importance of words in informing behavior, the fact that practitioners are in the business of making decisions that have real effects should make conceptual rigor more important, not less.

What I appreciate most about Adam’s piece is the effort to take something methodological like ‘validity’ and demonstrate its ethical dimension. I whole-heartedly agree with Adam’s concern about practitioners leaning on pretty graphs and big numbers, and empathize with his point about specialized vocabularies representing an obstacle to meaningful participation by laymen. But language about validity is not jargon. It is rather a tool-kit of methodologically useful distinctions. Here, a ‘dumbing down’ of vocabulary is tantamount to handing a plastic hammer to a non-carpenter and expecting them to build a house. What practitioners need are not different tools, but rather education and community. With Adam, I would like to see an interdisciplinary community develop around concepts like validity. Such a community should aim not only to render those concepts more understandable, but also to provide enrichment that would benefit even the researchers and scholars who so expertly wield them. From what I have briefly written here, I hope to have illustrated in some small way how a multitude of perspectives yields more complexity, not less. Through complexity, however, comes reflection, and with it exactly the kind of reflective practice that the human sciences require.