AI, Higher Education, and Standardizing Values in Human Decision-Making

Our current use of AI in higher education involves automating parts (and at times the whole) of the human decision-making process. Where there is automation there is standardization. Where there are decisions, there are values. As a consequence, we can think of one of the functions of AI as the standardization of values. Depending on what your values are, and the extent to which they are reflected in the algorithms being deployed, this may be more or less good, or more or less bad.

Augmenting Human Decision-Making

An example of how AI is being used to automate parts of the decision-making process is through nudging. According to Thaler and Sunstein, the concept of nudging is rooted in an ethical perspective that they term ‘libertarian paternalism.’ Wanting to encourage people to behave in ways that are likely to benefit them, but not also wanting to undermine human freedom of choice (which Thaler, Sunstein, and many others view as an unequivocal good), nudging aims to structure environments so as to increase the chances that human beings will freely make the ‘right decisions.’ In higher education, a nudge could be something as simple as an automated alert reminding a student to register for the next semester or begin the next assignment. It could be an approach to instructional design meant to increase a student’s level of engagement in an online course. It could be student-facing analytics meant to promote increased reflection about one’s level of interaction in a discussion board. Nudges don’t have to involve AI (a grading rubric is a great example of a formative assessment practice designed to increase the salience of certain values at the expense of others), but what AI allows us to do is to scale and standardize nudges in a way that was, until recently, unimaginable.

Putting aside the obvious ‘having one’s cake and eating it too’ tension at the heart of libertarian paternalism, the fact of the matter is that a nudge functions by making decisions easier through the (at least partial) automation of the decision-making process. It does this by making some factors more salient than others, reducing an otherwise large and complex set of considerations to one that is far more manageable. In other words, a nudge works by universalizing a set of values, using them as criteria for pre-selecting the factors that enter the decision-making process.
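To make the mechanism concrete, here is a minimal sketch of what an automated registration nudge might look like. It is purely illustrative: the field names, the fourteen-day threshold, and the message wording are my own assumptions, not features of any particular platform. The point is that the values driving the nudge are hard-coded as selection criteria, and running the loop over every enrolled student is what standardizes them at scale.

```python
# Minimal, hypothetical sketch of an automated registration nudge.
# Field names, the 14-day threshold, and the message wording are
# illustrative assumptions, not features of any real platform.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    name: str
    registered_next_term: bool
    days_until_deadline: int

def should_nudge(student: StudentRecord) -> bool:
    """The values baked into the nudge, encoded as explicit criteria.

    Only two factors are made salient (registration status and deadline
    proximity); everything else about the student's situation is ignored.
    """
    return (not student.registered_next_term
            and student.days_until_deadline <= 14)

def nudge_message(student: StudentRecord) -> str:
    # The wording itself embodies a value judgment about what being 'on track' means.
    return (f"Hi {student.name}, registration for next semester closes in "
            f"{student.days_until_deadline} days. Registering now will keep you on track.")

students = [
    StudentRecord("Ada", registered_next_term=False, days_until_deadline=10),
    StudentRecord("Ben", registered_next_term=True, days_until_deadline=10),
]

for s in students:
    if should_nudge(s):
        print(nudge_message(s))  # in practice: queue an email, SMS, or LMS alert
```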

I don’t want to say whether this is a good or a bad thing. It is happening, and it certainly brings with it the possibility of promoting a range of social goods. But it is important for us to recognize that values are involved. We need to be aware of, and responsible for, the values that we are choosing to standardize in a given nudge. And we need to revisit those values constantly to ensure that they remain consistent with our views, particularly in light of the impact on human behavior that they are designed to have.

Automating Human Decision-Making

An example of where AI is being used to automate the entire decision process is in chat bots. Chat bots make a lot of sense for institutions looking to increase efficiency. During the admissions process, for example, university call centers are bombarded with phone calls from students seeking answers to common questions. Call centers are expensive, and so universities are looking for ways to reduce cost. But lower cost has traditionally meant decreased capacity, and if capacity isn’t sufficient to handle demand from students, institutions run the risk of losing the very students they are looking to admit. AI is helping institutions to scale their ability to respond to common student questions by, in essence, personalizing a student’s experience with a knowledge base. A chat bot is an interface. In contrast to automated nudges, which serve to augment human decision-making, chat bots automate the entire process, since they (1) define a situation and (2) formulate a response, (3) without the need for human intervention.
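As a purely hypothetical sketch of that two-step automation, consider the toy FAQ bot below. The knowledge-base entries, the crude keyword matching, and the fallback wording are all invented for illustration; a real system would use far more sophisticated intent classification, but the division of labor is the same: the software defines the situation and formulates the response on its own.

```python
# Hypothetical, minimal FAQ chat bot. The knowledge-base entries, the naive
# keyword matching, and the fallback wording are invented for illustration.

from typing import Optional

KNOWLEDGE_BASE = {
    "application deadline": "Applications for fall admission close on March 1.",
    "transcript": "Official transcripts can be sent electronically by your previous institution.",
    "deposit": "The enrollment deposit is due within 30 days of your offer.",
}

def define_situation(message: str) -> Optional[str]:
    """Step 1: classify the student's question, here by naive keyword matching."""
    text = message.lower()
    for topic in KNOWLEDGE_BASE:
        if topic in text:
            return topic
    return None

def formulate_response(topic: Optional[str]) -> str:
    """Step 2: map the classified situation to a canned answer, with no human in the loop."""
    if topic is None:
        return "I'm not sure I can help with that. Would you like to speak with an advisor?"
    return KNOWLEDGE_BASE[topic]

print(formulate_response(define_situation("When is the application deadline?")))
print(formulate_response(define_situation("I'm feeling overwhelmed and don't know where to start.")))
```

Notice that the second query, which is not really a request for information at all, can only fall through to the fallback. That limitation is exactly what is at stake in the next paragraph.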

What kinds of assumptions do chat bots like this make about the humans they serve? First, they assume that the only reason a student is reaching out to the university is for information. While this may be the case for some, or even most, it may not be for all. In addition to information, a student may also be in need of reassurance (whether they realize it or not). First-generation students, especially, may not know what questions to ask in the first place, and may need to be encouraged to think about factors they would not otherwise have considered. There is a huge amount to gain from one-on-one contact with a human being, and these benefits are lost when an interaction is reduced to a single function. Subtlety and lateral thinking are not virtues of AI (at least not today).

This is not to say that chat bots are bad. The increased efficiency they bring means that an institution can invest in other areas that enhance the student experience. The increased satisfaction from students who no longer have to sit on hold for hours is also highly beneficial, not to mention that some students simply feel more comfortable asking what they think are ‘dumb questions’ when they know they are talking to a robot. But we also need to be aware of the specific values we assume through the use of these technologies, and the opportunities that we are giving up, including a diversity of perspective, inter-personal support, narrative/biographical interjection, personalized nudging based on the experience and intuition of an advisor, and the ability to co-create meaning.

Is AI in higher education a good thing? It certainly carries with it an array of goods, but the good it brings is not unequivocal. Because it automates at least parts of the decision-making process, it involves the standardization of values in a way, and at a scale, that until now has not been possible.

AI is here to stay. It is a bell that we can’t unring. Knowing that AI functions through the automation of at least some parts of human decision-making, then, it is incumbent upon us to think carefully about our values, and to take responsibility for the ways (both expected and unanticipated) that the standardization of values through information technology will affect how we think about ourselves, others, and the world we cohabit.

Ethical AI in Higher Education: Are we doing it wrong?

In higher education, and in general, an increasing amount of attention is being paid to questions about the ethical use of data. People are working to produce principles, guidelines and ethical frameworks. This is a good thing.

Despite being well-intentioned, however, most of these projects are doomed to failure. The reason is that, amidst talk about arriving at an ethics, or developing an ethical framework, the terms ‘ethics’ and ‘framework’ are rarely well-defined from the outset. If you don’t have a clear understanding of your goal, you can’t define a strategy to achieve it, and you won’t recognize it if you ever reach it.

As a foundation to future blog posts that I will write on the matter of ethics in AI, what I’d like to do is propose a couple of key definitions, and invite comment where my assumptions might not make sense.

What do we mean by ‘ethics’?

Ethics is hard to do. It is one of the inter-related sub-disciplines of philosophy, traditionally counted alongside metaphysics, epistemology, aesthetics, and logic. To do ethics involves establishing a set of first principles, and developing a system for determining right action as a consequence of those principles. For example, if we presume the existence of a creator god that has given us some kind of access to true knowledge, then we can apply that knowledge to our day-to-day life as a guide to evaluating right or wrong courses of action. Or, instead of appealing to the transcendent, we might begin with certain assumptions about human nature and develop ethical guidelines meant to cultivate those essential and unique attributes. Or, if we decide that the limits of our knowledge preclude us from knowing anything about the divine, or even ourselves, except for the limits of our knowledge, there are ethical consequences of that as well. There are many approaches and variations here, but the key thing to understand is that ethics is hard. It requires us to arrive thoughtfully at a set of first principles, to be transparent about them, and to derive ethical judgements systematically as consequences of our metaphysical, epistemological, and logical commitments.

What ethics is NOT is a set of unsystematically articulated opinions about situations that make us feel uneasy. Unfortunately, when we read about ethics in data science, in education, and in general, this is typically what we end up with. Indeed, the field of education is particularly prone to talking about ethics (and about philosophy in general) in this way.

What do we mean by a ‘framework’?

The interesting thing about the language of frameworks is that it has the potential to liberate us from much of the heavy burden placed on us by ethical thinking. The reason for this is that the way this language is used in relation to ethics — as in an ‘ethical framework’ — already presupposes a specific philosophical perspective: Pragmatism.

What is Pragmatism? I’m going to do it a major disservice here, but it is a perspective that rejects our ability to know ‘truth’ in any transcendent or universal way, and so affirms that the truth in any given situation is a belief that ‘works.’ In other words, the right course of action is the one with the best set of practical consequences. (There’s a strong and compelling similarity here between Pragmatism and Pyrrhonian Skepticism, but I won’t go into that here…except to note that, in philosophy, everything new is actually really old).

The reason that ethical frameworks are pragmatic is that they do not seek to define sets of universal first principles, but instead set out to establish methods or approaches for arriving at the best possible result at a given time, and in a given place.

The idea of an ethical framework is really powerful when discussing the human consequences of technological innovation. Laws and culture are constantly changing, and they differ radically around the globe. Were we to set out to define an ethics of educational data use, it could be a wonderful and fruitful academic exercise. A strong undergraduate thesis, or perhaps even a doctoral dissertation. But it would never be globally adopted, if for no other reason than that it would rest on first principles, which by definition cannot themselves be justified. There will always be differences of opinion.

But an ethical framework CAN claim universality in a way that an ethics cannot, because it defines an approach to weighing a variety of factors that may be different from place to place, and that may change over time, but in a way that nevertheless allows people to make ethical judgments that work here and now. Where differences of opinion create issues for ethics, they are a valuable source of information for frameworks, which aim to balance and negotiate differences in order to arrive at the best possible outcome.
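As a toy illustration of that difference (and nothing more), a framework can be pictured as a reusable weighing procedure whose inputs vary by place and time. Everything in the sketch below, including the factor names, weights, and options, is invented; the only point is that the procedure stays the same while the locally supplied values, and therefore the judgments, differ.

```python
# Illustrative sketch only: a 'framework' as a reusable weighing procedure whose
# inputs (options, factors, weights) are supplied locally and may change over time.
# All names and numbers below are invented.

def weigh_options(options, local_weights):
    """Score each option against locally weighted considerations and
    return the one that 'works' best here and now."""
    def score(factors):
        return sum(local_weights.get(name, 0.0) * value
                   for name, value in factors.items())
    return max(options, key=lambda name: score(options[name]))

# The same two options, described in terms of the same considerations...
options = {
    "share_engagement_data_with_advisors": {"privacy": -0.6, "retention": 0.8, "transparency": 0.4},
    "keep_data_student_facing_only":       {"privacy": 0.9,  "retention": 0.3, "transparency": 0.7},
}

# ...weighed differently by two institutions with different local values.
campus_a = {"privacy": 0.9, "retention": 0.4, "transparency": 0.5}
campus_b = {"privacy": 0.2, "retention": 1.0, "transparency": 0.1}

print(weigh_options(options, campus_a))  # -> keep_data_student_facing_only
print(weigh_options(options, campus_b))  # -> share_engagement_data_with_advisors
```

The procedure is ‘universal’ in the sense that any campus can apply it; the weights, and therefore the judgments, are local and revisable.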


Laying my cards on the table (as if they weren’t on the table already), I am incredibly fond of the framework approach. Ethical frameworks are good things, and we should definitely strive to create an ethical framework for AI in education. We have already seen several attempts, and these have played an important role in getting the conversation started, but I see the language of ‘ethical framework’ being used with a lack of precision. The result has been some helpful, but rather ungrounded and unsystematic, sets of claims about how data should be used in certain situations. These are not frameworks. Nor are they ethics. They are merely opinions. These efforts have been great for promoting public dialogue, but we need something more if we are going to make a difference.

Only by being absolutely clear from the outset about what an ethical framework is, and what it is meant to do, can we begin to make a significant and coordinated impact on law, public policy, data standards, and industry practices.