Product Roadmaps: Just One Damn Thing After Another?

Dostoyevsky once wrote (paraphrasing) that every man needs both a place to be, and a place to go. Others very cleverly talk about the difference between roots and routes, arguing that in order for humans to reach their full potential, they need to know both who they are, and have a vision for what they wish to become.

The same applies to products.

Unfortunately, product managers and humans alike rarely think deeply about either being or becoming. They think of life (their own, or that of their products) as simply the cumulative effect of adding one thing after another. True, this may be life in the strictest and barest sense, but would anyone call this ‘flourishing?’ I think not.

Let’s talk about product roadmaps.

A product roadmap is NOT a list of features on a timeline. A roadmap is not a prioritized list of feature requests. A product roadmap, insofar as it IS a roadmap, MUST begin with a clear idea of what the product is, and what it aspires to become. It must have roots and routes.

Of course, the way that a product thinks of its roots and routes is always subject to change in the same way as a human being may change their self-conception and aspirations. What is VITAL, however, is that they HAVE a self-conception and a vision for the future.

You can’t steer a parked car.

It’s incredibly easy for product managers to fall into the same trap as humans in general, thinking of their roadmaps in terms of ‘what’:

  • WHAT am I going to do?
  • WHAT am I going to do next?
  • WHAT are my product gaps?
  • WHAT are my customers requesting?

But a product roadmap should NOT first and foremost be concerned with ‘what’ questions. It needs instead to be laser focused on the ‘why.’

The ‘why’ is fundamentally about vision:

  • What is it?
  • What should it become?

Only once these questions are asked and answered can a product manager start thinking about creating and prioritizing specific features and enhancements. A clear vision gives a product a ‘why,’ and makes it possible to frame a roadmap as the ‘how.’

In the absence of this vision work (which is hard to do), however, there is no roadmap. There is no beginning. There is no end. Without a clear vision framed in terms of what a product is, and what it aspires to become, a ‘product roadmap’ is simply one damn thing after another.


AI, Higher Education, and Standardizing Values in Human Decision-Making

Our current use of AI in higher education involves automating parts (and at times the whole) of the human decision-making process. Where there is automation there is standardization. Where there are decisions, there are values. As a consequence, we can think of one of the functions of AI as the standardization of values. Depending on what your values are, and the extent to which they are reflected by algorithms as they are deployed, this may be more or less a good or bad thing.

Augmenting Human Decision-Making

An example of how AI is being used to automate parts of the decision-making process is through nudging. According to Thaler and Sunstein, the concept of nudging is rooted in an ethical perspective that they term ‘libertarian paternalism.’ Wanting to encourage people to behave in ways that are likely to benefit them, but not also wanting to undermine human freedom of choice (which Thaler, Sunstein, and many others view as an unequivocal good), nudging aims to structure environments so as to increase the chances that human beings will freely make the ‘right decisions.’ In higher education, a nudge could be something as simple as an automated alert reminding a student to register for the next semester or begin the next assignment. It could be an approach to instructional design meant to increase a student’s level of engagement in an online course. It could be student-facing analytics meant to promote increased reflection about one’s level of interaction in a discussion board. Nudges don’t have to involve AI (a grading rubric is a great example of a formative assessment practice designed to increase the salience of certain values at the expense of others), but what AI allows us to do is to scale and standardize nudges in a way that was, until recently, unimaginable.

Putting aside the obvious ‘having one’s cake and eating it too’ tension at the heart of the idea of libertarian paternalism, the fact of the matter is that a nudge functions by making decisions easier through the (at least partial) automation of the decision-making process. It serves to make choices easier by making some factors more salient than others, reducing an otherwise large and complex set of factors to a set that is much more manageable. A nudge works by universalizing a set of values, using them as criteria for pre-selecting the factors that are allowed to enter into the decision-making process.
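
To make this concrete, here is a minimal sketch, in Python, of what an automated registration reminder might look like. Everything in it (the field names, the deadline, the fourteen-day window, the wording of the message) is a hypothetical assumption for the sake of illustration, not a description of any actual system.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class StudentRecord:
    # A student record contains many factors; the nudge will only look at two of them.
    name: str
    email: str
    registered_next_term: bool
    gpa: float
    credits_completed: int


# Hypothetical deadline and reminder window (illustrative assumptions, not real policy).
REGISTRATION_DEADLINE = date(2025, 8, 1)
REMINDER_WINDOW_DAYS = 14


def registration_nudge(student: StudentRecord, today: date) -> str | None:
    """Return a reminder message if the student should be nudged, otherwise None.

    The rule deliberately ignores most of the record (GPA, credits completed, etc.)
    and makes just two factors salient: registration status and time remaining.
    That act of selection is where the values live.
    """
    days_left = (REGISTRATION_DEADLINE - today).days
    if not student.registered_next_term and 0 <= days_left <= REMINDER_WINDOW_DAYS:
        return (
            f"Hi {student.name}, registration for next semester closes in {days_left} "
            f"day(s). You can register any time before {REGISTRATION_DEADLINE:%B %d}."
        )
    return None
```

Trivial as it is, the sketch makes the point: a value judgement (that continuous enrollment is what matters most right now) gets encoded as a rule, and that rule is then applied uniformly, and at scale, to every student in the system.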

I don’t want to say whether this is a good or a bad thing. It is happening, and it certainly brings with it the possibility of promoting a range of social goods. But it is important for us to recognize that values are involved. We need to be aware of, and responsible for, the values that we are choosing to standardize in a given nudge. And we need to constantly revisit those values to ensure that they are consistent with our views and in light of the impact on human behavior that they are designed to have.

Automating Human Decision-Making

An example of where AI is being used to automate the entire decision process is in chat bots. Chat bots make a lot of sense for institutions looking to increase efficiency. During the admissions process, for example, university call centers are bombarded with phone calls from students seeking answers to common questions. Call centers are expensive, and so universities are looking for ways to reduce cost. But lower cost has traditionally meant decreased capacity, and if capacity isn’t sufficient to handle demand from students, institutions run the risk of losing the very students they are looking to admit. AI is helping institutions to scale their ability to respond to common student questions by, in essence, personalizing a student’s experience with a knowledge base. A chat bot is an interface to that knowledge base. In contrast to automated nudges, which serve to augment human decision-making, chat bots automate the entire process, since they (1) define a situation and (2) formulate a response (3) without the need for human intervention.
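
As a minimal sketch of that kind of interface, consider the following. The knowledge-base entries and the crude keyword matching are my own illustrative assumptions, not a description of any particular product; the point is simply that the program both defines the situation (by mapping a question onto a known intent) and formulates the response, with no human in the loop.

```python
# Hypothetical admissions knowledge base: question keywords -> canned answer (illustrative only).
KNOWLEDGE_BASE = {
    frozenset({"application", "deadline"}):
        "Application deadlines for each term are posted on the admissions calendar.",
    frozenset({"transcript", "send"}):
        "Official transcripts can be sent electronically by your previous institution.",
    frozenset({"financial", "aid", "fafsa"}):
        "To be considered for financial aid, complete the FAFSA and include our school code.",
}

FALLBACK = "I'm not sure about that one. Would you like to be connected with the admissions office?"


def answer(question: str) -> str:
    """Define the situation (match the question to a known intent),
    then formulate a response from the knowledge base."""
    words = set(question.lower().replace("?", "").split())
    best_response, best_overlap = FALLBACK, 0
    for keywords, response in KNOWLEDGE_BASE.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_response, best_overlap = response, overlap
    return best_response


print(answer("When is the application deadline?"))
print(answer("How do I send my transcript?"))
```

Even this toy version encodes the assumption that the rest of this post calls into question: that the student’s reason for reaching out can be fully captured as a request for information.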

What kinds of assumptions do chat bots like this make about the humans they serve? First, they assume that the only reason a student is reaching out to the university is for information. While this may be the case for some, or even most, it may not be for all. In addition to information, a student may also be in need of reassurance (whether they realize it or not). First-generation students especially may not know what questions to ask in the first place, and may need to be encouraged to think about factors they would not otherwise have considered. There is a huge amount to gain from one-on-one contact with a human being, and these benefits are lost when an interaction is reduced to a single function. Subtlety and lateral thinking are not virtues of AI (at least not today).

This is not to say that chat bots are bad. The increased efficiency they bring to an institution means that an institution can invest in other ways that enhance the student experience. The increased satisfaction from students who no longer have to sit on hold for hours is also highly beneficial, not to mention that some students simply feel more comfortable asking what they think are ‘dumb questions’ when they know they are talking to a robot. But we also need to be aware of the specific values we assume through the use of these technologies, and the opportunities that we are giving up, including a diversity of perspective, inter-personal support, narrative/biographical interjection, personalized nudging based on the experience and intuition of an advisor, and the ability to co-create meaning.

Is AI in higher education a good thing? It certainly carries with it an array of goods, but the good it brings is not unequivocal. Because it automates at least parts of the decision-making process, it involves the standardization of values in a way, and at a scale, that until now has not been possible.

AI is here to stay. It is a bell that we can’t unring. Knowing that AI functions through the automation of at least some parts of human decision-making, then, it is incumbent upon us to think carefully about our values, and to take responsibility for the ways (both expected and unanticipated) that the standardization of values through information technology will affect how we think about ourselves, others, and the world we cohabit.

Predictive analytics are not social science: A common misunderstanding with major consequences for higher education

This is the second in my series on common misunderstandings about predictive analytics that hinder their adoption in higher education. Last week I talked about the language of predictive analytics. This week, I want to comment on another common misconception: that predictive analytics (and educational data mining more generally) is a social science.

Ethical AI in Higher Education: Are we doing it wrong?

In higher education, and in general, an increasing amount of attention is being paid to questions about the ethical use of data. People are working to produce principles, guidelines and ethical frameworks. This is a good thing.

Despite being well-intentioned, however, most of these projects are doomed to failure. The reason is that, amidst talk about arriving at an ethics, or developing an ethical framework, the terms ‘ethics’ and ‘framework’ are rarely well-defined from the outset. If you don’t have a clear understanding of your goal, you can’t define a strategy for achieving it, and you won’t know whether you have reached it even if you do.

As a foundation to future blog posts that I will write on the matter of ethics in AI, what I’d like to do is propose a couple of key definitions, and invite comment where my assumptions might not make sense.

What do we mean by ‘ethics’?

Ethics is hard to do. It is one of those five inter-related sub-disciplines of philosophy defined by Aristotle that also include metaphysics, epistemology, aesthetics, and logic. To do ethics involves establishing a set of first principles, and developing a system for determining right action as a consequence of those principles. For example, if we presume the existence of a creator god that has given us some kind of access to true knowledge, then we can apply that knowledge to our day-to-day life as a guide to evaluating right or wrong courses of action. Or, instead of appealing to the transcendent, we might begin with certain assumptions about human nature and develop ethical guidelines meant to cultivate those essential and unique attributes. Or, if we decide that the limits of our knowledge preclude us from knowing anything about the divine, or even ourselves, except for the limits of our knowledge, there are ethical consequences of that as well. There are many approaches and variations here, but the key thing to understand is that ethics is hard. It requires us to be thoughtful in arriving at a set of first principles, to be transparent about them, and to systematically derive ethical judgements as consequences of our metaphysical, epistemological, and logical commitments.

What ethics is NOT, is a set of unsystematically articulated opinions about situations that make us feel uneasy. Unfortunately, when we read about ethics in data science, in education, and in general, this is typically what we end up with. Indeed, the field of education is particularly bad about talking about ethics (and about philosophy in general) in this way.

What do we mean by a ‘framework’?

The interesting thing about the language of frameworks is that it has the potential to liberate us from much of the heavy burden placed on us by ethical thinking. The reason for this is that the way this language is used in relation to ethics — as in an ‘ethical framework’ — already presupposes a specific philosophical perspective: Pragmatism.

What is Pragmatism? I’m going to do it a major disservice here, but it is a perspective that rejects our ability to know ‘truth’ in any transcendent or universal way, and so affirms that the truth in any given situation is a belief that ‘works.’ In other words, the right course of action is the one with the best practical set of consequences. (There’s a strong and compelling similarity here between Pragmatism and Pyrrhonian Skepticism, but I won’t go into that here…except to note that, in philosophy, everything new is actually really old.)

The reason that ethical frameworks are pragmatic is that they do not seek to define sets of universal first principles, but instead set out to establish methods or approaches for arriving at the best possible result at a given time, and in a given place.

The idea of an ethical framework is really powerful when discussing the human consequences of technological innovation. Laws and culture are constantly changing, and they differ radically around the globe. Were we to set out to define an ethics of educational data use, it could be a wonderful and fruitful academic exercise. A strong undergraduate thesis, or perhaps even a doctoral dissertation. But it would never be globally adopted, if for no other reason than because it would rest on first principles, the very definition of which is that they cannot themselves be justified. There will always be differences in opinion.

But an ethical framework CAN claim universality in a way that an ethics cannot, because it defines an approach to weighing a variety of factors that may be different from place to place, and that may change over time, but in a way that nevertheless allows people to make ethical judgments that work here and now. Where differences of opinion create issues for ethics, they are a valuable source of information for frameworks, which aim to balance and negotiate differences in order to arrive at the best possible outcome.


Laying my cards on the table (as if they weren’t on the table already), I am incredibly fond of the framework approach. Ethical frameworks are good things, and we should definitely strive to create an ethical framework for AI in education. We have already seen several attempts, and these have played an important role in getting the conversation started, but I see the language of ‘ethical framework’ being used with a lack of precision. The result has been some helpful, but rather ungrounded and unsystematic, sets of claims pertaining to how data should be used in certain situations. These are not frameworks. Nor are they ethics. They are merely opinions. These efforts have been great for promoting public dialogue, but we need something more if we are going to make a difference.

Only by being absolutely clear from the outset about what an ethical framework is, and what it is meant to do, can we begin to make a significant and coordinated impact on law, public policy, data standards, and industry practices.

The difference between IT and Ed Tech

In a recent interview with John Jantsch for the Duct Tape Marketing podcast, Danny Iny argued that the difference between information and education essentially comes down to responsibility. Information is simply about presentation. Here are some things you might want to know. Whether and the extent to which you come to know them is entirely up to you.

In contrast, education implies that the one presenting information also takes on a degree of responsibility for ensuring that it is learned. Education is a relationship in which teachers and learners agree to share in the responsibility for the success of the learning experience.

This distinction, argues Iny, accounts for why books are so cheap and university is so expensive. Books merely present information, while universities take on a non-trivial amount of responsibility for what is learned, and how well.

(It is a shame that many teachers don’t appreciate this distinction, and their role as educators. I will admit that, when I was teaching, I didn’t fully grasp the extent of my responsibility for the success of my students. I wish I could go back and reteach those courses as an educator instead of as a mere informer.)

If we accept Iny’s distinction between information and education, what are the implications for what we today call educational technologies, or ‘Ed Tech’? As we look to the future of technology designed to meet specific needs of teachers and learners, is educational technology something that we wish to aspire to, or avoid?

Accepting Iny’s definition, I would contend that what we call educational technologies today are not really educational technologies at all. The reason is that neither they nor the vendors that maintain them take specific responsibility for the success or failure of the individual students they touch. Although vendors are quick to take credit for increased rates of student success, taking credit is not the same as taking responsibility. In higher education, the contract is between the student and the institution. If the student does not succeed, responsibility is shared between the two. No technology or ed tech vendor wants to be held accountable for the success of an individual student. In the absence of such a willingness or desire to accept a significant degree of responsibility for the success of particular individuals, what we have are not educational technologies, but rather information technologies designed for use in educational contexts. Like books…but more expensive.

With the advent of AI, however, we are beginning to see an increasing shift as technologies appear to take more and more responsibility for the learning process itself. Adaptive tutoring. Automated nudging. These approaches are designed to do more than present information. Instead, they are designed to promote learning itself. Should we consider these educational technologies? I think so. And yet they are not treated as such, because vendors in these areas are still unwilling (accountability is tricky) or unable (because of resistance from government and institutions) to accept responsibility for individual student outcomes. There is no culpability. That’s what teachers are for. In the absence of a willingness to carry the burden of responsibility for a student’s success, even these sophisticated approaches are still treated as information technologies, when they should actually be considered far more seriously.

As we look to the future, it does seem possible that the information technology platforms deployed in the context of education will, indeed, increasingly become and be considered full educational technologies. But this can only happen if vendors are willing to accept the kind of responsibility that comes with such a designation, and teachers are willing to share responsibility with technologies capable of automating them out of a job. This possible future state of educational technology may or may not be inevitable. It also may or may not be desirable.



Is Facebook making us more adventurous?

When was the last time you heard someone say, “Get off of Facebook (or Instagram, or Twitter, or …) and DO something!”?

I have a favorite passage from Jean-Paul Sartre’s Nausea:

This is what I thought: for the most banal event to become an adventure, you must (and this is enough) begin to recount it. This is what fools people: a man is always a teller of tales, he sees everything that happens to him through them; and he tries to live his own life as if he were telling a story. But you have to choose: live or tell.

We experience life through the stories we tell, and through the stories of others. It has always been the case. Even before the internet.

So does that mean that social media, which demands the persistent sharing of ‘adventures,’ actually makes our lives richer? Does the compulsion to share more moments as if they were significant events render our lives more event-ful?

I eat the same breakfast every day and never remember it. I take a single picture of my meal, and oatmeal becomes an event.

Research suggests that kids today are doing less. And that is probably right. But as they have more opportunities to narrate their lives, perhaps they are more adventurous.

Ethics and Predictive Analytics in Higher Education

In March 2017, Manuela Ekowo and Iris Palmer co-authored a report for New America that offered five guiding practices for the ethical use of predictive analytics in higher education.  This kind of work is really important.  It acknowledges that, to the extent that analytics in higher education is meant to have an impact on human behavior, it is a fundamentally ethical enterprise.

Work like the recent New America report is not merely about educational data science.  It is an important facet of educational data science itself.

Are we doing ethics well?

But ethics is hard.  Ethics is not about generating a list of commandments.  It is not about cataloging common opinion.  It is about carefully establishing a set of principles on the basis of which it becomes possible to create a coherent system of knowledge and make consistent judgements in specific situations.

Unfortunately, most work on the ethics of analytics in higher education lacks this kind of rigor.  Instead, ethical frameworks are the result of a process of pooling opinions in such a way as to strike a balance between the needs of a large number of stakeholders including students, institutions, the economy, the law, and public opinion.  To call this approach ethics is to confuse the good with the expedient.

Where should we begin?

An ethical system worthy of the name needs to begin with a strong conception of the Good.  Whether stated or implied, the most common paradigm is essentially utilitarian, concerned with maximizing benefit for the greatest number of people.  The problem with this approach, however, is that it can only ever concern perceived benefit.  People are famously bad at knowing what is good for them.

A benefit of this utilitarian approach, of course, is that it allows us to avoid huge epistemological and metaphysical minefields.  In the absence of true knowledge of the good, we can lean on the wisdom of crowds.  By pooling information about perceived utility, so the theory goes, we can approximate something like the good, or at least achieve enough consensus to mitigate conflict as much as possible.

But what if we were more audacious?  What if our starting point was not the pragmatic desire to reduce conflict, but rather an interest in fostering the fullest expression of our potential as humans?  As it specifically pertains to the domain of educational data analytics, what if we abandoned that instrumental view of student success as degree completion?  What if we began with the question of what it means to be human, and wrestled with the ways in which the role of ‘student’ is compatible and incompatible with that humanity?

Humane data ethics in action

Let’s consider one example of how taking human nature seriously affects how we think about analytics technologies.  As the Italian humanist Pier Paolo Vergerio observed, all education is auto-didactic.  When we think about teaching and learning, the teacher has zero ability to confer knowledge.  It is always the learner’s task to acquire it.  True, it is possible to train humans just as we can train all manner of other creatures (operant and classical forms of conditioning are incredibly powerful), but this is not education.  Education is a uniquely human capability whereby we acquire knowledge (with the aim of living life in accord with the Good).  Teachers do not educate.  Teachers do not ‘teach.’  Rather, it is the goal of the teacher to establish the context in which students might become actively engaged as learners.

What does this mean for Education?  Viewed from this perspective, it is incumbent on us as educators to create contexts that bring students to an awareness of themselves as learners in the fullest sense of the word.  It is crucial that we develop technologies that highlight the student’s role as autodidact.  Our technologies need to help bring students to self-knowledge at the same time as they create robust contexts for knowledge acquisition (in addition to providing opportunities for exploration, discovery, experimentation, imagination and other humane attributes).

It is in large part this humanistic perspective that has informed my excitement about student-facing dashboards.  As folks like John Fritz have talked about, one of the great things about putting data in the hands of students is that it furthers institutional goals like graduation and retention as a function of promoting personal responsibility and self-regulated learning.  In other words, by using analytics first and foremost with an interest in helping students to understand and embrace themselves as learners in the fullest sense of the term, we cultivate virtues that translate into degree completion, but also career success and life satisfaction.

In my opinion, analytics (predictive or otherwise) are most powerful when employed with a view to maximizing self-knowledge and the fullest expression of human capability, rather than as a way to constrain human behavior to achieve institutional goals.  I am confident that such a virtuous and humanistic approach to educational data analytics will also lead to institutional gains (as indeed we have seen at places like Georgia State University), but I worry that where values and technologies are not aligned, both human nature and institutional outcomes are bound to suffer.