Our current use of AI in higher education involves automating parts (and at times the whole) of the human decision-making process. Where there is automation there is standardization. Where there are decisions, there are values. As a consequence, we can think of one of the functions of AI as the standardization of values. Depending on what your values are, and the extent to which they are reflected in the algorithms we deploy, this may be a good thing or a bad thing, to a greater or lesser degree.
Augmenting Human Decision-Making
An example of how AI is being used to automate parts of the decision-making process is through nudging. According to Thaler and Sunstein, the concept of nudging is rooted in an ethical perspective that they term ‘libertarian paternalism.’ Wanting to encourage people to behave in ways that are likely to benefit them, but not also wanting to undermine human freedom of choice (which Thaler, Sunstein, and many others view as an unequivocal good), nudging aims to structure environments so as to increase the chances that human beings will freely make the ‘right decisions.’ In higher education, a nudge could be something as simple as an automated alert reminding a student to register for the next semester or begin the next assignment. It could be an approach to instructional design meant to increase a student’s level of engagement in an online course. It could be student-facing analytics meant to promote increased reflection about one’s level of interaction in a discussion board. Nudges don’t have to involve AI (a grading rubric is a great example of a formative assessment practice designed to increase the salience of certain values at the expense of others), but what AI allows us to do is to scale and standardize nudges in a way that was, until recently, unimaginable.
Putting aside the obvious ‘having one’s cake and eating it too’ tension at the heart of the idea of libertarian paternalism, the fact of the matter is that a nudge functions by making decisions easier through the (at least partial) automation of the decision-making process. It makes choices easier by making some factors more salient than others, reducing an otherwise large and complex set of factors to one that is much more manageable. A nudge works by universalizing a set of values, using them as criteria for pre-selecting the factors that enter into the decision-making process.
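The mechanism described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation of any real system: the factor names, value tags, and weights are all invented for the example. The point is only to make the structure visible: a set of values is encoded as weights, those weights are used to score a large set of decision factors, and only the most ‘salient’ few are surfaced to the decision-maker.

```python
# Hypothetical sketch: a nudge as value-driven pre-selection of decision
# factors. All factor names, value tags, and weights are invented.

def nudge(factors, values, k=3):
    """Rank decision factors by how strongly they reflect a chosen
    set of values, and surface only the top k to the decision-maker."""
    salience = {factor: sum(values.get(tag, 0) for tag in tags)
                for factor, tags in factors.items()}
    return sorted(salience, key=salience.get, reverse=True)[:k]

# A (hypothetical) advisor's values: retention and timeliness matter most.
values = {"retention": 2, "timeliness": 2, "cost": 1}

# Factors a student might weigh when deciding whether to register early,
# each tagged with the values it touches.
factors = {
    "registration deadline": ["timeliness"],
    "continuing momentum":   ["retention", "timeliness"],
    "tuition due date":      ["cost", "timeliness"],
    "course availability":   ["retention"],
    "parking availability":  [],
}

print(nudge(factors, values))
```

Note that the student still chooses freely among whatever is shown; it is the scoring function, i.e. the encoded values, that silently decides what gets shown at all.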
I don’t want to say whether this is a good or a bad thing. It is happening, and it certainly brings with it the possibility of promoting a range of social goods. But it is important for us to recognize that values are involved. We need to be aware of, and responsible for, the values that we are choosing to standardize in a given nudge. And we need to constantly revisit those values to ensure that they remain consistent with our views, especially in light of the impact on human behavior that they are designed to have.
Automating Human Decision-Making
An example of where AI is being used to automate the entire decision process is in chat bots. Chat bots make a lot of sense for institutions looking to increase efficiency. During the admissions process, for example, university call centers are bombarded with phone calls from students seeking answers to common questions. Call centers are expensive and so universities are looking for ways to reduce cost. But lower cost has traditionally meant decreased capacity, and if capacity isn’t sufficient to handle demand from students, institutions run the risk of losing the very students they are looking to admit. AI is helping institutions to scale their ability to respond to common student questions by, in essence, personalizing a student’s experience with a knowledge base. A chat bot is an interface. In contrast to automated nudges, which serve to augment human decision-making, chat bots automate the entire process, since they are (1) defining a situation, and (2) formulating a response, (3) without the need for human intervention.
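The two-step structure described above — defining the situation, then formulating a response, with no human in the loop — can be made concrete with a toy example. This is a deliberately naive keyword-matching sketch, not how any production chat bot works; the intents and answers are invented for illustration.

```python
# Hypothetical sketch of a FAQ chat bot: it (1) defines the situation by
# matching a question against known intents, and (2) formulates a response
# from a knowledge base, (3) with no human intervention. The intents and
# answers below are invented examples.

KNOWLEDGE_BASE = {
    "deadline":   "Applications are due January 15.",
    "transcript": "Send transcripts to admissions@university.example.",
    "tuition":    "Tuition rates are posted on the bursar's site.",
}

def classify(question):
    """Step 1: 'define the situation' by naive keyword match."""
    q = question.lower()
    for intent in KNOWLEDGE_BASE:
        if intent in q:
            return intent
    return None

def respond(question):
    """Step 2: formulate a response, or fall back to a human."""
    intent = classify(question)
    if intent is None:
        return "Let me connect you with an advisor."
    return KNOWLEDGE_BASE[intent]

print(respond("When is the application deadline?"))
```

Even in this toy version, the assumptions are visible: every inbound message is presumed to be an information request, and everything that doesn’t match a pre-defined intent is an exception to be routed elsewhere.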
What kinds of assumptions do chat bots like this make about the humans they serve? First, they assume that the only reason a student is reaching out to the university is for information. While this may be the case for some, or even most, it may not be for all. In addition to information, a student may also be in need of reassurance (whether they realize it or not). First-generation students especially may not know what questions to ask in the first place, and may need to be encouraged to think about factors they would not otherwise have considered. There is a huge amount to gain from one-on-one contact with a human being, and these benefits are lost when an interaction is reduced to a single function. Subtlety and lateral thinking are not virtues of AI (at least not today).
This is not to say that chat bots are bad. The increased efficiency they bring to an institution means that an institution can invest in other ways that enhance the student experience. The increased satisfaction from students who no longer have to sit on hold for hours is also highly beneficial, not to mention that some students simply feel more comfortable asking what they think are ‘dumb questions’ when they know they are talking to a robot. But we also need to be aware of the specific values we assume through the use of these technologies, and the opportunities that we are giving up, including a diversity of perspective, inter-personal support, narrative/biographical interjection, personalized nudging based on the experience and intuition of an advisor, and the ability to co-create meaning.
Is AI in higher education a good thing? It certainly carries with it an array of goods, but the good it brings is by no means unequivocal. Because it automates at least parts of the decision-making process, it involves the standardization of values in a way, and at a scale, that until now has not been possible.
AI is here to stay. It is a bell that we can’t unring. Knowing that AI functions through the automation of at least some parts of human decision-making, then, it is incumbent upon us to think carefully about our values, and to take responsibility for the ways (both expected and unanticipated) that the standardization of values through information technology will affect how we think about ourselves, others, and the world we cohabit.
In higher education, and in general, an increasing amount of attention is being paid to questions about the ethical use of data. People are working to produce principles, guidelines and ethical frameworks. This is a good thing.
Despite being well-intentioned, however, most of these projects are doomed to failure. The reason is that, amidst talk about arriving at an ethics, or developing an ethical framework, the terms ‘ethics’ and ‘framework’ are rarely well-defined from the outset. If you don’t have a clear understanding of your goal, you can’t define a strategy to achieve it, and you won’t know whether you have reached it even if you do.
As a foundation to future blog posts that I will write on the matter of ethics in AI, what I’d like to do is propose a couple of key definitions, and invite comment where my assumptions might not make sense.
What do we mean by ‘ethics’?
Ethics is hard to do. It is one of those five inter-related sub-disciplines of philosophy defined by Aristotle that also includes metaphysics, epistemology, aesthetics, and logic. To do ethics involves establishing a set of first principles, and developing a system for determining right action as a consequence of those principles. For example, if we presume the existence of a creator god that has given us some kind of access to true knowledge, then we can apply that knowledge to our day-to-day life as a guide to evaluating right or wrong courses of action. Or, instead of appealing to the transcendent, we might begin with certain assumptions about human nature and develop ethical guidelines meant to cultivate those essential and unique attributes. Or, if we decide that the limits of our knowledge preclude us from knowing anything about the divine, or even ourselves, except for the limits of our knowledge, there are ethical consequences of that as well. There are many approaches and variations here, but the key thing to understand is that ethics is hard. It requires us to be thoughtful about arriving at a set of first principles, being transparent, and systematically deriving ethical judgements as consequences of our metaphysical, epistemological, and logical commitments.
What ethics is NOT, is a set of unsystematically articulated opinions about situations that make us feel uneasy. Unfortunately, when we read about ethics in data science, in education, and in general, this is typically what we end up with. Indeed, the field of education is particularly bad about talking about ethics (and about philosophy in general) in this way.
What do we mean by a ‘framework’?
The interesting thing about the language of frameworks is that it has the potential to liberate us from much of the heavy burden placed on us by ethical thinking. The reason for this is that the way this language is used in relation to ethics — as in an ‘ethical framework’ — already presupposes a specific philosophical perspective: Pragmatism.
What is Pragmatism? I’m going to do it a major disservice here, but it is a perspective that rejects our ability to know ‘truth’ in any transcendent or universal way, and so affirms that the truth in any given situation is a belief that ‘works.’ In other words, the right course of action is the one with the best practical set of consequences. (There’s a strong and compelling similarity here between Pragmatism and Pyrrhonian Skepticism, but I won’t go into that here…except to note that, in philosophy, everything new is actually really old).
The reason that ethical frameworks are pragmatic is that they do not seek to define sets of universal first principles, but instead set out to establish methods or approaches for arriving at the best possible result at a given time, and in a given place.
The idea of an ethical framework is really powerful when discussing the human consequences of technological innovation. Laws and culture are constantly changing, and they differ radically around the globe. Were we to set out to define an ethics of educational data use, it could be a wonderful and fruitful academic exercise. A strong undergraduate thesis, or perhaps even a doctoral dissertation. But it would never be globally adopted, if for no other reason than because it would rest on first principles, the very definition of which is that they cannot themselves be justified. There will always be differences in opinion.
But an ethical framework CAN claim universality in a way that an ethics cannot, because it defines an approach to weighing a variety of factors that may be different from place to place, and that may change over time, but in a way that nevertheless allows people to make ethical judgments that work here and now. Where differences of opinion create issues for ethics, they are a valuable source of information for frameworks, which aim to balance and negotiate differences in order to arrive at the best possible outcome.
Laying my cards on the table (as if they weren’t on the table already), I am incredibly fond of the framework approach. Ethical frameworks are good things, and we should definitely strive to create an ethical framework for AI in education. We have already seen several attempts, and these have played an important role in getting the conversation started, but I see the language of ‘ethical framework’ being used with a lack of precision. The result has been some helpful, but rather ungrounded and unsystematic, sets of claims pertaining to how data should be used in certain situations. These are not frameworks. Nor are they ethics. They are merely opinions. These efforts have been great for promoting public dialogue, but we need something more if we are going to make a difference.
Only by being absolutely clear from the outset about what an ethical framework is, and what it is meant to do, can we begin to make a significant and coordinated impact on law, public policy, data standards, and industry practices.
As I reached for the gasoline nozzle, I realized at the very last minute that what I thought was regular gasoline was actually ‘plus,’ a grade that I did not want and that I would have paid a premium for. The reason for my near mistake? The way my options were ordered. I expected the grades to be ordered by octane as they almost always are. But in this case, regular 87 was sandwiched between two more premium grades.
The strategy that was employed at the pump at this Shell station in Virginia is an example of ‘nudging.’ It is an example of leveraging preexisting expectations and habits to increase the chances of a particular behavior. There is nothing dishonest about the practice. Information is complete and transparent, and personal freedom to choose is not affected. It is simply that the environment is structured in such a way as to promote one decision instead of others.
Ethically, I like the position of Thaler and Sunstein when they talk about ‘libertarian paternalism.’ In their view, nudging can be a way to reconcile a strong belief in personal freedom with an equally strong belief that certain decisions are better than others. But not all nudges are created equal. Just as it is possible to promote decisions that are better for individuals, so too is it possible to increase the likelihood of choices that serve other interests, and that even serve to subvert the fullest expression of personal liberty, as in the gasoline example above.
One way to think of marketing is as the use of the principles of behavioral economics to change consumer behavior. Marketers are in the business of nudging. Because nudging has a direct impact on human behavior, it is also a fundamentally ethical enterprise. Marketing carries with it a huge burden of responsibility.
What ethical positions do you take in your marketing efforts? What would marketing look like if we were all libertarian paternalists?
7 November 2014 Microsoft and Other Firms Pledge to Protect Student Data Fourteen companies, including Microsoft, Houghton Mifflin Harcourt, Amplify, and Edmodo, have pledged to adopt nationwide policies that will restrict and protect data collected from K-12 students. The group is pledging not to (1) sell student information, (2) target students with advertisements, or (3) compile personal student profiles unless authorized by parents or schools. The pledge, which is not legally binding, was developed by the Future of Privacy Forum.
6 November 2014 Lecturer Calls for Clarity in Use of Learning Analytics Sharon Slade (Open University) talks about her university’s effort to develop an ethical policy on the use of student data, one that attempts to carefully address conflicting student concerns: (1) concerns about institutional ‘snooping’ on the one hand, and (2) an interest in personalized modes of communication on the other. The Ethical Use of Student Data for Learning Analytics Policy produced at the Open University is the first of its kind, and the result of an exemplary effort that should be widely repeated.
6 November 2014 Echo360 Appoints Dr. Bradley S. Fordham as Global Chief Technology Officer Echo360, an active learning and lecture capture platform, has appointed Dr. Fordham as Global Chief Technology Officer. With a wealth of industry and scholarly experience, Dr. Fordham will add significant expertise, legitimacy, and exposure to the platform. This is the latest in a series of recent investments in developing the platform’s real-time analytics capabilities, which until recently have been rather limited and unsophisticated.
6 November 2014 Harvard Researchers Used Secret Cameras to Study Attendance. Was That Unethical? In the spring of 2013, cameras in 10 Harvard classrooms recorded one image per minute, and the photographs were scanned to determine which seats were filled. The study rankled computer-science professor Harry R. Lewis, who viewed the exercise as an obvious intrusion into student privacy. George Siemens notes that attendance data is the ‘lowest of the low,’ and that the level of surveillance taking place in online courses far exceeds what was collected as part of the attendance-tracking exercise. Since Lewis raised his concerns, Harvard has committed itself to reaching out to every faculty member and student whose image may have been captured to inform them of the research — a not-so-easy effort, as images were captured anonymously and have subsequently been destroyed as part of the research methodology.
5 November 2014 Disadvantaged Students in Georgia District Get Home Internet Service Fayette County Schools in Georgia have partnered with Kajeet to give Title 1 students a Kajeet SmartSpot so that they can access online textbooks, apps, email, documents, sites, and their teachers while outside of school. The mobile hotspot works with the Kajeet cloud service and allows districts and schools to restrict access according to site- and time-based rules. The service also monitors student activity and provides teachers and administrators with learning analytics reports.
1 November 2014 Track Your Child’s Development Easily In May 2011, Jayashankar Balaraman — a serial entrepreneur with a background in advertising and marketing — moved into the education space with the launch of KNEWCLEUS, which in just three years has grown to become India’s largest parent-school engagement platform. The platform’s success is a result of the ease it brings to parent-teacher communication, and of the analytics engine that monitors student performance, identifies areas in need of remediation, and recommends relevant content.
Does Exercise (and Learning) Count If Not Counted? by Joshua Kim Kim asks the age-old question, “If I exercise and my fitness app does not record my steps, did my exercise ever happen?” He wonders about how the ability to track certain forms of activity, including learning activity, ends up altering behavior and shifting values on the basis of ‘trackability.’ The danger here, cautions Kim, is that we may come to conflate good teaching with digital practices that are more amenable to datafication.
10 Hottest Technologies in Higher Education by Vala Afshar Afshar summarizes the hottest technologies discussed by CIOs at the 2014 Annual EDUCAUSE conference last month. Included in the list are wifi, social media, badges, analytics, wearables, drones, 3D printing, digital courseware, Small Private Online Courses (SPOCs), and virtual reality. Although analytics is included as one of many trends, it is of course also a major driver for each of these technologies as well.
Schools keep track of students’ online behavior, but do parents even know? by Taylor Armerding A truly exceptional review of literature and debates surrounding the collection and use of data from K-12 students. What kinds of data are a school’s ‘business’ to collect? How does an institution ensure informed consent, when privacy policies are often so complex as to be inaccessible by many parents? What is a school’s responsibility if it discovers something with implications for student success? Are schools ‘grooming kids for a lifetime of surveillance?’
Why I’m Voting ‘Yes’ on the Smart Schools Bond Act, Proposition 3 by Leonie Haimson New York Proposition 3 (also known as the Smart Schools Bond Act) would allow the sale of bonds to generate $2 billion statewide for capital funding. In spite of her resistance to using bond revenue to purchase electronic devices in schools (one of the key ways in which the bond revenues are meant to be spent), Haimson notes the urgent need that many schools have for an injection of funding, and notes that the funds may be spent in a wide variety of ways. She raises a concern about the proliferation of technologies driven by companies interested in educational data mining, but notes that, thanks to the Children’s Online Privacy Protection Act, all parents have the right to opt out of any online data-mining instructional or testing program that collects personal data, whether their children participate in this program at school or at home.
In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real-time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self-regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment-for-accountability to assessment-for-learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.
In recent years, learning analytics (LA) has attracted a great deal of attention in technology-enhanced learning (TEL) research as practitioners, institutions, and researchers are increasingly seeing the potential that LA has to shape the future TEL landscape. Generally, LA deals with the development of methods that harness educational data sets to support the learning process. This paper provides a foundation for future research in LA. It provides a systematic overview of this emerging field and its key concepts through a reference model for LA based on four dimensions, namely data, environments, and context (what?); stakeholders (who?); objectives (why?); and methods (how?). It further identifies various challenges and research opportunities in the area of LA in relation to each dimension.
Simon Fraser University (Burnaby, BC, Canada) Tenure Track Position In Educational Technology And Learning Design – The Faculty of Education, Simon Fraser University (http://www.sfu.ca/education.html) seeks applications for a tenure-track position in Educational Technology and Learning Design at the Assistant Professor rank beginning September 1, 2015, or earlier. The successful candidate will join an existing complement of faculty engaged in Educational Technology and Learning Design, and will contribute to teaching and graduate student supervision in our vibrant Masters program at our Surrey campus and PhD program at our Burnaby campus. DEADLINE FOR APPLICATION: December 1, 2014
NEW! University at Buffalo (Buffalo, NY, USA) Associate for Institutional Research/Research Scientist: Online Learning Analytics – The University at Buffalo (UB), State University of New York seeks a scholar in online learning analytics to join its newly formed Center for Educational Innovation. Reporting to the Senior Vice-Provost for Academic Affairs, the Center for Educational Innovation has a mission to support and guide the campus on issues related to teaching, learning and assessment, and at the same time serves as a nexus for campus-wide efforts to further elevate the scholarship of and research support for pedagogical advancement and improved learning. The Research Scientist in online learning analytics will work in the area of Online Learning within the department and join a campus-wide network of faculty and researchers working on “big data”. DEADLINE FOR APPLICATION: December 6, 2014
NEW! University of Colorado Boulder (Boulder, Colorado, USA) Multiple Tenure Track Positions in Computer Science – The openings are targeted at the level of Assistant Professor, although exceptional candidates at higher ranks may be considered. Research areas of particular interest include secure and reliable software systems, numerical optimization and high-performance scientific computing, and network science and machine learning. DEADLINE FOR APPLICATION: Posted Until Filled
University of Technology, Sydney (Sydney, AUS) Postdoctoral Research Fellow: Academic Writing Analytics – Postdoctoral research position specialising in the use of language technologies to provide learning analytics on the quality of student writing, across diverse levels, genres and domains DEADLINE FOR APPLICATION: Posted Until Filled
University of Michigan (Ann Arbor, MI) Senior Digital Media Specialist – The University of Michigan is seeking a qualified Senior Digital Media Specialist to create digital content in support of online and residential educational experiences for the Office of Digital Education & Innovation (DEI). DEADLINE FOR APPLICATION: Posted Until Filled
NYU Steinhardt School of Culture, Education, and Human Development’s Center for Research on Higher Education Outcomes (USA) 12-month postdoctoral position – available for a qualified and creative individual with interests in postsecondary assessment, learning analytics, data management, and institutional research. The Postdoctoral Fellow will be responsible for promoting the use of institutional data sources and data systems for the purpose of developing institutional assessment tools that can inform decision making and contribute to institutional improvement across New York University (NYU). DEADLINE FOR APPLICATION: Open Until Filled
argues that the value of data lies, not only in opportunities for increased personalization within MOOCs themselves, but in their potential to inform decisions about more traditional learning environments as well. For the Chronicle of Higher Education, Jeffrey R. Young sat down for a conversation with L. Todd Rose to discuss the opportunities that data afford for personalizing content delivery, but also the challenges of data sharing, particularly in the case of MOOCs, where sharing of educational data is largely precluded as a consequence of existing business models. Lastly, in an article written for ACM Queue, Daries et al. add that the sharing of MOOC data is limited not only by business considerations, but also out of respect for student privacy. They observe that anonymization processes often function to radically undermine the possibilities for future analysis. The authors argue that, even if researchers can identify individuals and their actions, privacy can still be upheld if those researchers are bound to an ethical and legal framework.
Another set of ethical issues that were raised this week involve the intersection of analytics and the humanities. Joshua Kim sparked a conversation about the place of analytics in the liberal arts. In the discussion following Kim’s post, greatest attention was paid to issues of definition: What is ‘assessment’? What are the ‘Liberal Arts’? (Mike Sharkey, for example, suggests that the liberal arts simply imply “small classes and a high-touch environment,” and argues that analytics offers very little value in such contexts. Timothy Harfield argues that the liberal arts provide a critical perspective on analytics, and are crucial to ensuring that educational institutions are learning-driven rather than data-driven). Lastly, in an article for Educause Review Online, James E. Willis discusses the failure of ethical discussions in learning analytics, and offers an ethical framework that highlights some of the complexities involved in the debate. He categorizes ethical questions in terms of three distinct philosophical perspectives, what he calls “Moral Utopianism,” “Moral Ambiguity,” and “Moral Nihilism.” The framework itself is at once overly pedantic and lacking in the clarity and sophistication that one would expect from a piece with tacit claims to a foundation in the history of philosophy, but nevertheless represents an interesting attempt to push the debate outside of the more comfortable legal questions that most often frame conversations about data and privacy.
Open data has tremendous potential for science, but, in human subjects research, there is a tension between privacy and releasing high-quality open data. Federal law governing student privacy and the release of student records suggests that anonymizing student data protects student privacy. Guided by this standard, we de-identified and released a data set from 16 MOOCs (massive open online courses) from MITx and HarvardX on the edX platform. In this article, we show that these and other de-identification procedures necessitate changes to data sets that threaten replication and extension of baseline analyses. To balance student privacy and the benefits of open data, we suggest focusing on protecting privacy without anonymizing data by instead expanding policies that compel researchers to uphold the privacy of the subjects in open data sets. If we want to have high-quality social science research and also protect the privacy of human subjects, we must eventually have trust in researchers. Otherwise, we’ll always have the strict tradeoff between anonymity and science illustrated here.
Pedagogy with learning analytics is shown to facilitate the teaching-learning process through analyzing students’ behaviours. In this paper, we explored the possibility of using the learning analytics tools Coh-Metrix and Lightside for analyzing and improving the writing skills of students in a technological common core curriculum course. In this study, we i) investigated the linguistic characteristics of students’ essays, and ii) applied a machine learning algorithm to give instant sketch feedback to students. Results illustrated the necessity of improving students’ writing skills in their university learning through e-learning technologies, so that students can effectively circulate their ideas to the public in the future.
In November 2012, in response to threats of expulsion from John Jay Science & Engineering Academy on account of her refusal to wear a mandatory RFID badge, Andrea Hernandez filed a lawsuit against San Antonio’s Northside Independent School District. If she continues to refuse even to wear an RFID-disabled badge–an accommodation sanctioned by a federal district judge who ruled against her–Hernandez will be placed in Taft High School beginning in September 2013, the public school to which she would normally be assigned.
In refusing to wear even an RFID-disabled badge, Hernandez’s case seems to have lost its ‘bite’ (it’s difficult to justify her appeal to religious freedom once tracking mechanisms are disabled). In spite of the fact that her concerns were ultimately voiced in terms of an interest in preserving religious freedom, however, the case nonetheless draws attention to the potential costs of privacy.
As elite institutions increasingly adopt comprehensive analytics programs that require students to give up their privacy in exchange for student success, are they also strongly contributing to a culture in which privacy is no longer valued? A robust analytics program requires every student to opt-in (i.e. students are not given the option of opting out). If analytics programs are seen as effective mechanisms to increase the chances of student success, and such programs are effective only to the extent that they gather data that is representative of their entire student body, and, as such, consenting to being tracked is made a condition of enrollment at the most elite universities (universities with the resources necessary to build and sustain such programs), then students must ask what it is that they value more: an education at a world-class institution (and all of the job prospects and other opportunity that such an education affords), or the ability to proverbially click ‘do not track.’ My suspicion is that, if explicitly given the choice, the vast majority of students are willing to give up the latter for the former, a symptom of our growing acceptance of, and complacence toward, issues of electronic privacy, but perhaps also an indication that a willingness to sacrifice privacy for success increasingly forms a key part of the ‘hidden curriculum.’
(Interestingly, in addition to gathering data from Learning Management and operational systems, universities also regularly collect data from student id card swipes. This data can easily be mobilized as part of a kind of ‘card-swipe surveillance’ program, as in fact has been done by Matthew S. Pittinsky (co-founder of Blackboard) at Arizona State University. According to Pittinsky, tracking card-swipe behavior can allow an institution to effectively map a student’s friend group, determine their level of social integration, and predict their chances of attrition.)
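To see how little machinery this kind of inference requires, consider the following sketch. This is not the method used at Arizona State, which has not been published in detail here; it is a minimal illustration, with invented swipe records, of the general co-occurrence idea: two students who repeatedly swipe into the same place within a short time window are likely to know each other.

```python
# Hypothetical sketch of 'card-swipe surveillance': infer social ties by
# counting how often two student IDs swipe into the same location within
# a short time window. The swipe records below are invented.

from collections import Counter
from itertools import combinations

def co_swipes(swipes, window=120):
    """Count pairs of students whose swipes at the same location
    fall within `window` seconds of each other."""
    pairs = Counter()
    by_location = {}
    for student, location, t in swipes:
        by_location.setdefault(location, []).append((t, student))
    for events in by_location.values():
        events.sort()  # order swipes at each location by time
        for (t1, s1), (t2, s2) in combinations(events, 2):
            if s1 != s2 and abs(t2 - t1) <= window:
                pairs[tuple(sorted((s1, s2)))] += 1
    return pairs

# Invented records: (student, location, seconds since midnight)
swipes = [
    ("alice", "dining-hall", 0), ("bob", "dining-hall", 60),
    ("alice", "library", 4000), ("bob", "library", 4030),
    ("carol", "library", 9999),
]
print(co_swipes(swipes).most_common(1))
```

Even this toy version surfaces the pair who eat and study together, which is precisely why the routine operational exhaust of a campus is such a potent, and such an ethically fraught, source of social data.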