Does Student Success start with Diversity in Higher Ed Administration?

Twitter has finally begun to add tools to mitigate harassment.

Harassment on Twitter has been a huge problem in recent years, and the amount of poor citizenship on the platform has only increased post-election. Why has it taken so long to respond? On the one hand, it is a very hard technical problem: how can users benefit from radical openness at the same time as they are protected from personal harm? In certain respects, this is a problem with free speech in general, but the problem is even greater for Twitter as it looks to grow its user base and prepare for sale. On the other hand, Twitter insiders have said that dealing with harassment has simply not been a priority for the mostly white male leadership team. Diversity is famously bad at Twitter. A lack of diversity within an organization leads to a lack of empathy for the concerns of ‘others.’ It leads to gaps in an organization’s field of vision, since we as people naturally pursue goals that are important to us, and what is important to us is naturally a product of our own experience. Values create culture. And culture determines what is included and excluded (both people and perspectives).

Product as Praxis: How Learning Analytics tools are ACTUALLY Differentiated

I’ve been thinking a lot recently about product as praxis. Without putting too much conceptual weight behind the term ‘praxis,’ what I mean is merely that educational technologies are not just developed in order to change behavior. Ed tech embodies values and beliefs (often latent) about what humans are and should be, about what teaching and learning are, and about the role that institutions should play in guiding the development of subjectivity. Because it is value-laden, educational technology also has the power to shape, not just our practices, but also how we think.

When thought of as praxis, product development carries with it a huge burden. Acknowledging that technology has the power (and the intention) to shape thought and action, the task of creating an academic technology becomes a fundamentally ethical exercise.

Vendors are not merely responsible for meeting the demands of the market. ‘The market’ is famously bad at understanding what is best for it. Instead, vendors are responsible for meeting the needs of educators. It is important for vendors to think carefully about their own pedagogical assumptions. It is important for them to be explicit about how those assumptions shape product development. The product team at Blackboard (of which I am a part), for example, is committed to values like transparency and interoperability. We are committed to an approach to learning analytics that seeks to amplify the power of existing human capabilities rather than exclude them from the process (the value of augmentation over automation). These values are not shared by everyone in educational technology. They are audacious in that they fly in the face of some taken-for-granted assumptions about what constitutes a good business model in higher education.

Business models should not determine pedagogy. It is the task of vendors in the educational technology space to begin with strong commitments to a set of well-defined values about education, and to ensure that business models are consistent with those fundamental beliefs. It will always be a challenge to develop sustainable business models that do not conflict with core values. But that’s not a bad thing.

When it comes to the market for data in education, let’s face it: analytics are a commodity. Every analytics vendor is applying the same basic set of proven techniques to the same kinds of data. Given this, it is silly (and even dangerous) to talk about proprietary algorithms. Data science is not a market differentiator.

What DOES differentiate products are the ways in which information is exposed. It is easy to forget that analytics is a rhetorical activity. The visual display of information is an important interpretive layer. The decisions that product designers make about WHAT information is displayed and HOW prompt different ranges of interpretation and nudge people to take different types of action. Dashboards are the front line between information and practice. It is here where values become most apparent, and it is here where products are truly differentiated.

Student Success and Liberal Democracy

The political environment in the United States has increasingly highlighted huge problems in our education system. These problems, I would argue, are not unrelated to how we as a country conceptualize student success. From the perspective of the student, success is about finding a high-paying job that provides a strong sense of personal fulfillment. From the perspective of colleges and universities, student success is about graduation and retention. From the perspective of government, it’s about making sure that we have a trained workforce capable of meeting labor market demands. For all the recent and growing attention paid to student success, however, what is woefully absent seems to be any talk about the importance of education to producing a liberal democratic citizenry. In the age of ‘big data,’ of course, part of this absence may be the fact that the success of a liberal education is difficult to measure. From this perspective, the success of a country’s education system cannot be measured directly. Instead, it is measured by the extent to which its citizens demonstrate things like active engagement, an interest in and ability to adjudicate truth claims, and a desire to promote social and societal goods. Now, more than any time in recent history, we are witnessing the failure of American education. In the US, the topic of education has been largely absent from the platforms of individual presidential candidates. This is, perhaps, a testament to the fact that education is bad for politics. Where it has been discussed, we hear Trump talk about cutting funding to the Department of Education, if not eliminating it entirely. We hear Clinton talk about early childhood education, free/debt-free college, and more computer science training in K-12, but in each of these cases, the tenor tends to be about work and jobs rather than promoting societal goods more generally.

But I don’t want to make this post about politics. Our political climate is merely a reflection of the values that inform our conceptions of student success. These values — work, personal fulfillment, etc — inform policy decisions and university programs, but they also inform the development of educational technologies. The values that make up our nation’s conception of ‘student success’ produce the market demand that educational technology companies then try to meet. It is for this reason that we see a recent surge (some would say glut) of student retention products on the market, and relatively few that are meant to support liberal democratic values. It’s easy to forget that our technologies are not value-neutral. It’s easy to forget that, especially when it comes to communication technologies, the ‘medium is the message.’

What can educational technology companies do to meet market demands (something necessary to survival) while at the same time being attuned to the larger needs of society? I would suggest three things:

  1. Struggle. Keeping ethical considerations and the needs of society top of mind is hard.  For educational technologies to acknowledge the extent to which they both shape and are shaped by cultural movements produces a heavy burden of responsibility.  The easy thing to do is to abdicate responsibility, citing the fact that ‘we are just a technology company.’  But technologies always promote particular sets of values.  Accepting the need to meet market demand at the same time as the need to support liberal democratic education can be hard. These values WILL and DO come into conflict. But that’s not a reason to abandon either one or the other.  It means constantly struggling in the knowledge that educational technologies have a real impact on the lives of people.  Educational technology development is an inherently ethical enterprise.  Ethics are hard.
  2. Augment human judgment.  Educational technologies should not create opportunities for human beings to avoid taking responsibility for their decisions.  With more data, more analytics, and more artificial intelligence, it is tempting to lean on technology to make decisions for us.  But liberal democracy is not about eliminating human responsibility, and it is not about making critical thinking unnecessary.  To the contrary, personal responsibility and critical thinking are hallmarks of a liberal democratic citizen — and are essential to what it means to be human.  As tempting as it may be to create technologies that make decisions for us because they can, I feel like it is vitally important that we design technologies that increase our ability to participate in those activities that are the most human.
  3. Focus on community and critical thinking.  Creating technologies that foster engagement with complex ideas is hard.  Very much in line with the ‘augmented’ approach to educational technology development, I look to people like Alyssa Wise and Bodong Chen, who are looking at ways that a combination of embedded analytics and thoughtful teaching practices can produce reflective moments for students, and foster critical thinking in the context of community.  And it is for this reason that I am excited about tools like X-Ray Learning Analytics, a product for Moodle that makes use of social network analysis and natural language processing in a way that empowers teachers to promote critical thinking and community engagement.

The Trouble with ‘Student Success’

I’m increasingly troubled by ‘student success,’ and am even somewhat inclined to stop using the term entirely.

The trouble with ‘student success,’ it seems to me, is that it actually has very little to do with people. It’s not about humans, but rather about a set of conditions required for humans to successfully fill a particular role: that of a student.

So, what is a student?

A student (within the context of higher education, and as the term is employed within student success literature) is someone who is admitted to an institution of higher education, is at least minimally retained by that institution (many colleges and universities require at least 60 non-transferred credit hours in order to grant a degree), and graduates with some kind of credential (at least an Associate’s degree, but preferably a Bachelor’s). The student is the product of higher education. It is the task of colleges and universities to convert non-students into students (through the admissions process), only to convert them into a better kind of non-student (through the graduation process). The whole thing is not entirely different from that religious process whereby an individual must first be converted from an a-sinner (someone who doesn’t grasp what sin is) into a sinner (they need to learn what sin is, and that they have committed it) in order to be transformed into a non-sinner through a process of redemption.

The language of ‘student success’ assumes that ‘being a student’ is an unmitigated good. But being a student is not a good in itself. The good of being a student is a direct consequence of the fact that being a student is requisite for attaining other higher goods. Having been a successful student is necessary in order to become a good worker. From the perspective of the individual, having been a successful student translates into being able to get a better job and earn a higher salary. From the perspective of a nation, a well-educated populace translates into an ability to meet labor demands in the service of economic growth. If this is the end of being a student, then, shouldn’t we talk about ‘Worker Success’? Replacing ‘student-‘ with ‘worker-‘ would retain every feature of ‘student success,’ but with the advantage of acknowledging that a post-secondary degree is not an end in itself, but is rather in the service of something greater. It would be more honest. It might also have the effect of increasing graduation rates by extending the horizon of students beyond the shoreline of their college experience and out toward the open sea of what will become something between a job and a vocation.

But I find the idea of ‘worker success’ still troubling in the same way as ‘student success.’ As with ‘student success,’ ‘worker success’ speaks to a role that humans occupy. It refers to something that a person does, rather than what a person is. As with being a successful student, being a successful worker implies having satisfactorily met the demands of a particular role, a set of criteria that come from outside of you, and that it is incumbent upon you to achieve. A successful student is someone who is admitted, retained, and graduates, and so it is unsurprising that these are the measures against which colleges and universities are evaluated. A successful institution is one that creates successful students. Pressure is increasingly being put on institutions to ensure that students find success in career, but this is far more difficult to track (great minds are working on it). A successful worker is one who earns a high-paying job (high salary serving as a proxy for the amount of value that a particular individual contributes to the overall economy).

What if we were to shift the way that we think about student success, away from focusing on conditional and instrumental goods, and instead toward goods that are unconditional and intrinsic? What if we viewed student success, not as an end in itself, but rather as something that may or may not help human beings realize their full potential as human beings? Would it mean eliminating higher education as it is today? I don’t think so. I’m not a utopian. I readily understand the historical, social, cultural, and material conditions that make school and work important. To the contrary, shifting our perspective toward what makes us human may in fact serve to underline the importance of an undergraduate education, and even of that piece of paper they call a degree. To the extent that an undergraduate education exposes minds to a world of knowledge while also providing an opportunity to earn a good wage, it frees people from the conditions of bare life (i.e. living paycheck to paycheck) so that they can commit their energies to higher-order pursuits. Considered in this way, the importance of eliminating achievement gaps on the basis of race, ethnicity, gender, income, etc. is also increased. For groups who have been traditionally underserved by higher education, what is at stake in NOT having a post-secondary credential is not just a wage, but also perhaps their potential as human beings. At the same time as it makes higher education more important, considering the student journey from the perspective of human success also opens up legitimate alternative pathways to formal education through which it is also possible to flourish. Higher education might be a way, but it might not be the way. And that should be okay.

I don’t know what this shift in perspective would mean for evaluating institutions. As long as colleges and universities are aimed at producing student-graduates, their reason for being is to solve a tactical problem — “how do we admit and graduate more students” — and they can be evaluated empirically and quantitatively by the extent to which they have solved the problem. The minute that colleges and universities start to reconceive their mission, not in terms of students, but in terms of humans, their success becomes far more difficult to measure, because the success of students-as-humans is difficult to measure. By thinking of education as a way of serving humans as opposed to serving students, our task becomes far more important, and also far more challenging.

But since when were the Good and the Easy the same thing?

Number Games: Data Literacy When You Need It

My wife’s coach once told her that “experience is what you get the moment after you needed it.” Too often the same can be said for data literacy. Colleges and universities looking to wisely invest in analytics to support the success of their students and to optimize operational efficiency are confronted with the daunting task of having to evaluate a growing number of options before selecting the products and approaches that are right for them. What products and services are most likely to see the greatest returns on investment? What approaches have other institutions taken that have already seen high rates of success? On the one hand, institutions that are just now getting started with analytics have the great advantage of being able to look to many who have gone before and who are beginning to see promising results. On the other hand, the analytics space is still immature and there is little long-term high-quality evidence to support the effectiveness of many products and interventions.

Institutions and vendors who have invested heavily in analytics have a vested interest in representing promising results (and they ARE promising!) in the best light possible.  This makes sense.  This is a good thing.  The marketing tactics that both institutions of higher education and educational technology vendors employ as they represent their results are typically honest and in good faith as they earnestly work in support of student success.  But the representation of information is always a rhetorical act.  Consequently, the ways in which results are presented too often obscure the actual impact of technologies and interventions.  The way that results are promoted can make it difficult for less mature institutions to adjudicate the quality of claims and make well-informed decisions about the products, services, and practices that will be best for them.

Perhaps the most common tactic that is used to make results appear more impressive than they are involves changing the scale used on the y-axis of bar and line charts.  A relatively small difference can famously be made to appear dramatic if the range is small enough.  But there are other common tactics that are not as easily spotted that are nonetheless just as important when it comes to evaluating the impact of interventions.  Here are three:

There is a difference between a percentage increase and an increase in percentage points.  For example, an increase in retention from 50% to 55% may be represented as either an increase of 5 points or 10%.  It is also important to note that the same number of points will translate into a different percentage increase depending on the starting rate.  For example, a 5-point increase from a retention rate of 25% represents an increase of 20%.  A 5-point increase from a starting retention rate of 75%, on the other hand, is only an increase of 7%.  Marketing literature will tend to choose metrics based on what sounds most impressive, even if it obscures the real impact.
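The arithmetic above can be sketched in a few lines. This is a minimal illustration using the hypothetical retention rates from the paragraph, not data from any actual institution:

```python
def point_increase(old_rate, new_rate):
    """Absolute change, in percentage points."""
    return new_rate - old_rate

def percent_increase(old_rate, new_rate):
    """Relative change, as a percent of the starting rate."""
    return (new_rate - old_rate) / old_rate * 100

# The same move from 50% to 55% retention, expressed two ways:
print(point_increase(50, 55))       # 5 points
print(percent_increase(50, 55))     # 10.0 percent

# The same 5-point gain at different starting rates:
print(percent_increase(25, 30))             # 20.0 percent
print(round(percent_increase(75, 80), 1))   # 6.7 percent
```

When evaluating a claim, it is worth converting between the two forms yourself: the number chosen for the headline is usually the larger one.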

A single data point does not equal a trend.  Context and history are important.  When a vendor or institution claims that an intervention saw a significant increase in retention/graduation in only a year, it is possible that such an increase was due to chance, an existing trend, or else was the result of other initiatives or shifts in student demographics.  For example, one college recently reported a 10% increase in its retention rate after only one year of using a student retention product.  Looking back at historical retention rates, however, one finds that the year prior to tool adoption marked a significant and uncharacteristic drop in retention, which means that any increase could just as easily have been due to chance or other factors.  In the same case, close inspection finds that the retention rate following tool adoption was still low from an historical perspective, and part of an emerging downward trend rather than the reverse.
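The danger of comparing against a single (unusually bad) year can be made concrete with a small sketch. The numbers below are invented for illustration and do not describe the unnamed college in the paragraph above:

```python
# Invented retention history for a hypothetical college (percent retained).
# The final year shows an uncharacteristic drop, just before tool adoption.
history = [68.0, 67.5, 68.2, 61.0]
after_adoption = 67.1

# Measured against the dip year alone, the gain looks dramatic:
gain_vs_dip = (after_adoption - history[-1]) / history[-1] * 100
print(round(gain_vs_dip, 1))      # 10.0 percent

# Measured against the multi-year baseline, retention is still below normal:
baseline = sum(history[:-1]) / len(history[:-1])
print(round(baseline, 1))         # 67.9
print(after_adoption < baseline)  # True
```

The same "10% increase" claim can therefore be true and still describe an institution whose retention has not yet recovered to its historical norm.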

It’s not the tool. It’s the intervention. One will often hear vendors take credit for significant increases in retention/graduation rates when there are actually other far more significant causal factors. One school, for example, is praised for using a particular analytics system to double its graduation rates. What tends not to be mentioned, however, is the fact that the same school also radically reduced its student-to-advisor ratio, centralized its administration, and engaged in additional significant programmatic changes that contributed to the school’s success over and above the impact that the analytics system might have made by itself. The effective use of an analytics solution can definitely play a major role in facilitating efforts to increase retention and graduation rates. In fact, all things being equal, it is reasonable to expect a 1 to 3 point increase in student retention as a result of using early alerts powered by predictive analytics. Significant gains above this, however, are only possible as a result of significant cultural change, strategic policy decisions, and well-designed interventions. It can be tempting for a vendor especially to at least implicitly take credit for more than is due, but doing so is misleading and has the effect of obscuring the tireless efforts of institutions and people who are working to support their students. More than this, overemphasizing products over institutional change can impede progress. It can lead institutions to falsely believe that a product will do all the work, and encourage them to naively embark on analytics projects and initiatives without fully understanding the changes in culture, policy, and practice required to make them fully successful.

How to plan a conference that doesn’t suck

In a recent article, Kristen Eshleman and Josh Kim explore five reasons why educational technology conferences suck (my word, not theirs) and, consequently, five ways to make conferences better.  In their view, organizers too easily forget that a conference is by/for/about people.  Conferences are often not planned with a view to enabling practice and overcoming silos.  They are frequently over-hyped and create too little value.

I agree.

The second annual Southeast Educational Data Symposium (SEEDS16) has been guided by what I am going to call the Golden Rule of Conference Planning: “Don’t plan a conference that you wouldn’t want to attend.”  To my mind, you should only plan a conference or event if it is actually going to provide significant value to participants.  A conference takes a lot of effort to plan and, all things being equal, it would be much easier for everyone involved if it didn’t exist.  The bar here is pretty high.  Another way of articulating the golden rule of conference planning, perhaps, is to say “Don’t plan a conference unless it is going to be better than having no conference at all.”  Not planning or going to a conference is pretty great, in my opinion.  Any conference I organize has to be better than that.  As I have worked with an incredible organizing committee to put together what I think will be a valuable and successful event (fingers crossed!), several guiding principles have emerged that are direct consequences of the golden rule:

  1. Start with Goals – too often, conferences are organized with the goal of organizing a conference.  The problem with deciding to organize a conference is that you are likely to end up with one.  If you set out to organize a conference, it will be modeled on what you think a conference looks like.  Since most conferences suck, yours will too.  Instead of setting out to create something that looks like a conference, begin with a specific set of goals.  Once you have those goals in mind, decide whether a conference is the right way to achieve them in the first place.  If so, then design the conference strategically in order to achieve those goals.  What you end up with may look pretty ‘unconferency.’  It might not.
  2. Support the network – Bars don’t sell alcohol.  They sell the promise of human connection.  Too often, conference organizers think that the value of their event comes from content.  They worry about whether a particular keynote is going to draw a crowd, and if particular events are going to entice enough people.  But there is very little in the way of conference content that I cannot access in other ways.  Conferences don’t sell knowledge.  They sell the promise of human connection.  What draws people to a conference is the promise of entering into a community of like-interested people, of forming relationships, of developing opportunities for collaboration, and of being excited to pursue new projects.
  3. Create Value – Regardless of group size, the strongest relationships are formed as a result of a shared vision, and of movement toward a common goal.  Picture two people with eyes fixed upon a common point on the horizon.  It doesn’t take much knowledge of geometry to realize that the closer two people get to that common point, the closer they will get to each other.  If the goal of a conference is to encourage human connection, the best way to do that is to create opportunities for people to work together to solve shared problems.  It’s one thing to have your brain tickled.  It’s another thing entirely to walk away from a conference with a tangible solution (ideally an artifact).  This emphasis on practice can be seen in the afternoon workshops at SEEDS16 (inspired by the format of the Learning Analytics Summer Institute organized each year by the Society for Learning Analytics Research).  Our workshop on practical learning analytics will give participants the opportunity to acquire new skills as they collaborate to answer real questions using real student data.  Our workshop on ethics will lead participants to develop codes of practice for learning analytics that will guide their own efforts, and hopefully make an impact at their home institutions.
  4. More isn’t better – One of the most common pieces of feedback from SEEDS15 was that participants were energized at the end of the day.  They still had energy, and they wanted more of a good thing.  Wanting to be responsive to this feedback, our original plans for SEEDS16 included extending the event over two days.  But looking at a draft schedule, I couldn’t help but think that the two-day conference was no longer something that I wanted to attend.  How many times have you been to an event and felt energized at the end of day one, only to feel ‘over it’ at the end of day two?  More of a good thing isn’t always better.  By retaining the one-day format for SEEDS16, our goal is to leave on a high note, and to leave participants with the energy they need to carry ideas and practices back to their home institutions.
  5. Relatedly, bigger isn’t better.  Group size matters. One of the primary goals of the Southeast Educational Data Symposium is to foster a strong sense of community around the effective use of educational data in the southeast region of the US.  With this aim in mind, SEEDS15 had a hard cap of 50 participants.  The result was magical.  The conference drew administrators, faculty, researchers, and graduate students from 25 institutions.  Name tags did not include information about title or rank, and so an environment was fostered in which all entered as equal partners in a shared conversation.  But the event had a waiting list, and our hard cap meant that many who wanted to attend the conference could not.  In order to meet demand this year we increased the cap to 100.  Hopefully an increase in scale won’t spell a decrease in ‘magic.’  Scale is tricky.
  6. Lastly, more complex isn’t better.  Another piece of feedback that we received out of last year’s event was that not all presentations were of interest or relevant to everyone, and that the conference would benefit from having multiple ‘tracks.’  Wanting to be responsive, early plans for the conference included plans to have a leadership track and a practitioner track.  But we soon changed our minds.  Regardless of what one or two past participants might have said, the organizing committee felt that part of the magic of last year came about as a result of the fact that it did not respect disciplinary or hierarchical silos.  That everyone participated in a common conversation about a shared set of material was powerful.  In order to achieve small-scale intimacy despite an increase in the number of participants, we have organized simultaneous sessions.  Life is about trade-offs.  With an increase in choice and intimacy comes a decrease in shared experience.

It would be far easier NOT to plan a conference.  Truth be told, conference planning is not my favorite thing in the world to do.  What drives me and other organizers, however, is a commitment to serving our community, and to meeting a clear need – the need to better understand how to put educational data into practice.  As Aristotle famously observed, the best leader is one who leads from a desire, not for power, but out of duty and a sense of commitment to the good of their community.  If the same guiding principle was applied to conferences, there would be far fewer, and they would suck a whole lot less.