Three business lessons I learned from my father

My father is retiring today.

My father is leaving his working life just as I feel that mine is getting started. It seems fitting, then, to use my father’s retirement as an occasion to look back at the lessons he has taught me over the years, lessons that continue to shape how I approach business and life. There are many. Here are three.

My father, Don Harfield

You can’t steer a parked car.

You can’t steer a parked car. This is great advice for surviving and thriving amid uncertainty. None of us knows what the future holds. Increasingly, we need to expect the unexpected. Rather than be paralyzed in the face of the unknown, what I have learned from my father is the importance of passionately pursuing a goal and committing yourself to a particular direction, while also remaining flexible and open to changing trajectory (sometimes radically) as conditions change. As my father retires, his advice remains relevant no matter where you are in your career and in life.

Don’t be risk averse. Be risk aware.

Being risk averse produces fear, and leads to an inability to act. Being afraid of risk leads to decisions that are as bad as those made when risk goes unacknowledged. What risk aversion and its opposite have in common is a kind of laziness. If you don’t understand a project and the factors that condition its success, then you are stuck with temperament, simple heuristics, and ‘intuition.’ It is important to put in the work necessary to understand potential risks as much as possible, establish mechanisms to mitigate those risks, and build contingency into any plan to account for risks that you may not have identified or fully appreciated.

Do the right thing. Put people first.

In many ways, I feel like my belief in the importance of virtue can be traced back to the model my father set for me. Do the right thing. Put people first. Have faith that, in doing what’s right, success will happen as a matter of course. An important part of this is to avoid overdetermining what success looks like. It might mean fame or fortune, but it might also mean forming important relationships, achieving a sense of peace, or leaving an indelible mark on your community. If you go about your life chasing after success, whatever it is, you’ll always miss the mark. If, on the other hand, you seek only what is good, you’ll achieve success every time.

How to fail with data

Sometimes the most effective way of communicating the right way to do something is by highlighting the consequences of doing the opposite. It’s how sitcoms work. By creating humorous situations that highlight the consequences of breaching social norms, those same norms are reinforced.

At the 2017 Blackboard Analytics Symposium, A. Michael Berman, VP for Technology & Innovation at CSU Channel Islands and Chief Innovation Officer for California State University, harnessed his inner George Costanza to deliver an ironic, hilarious, and informative talk about strategies for failing with data.

What does this self-proclaimed ‘Tony Robbins of project failure’ suggest?

  1. Set unclear goals. Setting clear goals takes a lot of hard work and may require compromise. It’s way more democratic to let everyone set their own goals. That way, everyone can have their own criteria for success, which guarantees that, whatever you do, almost everyone is going to think of you as a failure.
  2. Avoid Executive Support. Going out and getting executive support is also a lot of work. It means going to busy executives, getting time on their calendars, and speaking to them in terms they understand. It also means taking the time to listen and understand what is important to them. Why not go it alone? Sure, it’s unlikely that you will achieve very much, but it’ll be a whole lot of fun.
  3. Emphasize the Tech. Make the project all about technology. And make sure to use as many acronyms as possible. Larger outcomes don’t matter. They are not your problem. Focus on what you do best: processing the data and making sure it flows through your institution’s systems.
  4. Minimize Communication. Why bother making people’s eyes glaze over talking about technology when you can avoid talking to anyone at all? Instead of having a poor communication strategy, it’s better to have no communication strategy at all. You’ll save the time and inconvenience of dealing with people questioning what you do, because they won’t know what you’re doing.
  5. Don’t Celebrate Success. If you have done everything possible to fail but still succeed despite yourself, it’s very important not to celebrate. Why bother having a party when people are already getting paid? Why take time out of the work day to reward people for doing their jobs? Isn’t it smarter to just tell everyone to get back to work? Seems like a far more efficient use of institutional resources.

Speaking from personal experience, Michael Berman insists that following these five strategies will virtually guarantee that you drive your data project into the ground. If failing isn’t your thing, and you’d rather succeed in your analytics projects, do the opposite of these five things and you should be just fine.

What is product marketing? A kind of manifesto

There is remarkably little written about product marketing.

For the last month, I have been tracking the terms “Product Marketing” and “Product Marketer” using Google Alerts. In that time, with a few exceptions, all I have seen are job advertisements. A LOT of job advertisements. For a position that is in such high demand, the fact that there is so little written about it is remarkable indeed.

So, what is product marketing? It’s complicated.

It is commonly accepted that product marketing exists at the intersection of marketing, product management, and sales. A product marketer ‘owns’ messaging for a product or product line. In support of field and central marketing, they work to ensure that what a product ‘means’ is coherent, consistent with broader corporate messaging and brand standards, and compelling to a full range of buying personas. The messaging produced by a product marketer comes to life in two forms: through outward-facing collateral used for demand generation, and inward-facing resources used for sales enablement.

So what is a product marketer? They are a story-teller who serves the interests of marketing, product, and sales through the creation of messaging that is coherent, consistent, and compelling.

It would be easy to stop here and think of the product marketer as a person in the present, as someone who creates stories that strike a balance between the three types of organizational interest they serve. Is a product marketer someone who creates messages that ‘work’ here and now? Yes. But if we also take seriously the role of a product marketer in creating, not just meaning, but also vision, then the product marketer also bears a kind of responsibility to the future. And as it turns out, the most effective and impactful product narratives are those that point beyond an immediate need and toward a future in which a thing is not only useful, but also important.

For me, the most exciting part of product marketing is its relationship to product management. This relationship is not one-way. It is not as if product management creates a thing, and then hands it to ‘the marketing guy’ to ‘market.’ To the extent that a product marketer is responsible for what a thing means, they also have a direct impact on what it becomes. With a meaning that is coherent, consistent, and compelling comes an understanding of the problems and needs of the market. It also necessarily defines values. By working with product management to understand, not just what is possible, but also what is meaningful, the product marketer importantly contributes to a vision for a product that is actualized in the form of a roadmap.

If you can’t say something important, don’t say anything at all.

How common is the commitment to importance among product marketers? I can’t say. But I would like to think that a commitment to importance is essential to being an excellent product marketer. It renders the role itself important (as opposed to merely useful). But with importance comes greater responsibility. It means developing domain expertise over and above the general expertise of being a product marketer. With domain expertise comes a greater sense of empathy for the industries your product supports.

The minute that a product marketer shifts their perspective from the present to the future, their locus of responsibility also changes. Focused on the present, the product marketer is an advocate on behalf of the product to the market. Focused on the future, the product marketer serves as an advocate to the product on behalf of the market.

What, then, is a product marketer? They are a story-teller who advocates on behalf of the market to an organization’s marketing, product, and sales departments through the creation of narratives that are coherent, consistent, and compelling.

Analytics isn’t a thing…it’s a relation

In response to the 2017 NMC Horizon report, Mike Sharkey recently observed that analytics had disappeared from the educational technology landscape. After being on the horizon for many years, it seems to have vanished from the report without pomp or lamentation.

Those of us tracking the state of analytics according to the New Media Consortium have eagerly awaited its arrival. In 2011, the time to wide-scale adoption was expected to be four to five years. In 2016, time to adoption was a year or less. In 2017, I would have expected one of two things from the Horizon Report: either (a) great celebration as the age of analytics had finally arrived, or (b) acknowledgment that analytics had not arrived on time.

But we saw neither.

Upon first inspection, analytics seems to have vanished into thin air. But, as Sharkey observes, this was not actually the case. Instead, analytics’ absence from the report was itself a kind of acknowledgment that analytics is not actually ‘a thing’ that can be bought and sold. It is not something that can be ‘adopted.’ Rather, analytics is simply an approach that can be taken in response to particular institutional problems. In other words, to call out analytics as ‘a thing’ is to establish a solution in search of a problem, as if ‘not having analytics’ were a problem in itself that needed to be solved. Analytics never arrived because it was never on its way. The absence of analytics from the Horizon Report, then, points to the fact that we now understand analytics far better than we did in 2011. If we knew then what we know now, analytics would not have been featured in the report in the first place. We would have put understanding ahead of tools, and bypassed the kind of hype out of which we are only now beginning to emerge.

I agree with Mike. But I want to go a step further. I have always been fascinated by ontologies, and the ways in which the assumptions we make about ‘thingness’ affect our behavior. I have a book in press about the emergence of the modern conception of society. I have written about love (Is it a thing? Is it an activity? Is it a relation? Is it something else?). And I have written about dirt. Mike’s post has served as a catalyst for the convergence of some of my thinking about analytics and ‘thingness.’

Analytics is not a thing. I can produce a dashboard, but I can’t point to that dashboard and say “there is analytics.” There is an important sense in which analytics involves the rhetorical act of translating information in such a way as to render it meaningful. In this sense, a dashboard only becomes ‘analytics’ when embedded within the act of meaning-making. That’s why a lot of ‘analytics’ products are so terrible. They assume that analytics is the same as data science with a visualization layer. They don’t acknowledge that analytics only happens when someone ‘makes sense’ out of what is presented.

Analytics is like language. Just like language is not the same as what is represented in the dictionary, analytics is not the same as what is represented in charts and graphs. Sure, words and visualizations are important vehicles for meaning. But just as language goes beyond words (or may not involve words at all), so too does analytics.

It is a mistake to confuse analytics with data science. And it is a mistake to confuse it with visualization. If analytics is about meaning-making, then we are working toward a functional definition rather than a structural one. This shift from structure to function opens up some really exciting possibilities. For example, SAS is doing some incredible work on the sonic representation of data.

As soon as we begin to think of analytics beyond ‘thingness’ and adopt a more functional definition, its contours dissolve really quickly. If what we are talking about is a rhetorical activity according to which data is rendered meaningful, then we are no longer talking about visualization. We are talking about representation. In a recent talk, I suggested that, to the extent that analytics is detached from a particular mode of representation, and what we are talking about is intentional meaning-making — meaning-making intended to solve a particular problem — then a conversation can easily become ‘analytics.’

So analytics is not a ‘thing.’ It is not something that we can point to. Is it an activity? Do we ‘do analytics’? No, analytics isn’t an activity either. Why? Because it is communicative, and so requires the complicity of at least one other. Analytics is not something that we do. It is something we do together. But it is not something that we do together in the same way that we might build a robot together, or watch television together, where what we are talking about is the aggregation of activities. What we are engaged in is something more akin to communication, or love.

Analytics is not a thing. Analytics is not an activity. Analytics is a relation.

Is it ethical for marketers to ‘nudge’?

They almost got me.

As I reached for the gasoline nozzle, I realized at the very last minute that what I thought was regular gasoline was actually ‘plus,’ a grade that I did not want and that I would have paid a premium for. The reason for my near mistake? The way my options were ordered. I expected the grades to be ordered by octane as they almost always are. But in this case, regular 87 was sandwiched between two more premium grades.

The strategy that was employed at the pump at this Shell station in Virginia is an example of ‘nudging.’ It is an example of leveraging preexisting expectations and habits to increase the chances of a particular behavior. There is nothing dishonest about the practice. Information is complete and transparent, and personal freedom to choose is not affected. It is simply that the environment is structured in such a way as to promote one decision instead of others. 

Ethically, I like the position of Thaler and Sunstein when they talk about ‘libertarian paternalism.’ In their view, nudging can be a way to reconcile a strong belief in personal freedom with an equally strong belief that certain decisions are better than others. But not all nudges are created equal. Just as it is possible to promote decisions that are better for individuals, so too is it possible to increase the likelihood of choices that serve other interests, and that even serve to subvert the fullest expression of personal liberty, as in the gasoline example above.

One way to think of marketing is as the use of the principles of behavioral economics to change consumer behavior. Marketers are in the business of nudging. Because nudging has a direct impact on human behavior, it is also a fundamentally ethical enterprise. Marketing carries with it a huge burden of responsibility.

What ethical positions do you take in your marketing efforts? What would marketing look like if we were all libertarian paternalists?

Overcoming early analytics challenges at UMBC

In 2011, Long and Siemens famously announced that big data and analytics represented “the most dramatic factor shaping the future of higher education.” Now, five years later, conversations about the use of data and analytics in higher education are more mixed. In 2016, the Campus Computing Project released an annual report that used the language of “analytics angst.” In a recent blog series for the EDUCAUSE Review, Mike Sharkey and I argue that analytics has fallen into a “trough of disillusionment.” What makes some institutions successful in their analytics where others flounder? How can we work to scale, not technologies, but high-impact practices? Let’s examine one example.

The University of Maryland Baltimore County (UMBC) began working with Blackboard Analytics in 2006. At that time, they simply wanted to support access to basic information to ensure that the institution was effective and efficient in its operations. Shortly after gaining access to their institutional data, however, they quickly began asking deeper questions about student success.

READ FULL STORY HERE >> http://blog.blackboard.com/overcoming-early-analytics-challenges-at-umbc/

Equestrian Data Science: Ranking Eventers in 2016

Coming up with a list of the top eventers based on their performance in 2016 is hard. The sport of three-day eventing is complex and multi-faceted, and the decisions we make about which factors to consider make a significant difference to the final result of any evaluation process. Because of this complexity, and because there is bound to be strong disagreement about who ends up on a list of this kind, it is rare to see anything like this published. And yet, I still believe that this exercise has value, particularly for fans like myself who find rankings a useful way of understanding the sport.

Note that the ranking that I have produced is the result of a lot of thinking and expert consultation. It is also a work in progress. I have tried to document some of the theory and methods underlying the list(s), but if you want to bypass this discussion, feel free to skip over these sections and see the lists themselves.

Guiding Principles

All ranking schemes involve subjective judgement. They involve establishing criteria on the basis of values. Since values differ from individual to individual, disagreement is bound to happen and conflicting lists are bound to appear. But there are two guiding principles that I believe should apply to all rankings:

(1) Look to the data. Human beings are great at making decisions and at coming up with justifications after the fact. We all have biases, and we are all terrible at overcoming them. By limiting ourselves to measurable qualities and available data, we can lessen the impact of irrelevant and inconsistently applied preferences.

(2) Be transparent. Being data-driven in our decision-making processes doesn’t mean being objective. Decisions have to be made about the kinds of data to include, the ways in which that data is transformed, and the analytical tools that are applied. This is not a bad thing. Not only are these decisions necessary, they are also important because it is here that data becomes meaningful. Here, I argue that making the ‘right’ decisions is less important than making your decisions explicit.

Method

Inclusion Criteria

Who should be considered for inclusion in a list of top eventers world-wide? Here is a list of criteria that I believe any eventer needs to satisfy in order to be considered among the top in the sport. This is where values and judgement come in, and there is bound to be some disagreement. So it goes.

CCI only
There are several significant differences between CCI and CIC events. The demands that each of these event types place on horse and rider are so different that, for all intents and purposes, they should be considered different sports entirely. Compared to CIC events, CCIs are characterized by longer cross country courses, have stricter vetting requirements, and include show jumping as the final of the three phases.  CIC competitions are developmental.  The most elite riders in the world must be able to compete, complete, and excel in CCI events.  For this reason, I have chosen only to include CCI riders in the list.

3* and 4* only
This list is meant to include the best of the best. What this means is only including riders who have successfully competed at either 3 star or 4 star levels. Why not just include riders who have competed at the 4 star level and exclude 3 star results? The fact that there are only six 4 star events means that we don’t have a whole lot of data from year to year. The decision to include 3 star data also makes sense in light of recent decisions to downgrade Olympic and World Equestrian Games events to the 3 star level.

At least two competitions
There is a difference between established CCI 3*/4* pairs and pairs that have merely competed at that level. In order to be considered for the list, a horse and rider combination must have completed a minimum of two CCI events at either the 3* or 4* level.

100% event completion rate
As recent Olympic history has underscored, the most important quality of an elite rider is the ability to consistently complete events at the highest level. Consistency is key. So I have only included riders in the list that successfully completed every CCI event they entered in 2016.
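
To make these criteria concrete, here is a minimal sketch of how they might be applied to a flat table of 2016 results. The file name and the column names (event_type, level, status, rider, horse) are hypothetical stand-ins for whatever an actual FEI results export contains.

```python
import pandas as pd

# Hypothetical results export; a real FEI extract will use different column names.
results = pd.read_csv("eventing_results_2016.csv")

# Criteria 1 and 2: CCI starts at the 3* or 4* level only.
cci = results[
    (results["event_type"] == "CCI") & (results["level"].isin(["3*", "4*"]))
].copy()

# A start counts as completed if the pair finished all three phases.
cci["completed"] = cci["status"] == "Completed"

# Aggregate starts and completions per horse-and-rider pair.
pairs = cci.groupby(["rider", "horse"]).agg(
    starts=("completed", "size"),
    completions=("completed", "sum"),
)

# Criteria 3 and 4: at least two CCI starts and a 100% completion rate.
eligible = pairs[(pairs["starts"] >= 2) & (pairs["completions"] == pairs["starts"])]
print(eligible.head())
```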

Statistical Methods

Once we have established a pool of eligible pairs, what is the best way to rank them? Do we simply take an average of their final scores? How do we account for the fact that some pairs excel in dressage while others shine on cross country or in show jumping? How do we account for the fact that judging differs from event to event, and for differences in terrain, weather, and course design? From a statistical perspective, we know that some events are ‘easier’ than others. How do we fairly compare the relative performance of horses and riders competing under different sets of conditions, even at the same level?

One way of overcoming these differences is through a statistical process called standardization. A z-score is the difference between the number of points a pair earned and the average number of points earned by all competitors at the same event, expressed in standard deviation units. A score of 0 means that a pair is average. Because final scores are penalty points, a negative z-score means the pair is above average, and a positive score means that it is below. For example, a pair finishing on 45 penalties at an event where the mean is 55 and the standard deviation is 10 has a z-score of (45 - 55) / 10 = -1.0, a full standard deviation better than the field. By converting points into z-scores, we are able to account for various differences from event to event. By comparing average final z-scores, we can more easily and reliably compare horse and rider combinations on an even playing field.
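
For readers who want to see the mechanics, here is a rough sketch of the standardization and ranking steps using pandas. The file and column names (event_id, rider, horse, final_score) are hypothetical, and the table is assumed to contain CCI 3*/4* results only, with the eligibility filter from the sketch above already applied.

```python
import pandas as pd

# Hypothetical table of CCI 3*/4* results: one row per pair per event.
cci = pd.read_csv("cci_results_2016.csv")  # columns: event_id, rider, horse, final_score

# Standardize final penalty scores within each event against every starter:
# z = (final_score - event mean) / event standard deviation.
cci["z"] = cci.groupby("event_id")["final_score"].transform(
    lambda s: (s - s.mean()) / s.std()
)

# Average each pair's z-scores across their events and take the best ten.
# (The eligibility criteria -- at least two starts, all completed -- are assumed
# to have been applied already, e.g. with the filter sketched earlier.)
top10 = (
    cci.groupby(["rider", "horse"])["z"]
       .mean()
       .sort_values()   # penalty points: a lower (more negative) z is better
       .head(10)
)
print(top10)
```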

Once we have standardized final scores, we can sort pairs according to their average z-score and take the top 10. VOILA! We have a list of top riders. Here are the results, along with a bit more information about their performance at the 3* and 4* levels.

The Results (worldwide)

  1. Michael Jung & Fischerrocana FST (GER)
  2. Maxime Livio & Qalao des Mers (FRA)
  3. Hazel Shannon & Clifford (AUS)
  4. Oliver Townend & ODT Ghareeb (GBR)
  5. Jonelle Price & Classic Moet (NZL)
  6. Andrew Nicholson & Teseo (NZL)
  7. Hannah Sue Burnett & Under Suspection (USA)
  8. Nicola Wilson & Annie Clover (GBR)
  9. Andreas Dibowski & FRH Butts Avedon (GER)
  10. Oliver Townend & Lanfranco (GBR)

The Results (USA)

If we apply the same criteria above, but only consider American CCI 3*/4* riders in 2016, we get the following list:

  1. Hannah Sue Burnett & Under Suspection
  2. Hannah Sue Burnett & Harbour Pilot
  3. Boyd Martin & Welcome Shadow
  4. Buck Davidson & Copper Beach
  5. Elisa Wallace & Simply Priceless
  6. Lauren Kieffer & Landmark’s Monte Carlo
  7. Lillian Heard & LCC Barnaby
  8. Kurt Martin & Delux Z
  9. Phillip Dutton & Fernhill Fugitive
  10. Sharon White & Cooley on Show

Some may find it odd that Phillip Dutton & Mighty Nice didn’t make either top 10 list, in spite of being a bronze medalist at the 2016 Olympic Games in Rio, Brazil. The reason for this is that the FEI dataset I have used intentionally excludes Olympic results because they are kind of strange…a horse of a different color, so to speak. Not including the Olympics, this pair only competed at one CCI event in 2016: the Rolex Kentucky Three-Day Event, where they finished 4th with a final score of 57.8, which converts to a z-score of -1.11. Based on this score, the pair would rank first in the national rankings and fifth in the world. But this is only one CCI event, and so I could not include them in the lists based on the criteria I established above.


Originally posted to horseHubby.com