Worrying about worry is worrying

Worry is something that happens because we want control and can’t have it.

We worry BECAUSE we have no ultimate control over what the future will bring. And we worry IN ORDER TO exercise control over the future.

Do you ever find yourself free from worry and think, “I’m not worrying. ACK! I must not be on top of everything I need to be on top of. I’d better start thinking about everything I should be thinking about … and start worrying about everything that might go wrong. Because if I stop worrying about what might go wrong, everything will go wrong.”

I do.

Augustine famously observed that the root of all evil lies in worrying about things over which you have no control.

It seems to me that the key to managing anxiety — which for me often comes in the form of dread, or worry without a well-defined object — involves coming to realize that the ONLY influence we have over the future involves our activity today. And that worry today is an impediment to exactly the kind of activity required to meaningfully affect the future.

Should we each have a plan for the future? Certainly. But every plan should be loosely held and flexible in the face of change. Our focus should be on ensuring excellence in what we do today, because it is a lack of presence today that introduces future risk, far more than a lack of sufficient concern for the future.

P.S. It can be hard to stop worrying when worry is itself a kind of superpower. For example, I personally have a tremendous ability to produce excellent work today BECAUSE I project into the future and think about all of the potential downstream implications of each and every minute detail. And I worry that giving up worry will actually negatively impact the quality of products here and now.

A question for which I don’t have an answer: how is it possible to stop worrying without also giving up concern for the future? How possible is emotionless time travel?

Grandma Harfield’s Marmalade Recipe

My father’s mother wasn’t particularly known for her cooking. Very British. Usually pretty bland. But I remember her marmalade with great fondness. When I was a child, I thought ALL marmalade was gross (too bitter). Now, however, it is against my Grandma’s recipe that I judge all others.


  • 8 Seville oranges
  • 2 lemons
  • 6 lbs of sugar
  • 9 cups of water


Cut oranges in half and remove seeds (pips) and juice. Grind the rind and add to the juice. Cover with 9 cups of water. Soak the pips in 1 cup of water overnight.

In the morning, boil the orange mixture until the skins are tender, about 45 minutes.

Boil pips separately, strain and add the water to the pulp.

Add sugar to the pulp and boil until it gels (about 1.5 hours). Pour into jars and seal.

An evolving dictionary of Intelligent Automation

When I was actively involved as a researcher, it was a running joke (and also a truism) that scholarly distinction wasn’t earned through originality, but through the invention of new terms to describe old things. The same is most certainly true when it comes to industry jargon. And it’s especially difficult to keep up with terms used to describe the use of artificial intelligence to automate business processes.

The following is a list of common — and frequently confused — terms. This is an evolving field, and in some cases there is still no general consensus around terminology. If you feel a term is missing, or a definition is lacking, please leave a comment below.

Autonomous Enterprise

Sarah Burnett (2022) simply describes an autonomous enterprise as one in which most, if not all, high-volume, repetitive, and transactional business processes that make up core business functions are automated. According to Burnett, key characteristics of the Autonomous Enterprise include:

  • Conducts core, daily business functions using AI, with minimum human touchpoints
  • Employs people who are required to perform few repetitive tasks
  • Empowers staff to automate repetitive tasks for themselves
  • Uses operational data to create digital twins in order to identify inefficiencies and improve processes
  • Analyses operational data to support strategic decisions, including risk management and innovation

Autonomous Operations

Most commonly, the term refers to the operation of vehicles, heavy equipment, and physical robots in ways that make use of webs of sensors and AI in order to automate things like transportation, manufacturing, mining, agriculture, and the like.

For the sake of clarity, I would recommend the term ‘Autonomous Business Operations’ to describe the use of AI to automate decisions, actions, and self-learning in the business process domain.

Decentralized Autonomous Organization (DAO)

Decentralized autonomous organizations (DAOs) are blockchain-based organizations fed by a peer-to-peer (P2P) network of contributors. Their management is decentralized without top executive teams and built on automated rules encoded in smart contracts, and their governance works autonomously based on a combination of on-chain and off-chain mechanisms that support community decision-making.

Santana & Albareda (2022) Blockchain and the emergence of Decentralized Autonomous Organizations (DAOs): An integrative model and research agenda


Hyperautomation

Hyperautomation is not a thing or a specific set of technologies. Instead, it is an organizational mindset according to which everyone is automating everything that can be automated, all the time.

Intelligent Automation

While many definitions tie the term Intelligent Automation explicitly to particular kinds of technology (i.e. as the combination of Business Process Management (BPM), Artificial Intelligence (AI), and Robotic Process Automation (RPA)), I prefer the more common-sense and technology-agnostic definition proposed by Coombs, et al (2020): “the application of AI in ways that can learn, adapt, and improve over time to automate tasks that were formerly undertaken by a human.”

Virtual Enterprise (VE)

A virtual enterprise (VE) is a temporary alliance of businesses that come together to share skills or core competencies and resources in order to better respond to business opportunities, and whose cooperation is supported by computer networks. (Wikipedia)

Wellness Day


Friday was a ‘Wellness Day’ at Pegasystems. What I love about these days is that you don’t feel like you’re missing anything (because everyone is off) AND it’s unlikely that your family also has the day off, which means it’s an opportunity to truly focus on yourself.

I’ve been enjoying posts from colleagues about how they spent their day, so I thought I’d share as well.

Music was once a very important part of my life, but it’s something I’ve neglected for the better part of 20 years. Passion reignited, this is something I plan to do a lot more of.

You’re Going to Die: A Children’s Story

By Timothy Furstnau (2000). Film by Dennis Palazzolo and narrated by Vito Acconci.

One day, you’re going to die.

Your life will just end, right then and there. You will be no more.

It probably won’t be pretty like in the movies or in fairytales. Maybe something extraordinary will happen, but probably not. Sometimes people have enough energy to say or do something meaningful right before they die. And that’s nice. But that is uncommon. Usually, people just die.

And you’re going to die, too.

Some people will say nice things to make you feel comfortable before you die. Others will tell you wild stories that say you’re not actually going to die. But those people are only nice, not honest. Because you really are going to die.

There are many ways to think about dying, different stories to tell and different things to think dying is like.

But death happens to people in the same way, no matter which way they think about it.

Everyone dies and is no more.

Because we are so good at making up stories and believing them, you may even begin to experience dying in a way that may fit one of your stories.

And that is okay.

But then you’ll die and it won’t matter what story you had. When you die, you won’t be able to see if your story was right or not, because there will be no story and no person to believe or compare stories.

You will be dead.

You won’t go anywhere and you will not even stay in the same place, because there will be no “you”.

There won’t be anything and there won’t be even nothing, because you won’t be there to feel that there is nothing.

There won’t be forever because you won’t be there to feel there’s forever.

You will be dead.

And you won’t be able to think: “Gee, I’m dead.” Because you can’t even think or feel or be anything besides dead when you’re dead.

Death can happen at any time.

Sometimes little babies die. Sometimes mommies and daddies die. And when people get too old, after they have had a lot of life, they die.

And you’re going to die, too.

Sometimes it hurts when you die, and other times people don’t feel a thing. We only know what people say and do right before they die.

After you die, you won’t feel anything. Not even pain.

But people who are still alive when you die might hurt because you are gone. And that is ok.

People love other people and it usually hurts when people we love die. We even comfort ourselves with those stories that the dead person is not really dead. And that is ok too.

But of course, everyone dies.

And you will, too.

When you say, “I want this” or “I feel this”, you are talking about yourself… You might think yourself different from the rest of the world or you might think you are just a little part of the world. But when you die, yourself dies and so does your view of the world. So you won’t have a world or a self or any thoughts about either because you just won’t exist.

When you die, you won’t feel sad anymore, you won’t worry anymore, you won’t care about anything, or want anything. You won’t even enjoy being dead, because you won’t be anything, only dead.

That is why you should love life.

There are lots of things in life that seem bad, but since life only happens once, and there is only one “you”, love life while you can. No matter what happens.

If you feel sad, at least you can feel sad. If you are worried, at least you can be worried. You exist. You are alive.

If you love life this way, when it comes time to die, you will be happy because death is the perfect end to life. Life is only good because death ends it.

So live and love life…

Because you don’t even know how, when or why, but since you’re alive…

You’re going to die.

The Protestant Ethic and the Spirit of Capitalism

I am re-reading Max Weber’s classic sociological work “The Protestant Ethic and the Spirit of Capitalism.”

The main thesis (as I remember it from grad school) is that the Reformation represented a splintering of worldview such that traditionally shared belief systems about God, humans, and the role of human beings relative to the rest of nature could no longer be taken for granted. As a result, Protestants (guided by the notion of a ‘priesthood of all believers’) began searching for external validation of the extent to which their beliefs and actions were true and good. And the form that external validation took was the creation of wealth.

Fast forward to today, the effects of this shift are felt more than ever before. The proliferation of worldviews that has taken place since the Reformation is tremendous. Everyone is looking for validation. I believe that this need for external validation is at the root of how important social media has become since its invention. And it is most certainly this need for external validation that is responsible for most economic activity in the West.

We want meaning. We want to be worthy of salvation. And we look for external rewards as a way of signaling our value.

But all of these signals are ephemeral. The money I make today provides me with validation in the moment, but what about the next? The attention I get on social media provides me with the relief of approval now, but what about later?

We are all striving for evidence of our salvation, even if we don’t have a clear or shared conception of what salvation is. The popularity of superhero movies is a symptom of this widespread neurosis. We all want to be special. We all want to be chosen. We all want proof that ours is a life worth living.

The best and worst career advice I ever received

If you don’t wake up every morning feeling like you’re about to be fired, you’re doing something wrong.

I received this piece of advice from a mentor early in my career. It stuck.

What he meant to convey was that complacence does not breed success. A ‘fighting for your life’ mindset ensures productivity because you are constantly working to prove your value by adding value to the organization you serve.

This advice was accompanied by the following:

Change your job every two years.

Until recently I didn’t make the connection between these two pieces of advice. I assumed that the former was a mindset, and that the latter was simply a tactic for achieving rapid career progression (since it is usually easier to earn more pay and more title faster by changing employers rather than staying with the same one).

But the two pieces of advice are not unrelated. In fact, the former makes the latter NECESSARY.

Living in constant fear is emotionally exhausting. It’s hard work waking up every morning hoping that you’ll prove your worth, and going to sleep each night (if you CAN sleep) questioning whether your efforts have been enough.

Living in constant fear of failure is also incredibly damaging to work relationships. It means holding others to the same impossible standards to which you hold yourself. It leads to a lack of empathy as results trump all else. And it leads to a micro-management leadership style that stifles innovation on the part of others.

If I’m fighting for my life, I’m going to do everything I can to remove risk associated with reliance on others. Fear leads to a command and control approach to leadership that we have come to learn is actually pretty ineffective.

After about two years of living in fear at an organization, you are going to hit a wall. You’re going to burn out. More than that, you’re going to burn others out as well.

After two years, you’re exhausted and in desperate need of a reset. And the fear of being fired has built up to the point that you are CERTAIN that it’ll happen any day now, and so you take things into your own hands. You start planning your next move.

And after two years, you’ve done so much damage to your relationships with colleagues and burned so many bridges that any future you once felt you had now feels foreclosed.

So every two years you change jobs. And every two years that change in jobs comes with a better job title and bigger salary. My mentor’s advice pays off. But it’s also toxic and unsustainable.

How do you break the cycle?

First, stop doing that. Recognize that the toxic effects of your fear of being fired are actually making it MORE LIKELY that you will be fired. Not because of poor performance, but because you have become cancerous: hyper-productive individually but at the expense of the health of those around you and of your organization as a whole. At some point the only way for an organization to preserve itself is to excise you.

At the end of the day, you don’t have control over your destiny at a business. What you DO have control over is the extent to which your behavior is harmful, and the extent to which you are likely to be fired because you’re an asshole.

Second, rethink your role. Is your purpose to preserve yourself? Or is it to support the overall health of your organization? By rethinking your role in terms of the latter, you create a space of empathy. You become concerned with the interests and feelings of others. You listen rather than barking orders. And you become invested in the success of others as the NUMBER ONE measure of your own success.

The sentience of Google’s LaMDA AI has been grossly exaggerated

On Artificial Intelligence and Intellectual Virtue

It is highly unlikely that Google’s LaMDA AI chatbot has become sentient, which is to say ‘self-aware.’ What is more likely the case is that it simply represents a sophisticated simulation.

But how would we know for certain? At a time when the philosophical problem of other minds has yet to be solved, and at a time when our understanding of the cognitive and emotional states of even animals is at best incomplete, we simply lack the tools necessary to determine the sentience of ANYTHING except by inference.

That which is perceived as real will be real in its consequences. Most behave as if other humans are self-aware. And how a person treats an animal is largely dependent on the extent to which they believe that animal has a mind, is self-aware, and is capable of things like empathy.

Google engineer Blake Lemoine is unfortunately — and dangerously — misguided in his claims that LaMDA has become sentient. His mistake comes as a result of conflating two types of knowledge: episteme, which involves the ability to reason from first principles, and phronesis, which involves ethical decision-making, or decisions about right action under conditions of uncertainty.

The history of computers has taught us that there is nothing uniquely human about episteme, because it simply involves the application of logical functions to a set of propositions in order to derive valid conclusions. Episteme is about applying rules to facts (which may or may not also be true), and that is something that a computer does all day long.

A disembodied chat bot, however, cannot be sentient because it does not sense. Because it does not sense, it may have an abstract conception of something like pain, but it is not something that it can experience. The same applies to other important concepts like goodness, love, death, and responsibility. It certainly does not feel empathy.

In other words, until an AI is sentient — having the ability to experience sensations — it cannot be sentient — being self-aware. In the absence of an ability to experience the world around it, there is no sense of responsibility. And all acts of cognition are reduced to episteme. (Even probabilistic judgements made under conditions of uncertainty are reduced to episteme, since they are merely the result of applying rules to facts). This is a major reason why we should not trust an AI to make ethical decisions: computers are never uncertain, and they are never responsible for their calculations.

Phronesis (also known as ‘prudence’ or ‘practical wisdom’) involves far more than applying a set of rules to a set of conditions (although this is certainly what fundamentalist religions try to do). It involves struggle. It involves uncertainty. And it involves personal responsibility.

TyTy and me

For example, when my wife and I had to make the decision to put our dog to sleep a few weeks ago, the decision-making process did not involve reasoning from first principles. It involved empathy. It involved struggle. And it involved an understanding that failure to make a decision would itself be a decision for which we were responsible.

Phronesis is hard. It involves struggle because it is something that is only possible by sentient beings interacting in empathy alongside other sentient beings. As Frans De Waal reminds us, empathy is not abstract. It is lived. It is embodied.

If phronesis is only possible for things that sense, feel, act, and are personally responsible in the world (i.e. sentient beings), and a disembodied chat bot like Google’s is not capable of sensation or meaningful activity, then we cannot consider it sentient in the way that Blake Lemoine would have us believe it is. Instead, it is an opportunity for us to test our assumptions about what it means to be human, and to understand that our humanity lies in NEITHER our ability to calculate (episteme) NOR our ability to manufacture (techne), because the rule-based nature of each of these activities allows for automation via machines. Rather, our humanity comes from our ability to make practical and ethical decisions under conditions of uncertainty, in ways that ultimately make both episteme and techne possible.

Bots don’t care if you live or die

The best approaches to automation don’t automate everything. Where processes are automated, exceptions are bound to happen. And when those exceptions happen it’s important that humans are able to address them…especially when the life and death of humans are concerned.

When too much is automated, humans lose skill and vigilance. They become less responsive and less capable when things go badly. It’s for this reason that responsibly-built airplanes automate less than they can, and in fact intentionally build friction into their processes.

A bot doesn’t care if we live or die, succeed or fail. That’s why a robust approach to hyperautomation must consider, not just whether a process CAN be automated, but also whether and the extent to which it SHOULD be.

How American musket manufacturing paved the way for low-code software development

The early days of computing relied on mechanical devices. From the abacus through to Charles Babbage’s engines, these devices — much like the modern computer — were developed to increase the speed with which individuals could perform calculations in support of commercial (accounting, insurance), scientific (especially astronomy), and military applications.

The purpose of these machines was not to perform mathematical operations that were impossible for human beings, but to democratize access to mathematics in much the same way as the printing press democratized access to the rest of human knowledge.

The history of computing is concerned with automating mathematical calculation in support of some practical end.

As a result of the mechanical nature of these devices, it should come as no surprise that there is a strong relationship between computation, mass production, and the military.

Manufacturing muskets in the 18th century

Up until the mid-18th century, armaments needed to be built by skilled craftsmen using custom-fitted parts. The move toward standardization, and the ability to measure with exacting precision, made it possible by 1765 to produce interchangeable gun carriages.

By 1785, Honoré Blanc’s experiments with interchangeable musket parts came to the attention of Thomas Jefferson, who saw Blanc’s vision for interchangeability as a solution to a chronic shortage of skilled craftsmen in the US, which made the US dependent upon Europe for most of its weapons manufacturing.

In 1798, based on his vision for a water-powered machine tool, Eli Whitney won a contract to deliver 10,000 muskets to the US War Department despite having no factory, no workers, and no experience in gun manufacturing.

Whitney struggled to deliver, so in the same ‘fake it till you make it’ spirit that characterizes modern Silicon Valley, Whitney bought some time through an 1801 demonstration of what he called the Uniformity System. The demonstration was likely faked. He was only ever able to deliver 500 muskets, none of which had any interchangeable parts.

Despite early failures of the Uniformity System, the vision remained, was eventually perfected by John Hall in 1827, and became known as the American System of Manufacturing.

Low-code, craftsmanship, and competitive advantage

What does this have to do with low-code software development?

Just as in the early days of American weapons manufacturing, specialized craftsmen are today in short supply. By the end of 2021, there were nearly 1 million unfilled IT jobs. And by 2030, the number of software job vacancies is expected to rise by almost 22%.

And just as in the early days of American manufacturing, a heavy reliance on professional developers (craftsmen) means that reuse is limited, patching is difficult, and quality is inconsistent.

Looking back at the history of manufacturing, the ‘app factory’ metaphor is a powerful way of describing the value of low-code approaches to software development. It would be silly to hold on to old notions of craftsmanship (which are slow, inconsistent, and non-interchangeable) when there are wars to be won. And it is just as silly to hold on to romantic notions of craftsmanship in software development when the global landscape across every industry is as competitive as ever.