Does ChatGPT ‘understand’?

In discussions about generative AI, I frequently hear folks state or imply that it’s fundamentally different from human cognition because AI doesn’t ‘understand.’

However, I’m not convinced that we actually understand ‘understanding.’ And I’m not entirely convinced that I — or any other human — ‘understand’ in the ways we want to think about it (and yes, I realize that sounds paradoxical).

I hear people criticize ChatGPT for being nothing more than elaborate auto-complete. But that’s kind of what language is: it streams. It doesn’t chunk.
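(If it helps to see what I mean by streaming, here’s a toy sketch of one-token-at-a-time generation in Python. The bigram table and function are my own stand-ins for illustration, not anything from ChatGPT itself.)

```python
import random

# A toy bigram table: each word maps to words that plausibly follow it.
# This is an assumed stand-in, not how ChatGPT actually scores tokens.
bigrams = {
    "language": ["streams", "demands"],
    "streams": ["onward", "coherently"],
    "demands": ["coherence"],
    "coherence": ["arrives"],
}

def next_token(prev: str) -> str:
    """Choose a continuation conditioned only on what was just said."""
    return random.choice(bigrams.get(prev, ["..."]))

utterance = ["language"]
for _ in range(4):
    utterance.append(next_token(utterance[-1]))

print(" ".join(utterance))  # e.g. "language demands coherence arrives ..."
```

The point isn’t fidelity to ChatGPT’s internals; it’s that each word is chosen in flight, conditioned only on what has already been said. The text streams.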

Yes, language demands coherence (because inconsistency makes us anxious…which is a feeling), but that coherence frequently comes in the form of what Miller refers to as anacoluthonic lies…coherence is not something that precedes an utterance, but is often something that arrives THROUGH an utterance. Intentionality is something we ascribe to language IN language only after we speak. It is a feeling. It’s an effect of speech rather than its cause.

It’s kind of a pathos vs logos thing: we are feeling things masquerading as thinking things.

I don’t know that we can claim superiority over machines in terms of our understanding (which is just patterns of thought — like reason — applied to memory as far as I can tell), which is why our ‘understanding’ of things changes over time as language recursively consumes itself. Where we DO differ from AI (although not from many non-human animals) is in our primordial capacity for emotion…the pathos bit.

The sentience of Google’s LaMDA AI has been grossly exaggerated

On Artificial Intelligence and Intellectual Virtue

It is highly unlikely that Google’s LaMDA AI chatbot has become sentient, which is to say ‘self-aware.’ More likely, it simply represents a sophisticated simulation.

But how would we know for certain? At a time when the philosophical problem of other minds has yet to be solved, and at a time when our understanding of the cognitive and emotional states of even animals is at best incomplete, we simply lack the tools necessary to determine the sentience of ANYTHING except by inference.

That which is perceived as real will be real in its consequences. Most behave as if other humans are self-aware. And how a person treats an animal is largely dependent on the extent to which they believe that animal has a mind, is self-aware, and is capable of things like empathy.

Google engineer Blake Lemoine is unfortunately — and dangerously — misguided in his claims that LaMDA has become sentient. His mistake comes as a result of conflating two types of knowledge: episteme, which involves the ability to reason from first principles, and phronesis, which involves ethical decision-making, or decisions about right action under conditions of uncertainty.

The history of computers has taught us that there is nothing uniquely human about episteme, because it simply involves the application of logical functions to a set of propositions in order to derive valid conclusions. Episteme is about applying rules to facts (which may or may not also be true), and that is something that a computer does all day long.
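(To see just how mechanical episteme is, here’s a minimal sketch of naive forward chaining: rules applied to facts until nothing new follows. The facts and rules are illustrative assumptions, not a real inference engine.)

```python
# Naive forward chaining: mechanically apply rules to facts until no
# new conclusions appear. Facts and rules here are illustrative only.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),  # humans are mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),  # mortals die
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # a valid inference, drawn without understanding
            changed = True

print(sorted(facts))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```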

A disembodied chat bot, however, cannot be sentient because it does not sense. Because it does not sense, it may have an abstract conception of something like pain, but pain is not something it can experience. The same applies to other important concepts like goodness, love, death, and responsibility. It certainly does not feel empathy.

In other words, until an AI is sentient in the first sense — having the ability to experience sensations — it cannot be sentient in the second — being self-aware. In the absence of an ability to experience the world around it, there is no sense of responsibility. And all acts of cognition are reduced to episteme. (Even probabilistic judgements made under conditions of uncertainty are reduced to episteme, since they are merely the result of applying rules to facts.) This is a major reason why we should not trust an AI to make ethical decisions: computers are never uncertain, and they are never responsible for their calculations.
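(Even the probabilistic case is just arithmetic. Here’s Bayes’ rule applied mechanically; all of the numbers are made up for the example.)

```python
# Bayes' rule applied mechanically. All values are hypothetical.
p_h = 0.01           # prior: probability of the hypothesis
p_e_given_h = 0.90   # likelihood: probability of the evidence if it's true
p_e = 0.05           # marginal: probability of the evidence overall

p_h_given_e = p_e_given_h * p_h / p_e  # posterior: a rule applied to facts
print(p_h_given_e)  # ~0.18; the 'uncertainty' is computed, never felt
```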

Phronesis (also known as ‘prudence’ or ‘practical wisdom’) involves far more than applying a set of rules to a set of conditions (although this is certainly what fundamentalist religions try to do). It involves struggle. It involves uncertainty. And it involves personal responsibility.

TyTy and me

For example, when my wife and I had to make the decision to put our dog to sleep a few weeks ago, the decision-making process did not involve reasoning from first principles. It involved empathy. It involved struggle. And it involved an understanding that failure to make a decision would itself be a decision for which we were responsible.

Phronesis is hard. It involves struggle because it is something that is only possible for sentient beings interacting in empathy alongside other sentient beings. As Frans de Waal reminds us, empathy is not abstract. It is lived. It is embodied.

If phronesis is only possible for beings that sense, feel, act, and are personally responsible in the world (i.e. sentient beings), and a disembodied chat bot like Google’s is not capable of sensation or meaningful activity, then we cannot consider it sentient in the way that Blake Lemoine would have us believe it is. Instead, it is an opportunity for us to test our assumptions about what it means to be human and to understand that our humanity DOES NOT lie uniquely in our ability to calculate (episteme) or in our ability to manufacture (techne), because the rule-based nature of each of these activities allows for automation via machines. Rather, our humanity comes from our ability to make practical and ethical decisions under conditions of uncertainty, and in ways that ultimately make both episteme and techne possible.

Bots don’t care if you live or die

The best approaches to automation don’t automate everything. Wherever processes are automated, exceptions are bound to happen. And when those exceptions happen, it’s important that humans are able to address them…especially when human lives are at stake.

When too much is automated, humans lose skill and vigilance. They become less responsive and less capable when things go badly. It’s for this reason that responsibly designed airplanes automate less than they could, and in fact intentionally build friction into their processes.

A bot doesn’t care if we live or die, succeed or fail. That’s why a robust approach to hyperautomation must consider not just whether a process CAN be automated, but also whether, and to what extent, it SHOULD be.
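(What might that look like in practice? Here’s a minimal sketch of an escalation policy, with a hypothetical handle routine and a confidence threshold of my own invention; the point is the shape of the decision, not any real product’s API.)

```python
# Automate less than you could: route routine cases through automation,
# escalate exceptions and anything safety-critical to a human.
# Field names and the 0.90 threshold are hypothetical.
def handle(case: dict) -> str:
    if case.get("safety_critical", False):
        return "escalate_to_human"  # never fully automate life-and-death calls
    if case.get("confidence", 0.0) < 0.90:
        return "escalate_to_human"  # exceptions get a responsible human
    return "automate"

print(handle({"confidence": 0.97}))                           # automate
print(handle({"confidence": 0.97, "safety_critical": True}))  # escalate_to_human
```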

Are robots slaves? On the contemporary relevance of Čapek’s R.U.R.

  • Written in 1920, R.U.R. (Rossum’s Universal Robots) by Karel Čapek is best known for having coined the term ‘robot.’ Although derided by many (including Isaac Asimov, who called the play ‘terribly bad’), it anticipates and responds to an important argument that continues to be used to justify automation projects today.
  • The most common argument for automation (one that is used by almost every vendor) is not new. It dates back to Aristotle, who used the same logic to justify using slaves, women, and children in similar ways.
  • The most important contribution of Čapek’s play lies not just in the term it coins, but in the work the term does. By deliberately connecting automation with Aristotelian slavery, and then viewing the results through a pragmatic lens, Čapek challenges us to consider the consequences of a technology-centered approach to automation and to ask whether a more human approach is possible.

R.U.R. (Rossum’s Universal Robots) is … strange. What begins with a factory tour and a bizarre marriage proposal eventually concludes with the end of humanity and a new robot Adam and Eve.

Many have dismissed the work as an historical curiosity, whose only contribution was the term ‘robot.’ But that’s not entirely fair. Indeed, in spite of the fact that Isaac Asimov called the play ‘terribly bad,’ his famous three laws of robotics are in many ways a direct response to the issues anticipated by Karel Čapek.

A word on Čapek. He earned a doctorate in philosophy in 1915. Because of spinal problems, he was unable to serve in WWI, and instead began his career as a journalist. As a philosopher and observer of the war in Europe, he was a pragmatist (in the philosophical sense) and a critic of nationalism, totalitarianism, and consumerism. When evaluating these ideologies, as well as other popular philosophical positions of the time like rationalism and positivism, his approach was to ask not whether something was true, but whether it worked. And this is exactly the approach we see performed in R.U.R.

As is often observed, the word ‘robot’ is derived from the Czech word robota. Although some writers suggest that robota merely refers to ‘hard work,’ it more accurately implies the kind of hard work that would be done by obligation, like that of a serf or slave.

The relationship between robots and slaves is surely intentional on Čapek’s part, and I would argue that it is intended to signal that we should think about robots in the same way as Aristotle famously described the function of slaves, women, and children: as bearing responsibility for the burdensome work related to sustaining bare life in order that (male) citizens might occupy themselves with a more elevated life of the mind. Indeed, this is the exact position taken by the character Domin in the first act of the play:

But in ten years Rossum’s Universal Robots will produce so much corn, so much cloth, so much everything, that things will be practically without price. There will be no poverty. All work will be done by living machines. Everybody will be free from worry and liberated from the degradation of labor. Everybody will live only to perfect himself.

At the center of Čapek’s play is a criticism of this belief, of this longing for a kind of guilt-free slavery capable of freeing us from burdensome work that we believe is somehow beneath us. It is a dramatization of the consequences of such a belief, were it to be fully realized.

What happens when humans get exactly what they ask for? In just 10 years (an inevitable timeline that is accelerated because a robot is given ‘a soul’), every human except for one stops working. They become superfluous relative to a productive system that values efficiency at all costs. They cease to be productive in every way, even to the point of losing their ability to procreate. They become so dependent on robots that they can’t bear the thought of living without them, even as this dependence is clearly leading to the destruction of the human race.

DR. GALL: You see, so many Robots are being manufactured that people are becoming superfluous; man is really a survival. But that he should begin to die out, after a paltry thirty years of competition. That’s the awful part of it. You might almost think that nature was offended at the manufacture of the Robots. All the universities are sending in long petitions to restrict their production. Otherwise, they say, mankind will become extinct through lack of fertility. But the R.U.R. shareholders, of course, won’t hear of it. All the governments, on the other hand, are clamoring for an increase in production, to raise the standards of their armies. And all the manufacturers in the world are ordering Robots like mad.

HELENA: And has no one demanded that the manufacture should cease altogether?

DR. GALL: No one has the courage.

HELENA: Courage!

DR. GALL: People would stone him to death. You see, after all, it’s more convenient to get your work done by the Robots.

Stepping outside of the play itself, Čapek’s argument seems to go something like this:

  • Many believe that human beings can only realize their true potential if they are liberated from the kind of work or drudgery that is necessary to sustain bare life. Human slaves are obviously unethical. Robots represent an opportunity to achieve all the benefits of slaves without the moral problems that accompany slaves in a more traditional sense.
  • But in practice, when liberated from work, human beings do not actually dedicate themselves to a life in accord with reason or to the contemplation of the Good. Instead, they are wont to squander freedom in pure leisure.
  • And a life of pure leisure is at odds with the values of efficiency and productivity that are essential to achieving such a life.
  • Hence, actually realizing the Aristotelian vision of ‘better living through slavery’ would place us in a contradiction, according to which we are forced to absolutely value both work and leisure at the same time. And because absolute leisure is not sustainable in itself, work (and those who perform it) is the only value that could possibly survive.

What can we who work in artificial intelligence and other forms of automation learn from R.U.R.? Even the most cursory look at the industry landscape will show that Aristotle’s dream of ‘better living through slavery’ is alive and well. Almost every automation company makes some version of the claim that their technology increases efficiency by taking care of the stuff that human beings suck at and hate so they can focus on the stuff they’re good at and like. But in the vast majority of cases, these companies are more fundamentally driven by a technology-centered rather than human-centered view of work, making arguments that are not fundamentally different from the one made by Fabry, that “One Robot can replace two and a half workmen. The human machine, Miss Glory, was terribly imperfect. It had to be removed sooner or later.”

Čapek doesn’t help us by proposing an alternative relationship to robots that might be more sustainable. But that’s not really the point. The point, and the greatest value of his work in my opinion, is that it forces us to think carefully about what a human-centered approach to work and automation might look like. In reading R.U.R. we are forced to acknowledge that human-centered technology can’t mean freeing humans from work, because there’s something about work itself that is an inextricable part of what it means to be human. Čapek asks us to be more nuanced in how we view the relationship between human beings and technology, and to carefully consider how technology might complement work rather than replace it entirely. And it is here, in this call for nuance in support of a truly human vision of technology, that R.U.R. is as relevant today as it was when it was first performed in 1921.