A few new things I’m trying here. A droning 5th note, inspired by some Raga music that I’ve been listening to recently. Some ambient weirdness thanks to a Mood pedal from Chase Bliss (I know I’m just starting to scratch the surface). And some vocals from Vito Acconci’s “The American Gift” (1969).
I recently bought a Zoom Q2n-4k video recorder. I took it for its maiden voyage today.
I would change the angle and lighting (although there are some settings on the unit to account for different lighting scenarios…settings that I did not use). There’s a line in, but I used the internal mic, which is great (as one might expect).
Here’s what I did:
I’ve liked this song from Zedd for a long time. This part, which basically repeats through the whole song, was written as a piano part. The only tricky bit is the little thumb-over gnarly bit that would make my classical guitar teacher cringe. The position of my elbow would make him cringe too.
Something I’ve been twiddling around with for the past week. Definitely not during Zoom meetings.
Basically just a major scale.
Does ChatGPT ‘understand’?
In discussions about generative AI, I frequently hear folks state or imply that it’s fundamentally different from human cognition because AI doesn’t ‘understand.’
However, I’m not convinced that we actually understand ‘understanding.’ And I’m not entirely convinced that I — or any other human — ‘understand’ in the ways we want to think about it (and yes, I realize that sounds paradoxical).
I hear people criticize ChatGPT for being nothing more than elaborate auto-complete. But that’s kind of what language is: it streams. It doesn’t chunk.
Yes, language demands coherence (because inconsistency makes us anxious…which is a feeling), but that coherence frequently comes in the form of what Miller refers to as anacoluthonic lies…coherence is not something that precedes an utterance, but is often something that arrives THROUGH an utterance. Intentionality is something we ascribe to language IN language only after we speak. It is a feeling. It’s an effect of speech rather than its cause.
It’s kind of a pathos vs logos thing: we are feeling things masquerading as thinking things.
I don’t know that we can claim superiority over machines in terms of our understanding (which is just patterns of thought — like reason — applied to memory as far as I can tell), which is why our ‘understanding’ of things changes over time as language recursively consumes itself. Where we DO differ from AI (although not from many non-human animals) is in our primordial capacity for emotion…the pathos bit.
Grandma Harfield’s Marmalade Recipe
My father’s mother wasn’t particularly known for her cooking. Very British. Usually pretty bland. But I remember her marmalade with great fondness. When I was a child, I thought ALL marmalade was gross (too bitter). Now, however, it is against my Grandma’s recipe that I judge all others.
- 8 Seville oranges
- 2 lemons
- 6 lbs of sugar
- 9 cups of water
Cut oranges in half and remove seeds (pips) and juice. Grind the rind and add to the juice. Cover with 9 cups of water. Soak the pips in 1 cup of water overnight.
In the morning, boil the orange mixture until the skins are tender, about 45 minutes.
Boil pips separately, strain and add the water to the pulp.
Add sugar to the pulp and boil until it gels (about 1.5 hours). Pour into jars and seal.
You’re Going to Die: A Children’s Story
By Timothy Furstnau (2000). Film by Dennis Palazzolo and narrated by Vito Acconci.
One day, you’re going to die.
Your life will just end, right then and there. You will be no more.
It probably won’t be pretty like in the movies or in fairytales. Maybe something extraordinary will happen, but probably not. Sometimes people have enough energy to say or do something meaningful right before they die. And that’s nice. But that is uncommon. Usually, people just die.
And you’re going to die, too.
Some people will say nice things to make you feel comfortable before you die. Others will tell you wild stories that say you’re not actually going to die. But those people are only nice, not honest. Because you really are going to die.
There are many ways to think about dying, different stories to tell and different things to think dying is like.
But death happens to people in the same way, no matter which way they think about it.
Everyone dies and is no more.
Because we are so good at making up stories and believing them, you may even begin to experience dying in a way that may fit one of your stories.
And that is okay.
But then you’ll die and it won’t matter what story you had. When you die, you won’t be able to see if your story was right or not, because there will be no story and no person to believe or compare stories.
You will be dead.
You won’t go anywhere and you will not even stay in the same place, because there will be no “you”.
There won’t be anything and there won’t be even nothing, because you won’t be there to feel that there is nothing.
There won’t be forever because you won’t be there to feel there’s forever.
You will be dead.
And you won’t be able to think, “Gee, I’m dead,” because you can’t even think or feel or be anything besides dead when you’re dead.
Death can happen at any time.
Sometimes little babies die. Sometimes mommies and daddies die. And when people get too old, after they have had a lot of life, they die.
And you’re going to die, too.
Sometimes it hurts when you die, and other times people don’t feel a thing. We only know what people say and do right before they die.
After you die, you won’t feel anything. Not even pain.
But people who are still alive when you die might hurt because you are gone. And that is ok.
People love other people and it usually hurts when people we love die. We even comfort ourselves with those stories that the dead person is not really dead. And that is ok too.
But of course, everyone dies.
And you will, too.
When you say, “I want this” or “I feel this”, you are talking about yourself… You might think yourself different from the rest of the world or you might think you are just a little part of the world. But when you die, yourself dies and so does your view of the world. So you won’t have a world or a self or any thoughts about either because you just won’t exist.
When you die, you won’t feel sad anymore, you won’t worry anymore, you won’t care about anything, or want anything. You won’t even enjoy being dead, because you won’t be anything, only dead.
That is why you should love life.
There are lots of things in life that seem bad, but since life only happens once, and there is only one “you”, love life while you can. No matter what happens.
If you feel sad, at least you can feel sad. If you are worried, at least you can be worried. You exist. You are alive.
If you love life this way, when it comes time to die, you will be happy because death is the perfect end to life. Life is only good because death ends it.
So live and love life…
Because you don’t even know how, when or why, but since you’re alive…
You’re going to die.
The Protestant Ethic and the Spirit of Capitalism
I am re-reading Max Weber’s classic sociological work “The Protestant Ethic and the Spirit of Capitalism.”
The main thesis (as I remember it from grad school) is that the Reformation represented a splintering of worldview such that traditionally shared belief systems about God, humans, and the role of human beings relative to the rest of nature could no longer be taken for granted. As a result, Protestants (guided by the notion of a ‘priesthood of all believers’) began searching for external validation of the extent to which their beliefs and actions were true and good. And the form that external validation took was the creation of wealth.
Fast forward to today, the effects of this shift are felt more than ever before. The proliferation of worldviews that has taken place since the Reformation is tremendous. Everyone is looking for validation. I believe that this need for external validation is at the root of how important social media has become since its invention. And it is most certainly this need for external validation that is responsible for most economic activity in the West.
We want meaning. We want to be worthy of salvation. And we look for external rewards as a way of signaling our value.
But all of these signals are ephemeral. The money I make today provides me with validation in the moment, but what about the next? The attention I get on social media provides me with the relief of approval now, but what about later?
We are all striving for evidence of our salvation, even if we don’t have a clear or shared conception of what salvation is. The popularity of superhero movies is a symptom of this widespread neurosis. We all want to be special. We all want to be chosen. We all want proof that ours is a life worth living.
The best and worst career advice I ever received
I received this piece of advice from a mentor early in my career. It stuck.
What he meant to convey was that complacence does not breed success. A ‘fighting for your life’ mindset ensures productivity because you are constantly working to prove your value by adding value to the organization you serve.
This advice was accompanied by the following:
Until recently I didn’t make the connection between these two pieces of advice. I assumed that the former was a mindset, and that the latter was simply a tactic for achieving rapid career progression (since it is usually easier to earn more pay and more title faster by changing employers rather than staying with the same one).
But the two pieces of advice are not unrelated. In fact, the former makes the latter NECESSARY.
Living in constant fear is emotionally exhausting. It’s hard work waking up every morning hoping that you’ll prove your worth, and going to sleep each night (if you CAN sleep) questioning whether your efforts have been enough.
Living in constant fear of failure is also incredibly damaging to work relationships. It means holding others to the same impossible standards to which you hold yourself. It leads to a lack of empathy as results trump all else. And it leads to a micro-management leadership style that stifles innovation on the part of others.
If I’m fighting for my life, I’m going to do everything I can to remove risk associated with reliance on others. Fear leads to a command and control approach to leadership that we have come to learn is actually pretty ineffective.
After about two years of living in fear at an organization, you are going to hit a wall. You’re going to burn out. More than that, you’re going to burn others out as well.
After two years, you’re exhausted and in desperate need of a reset. And the fear of being fired has built up to the point that you are CERTAIN that it’ll happen any day now, and so you take things into your own hands. You start planning your next move.
And after two years, you’ve done so much damage to your relationships with colleagues and burned so many bridges that any future you once felt you had now feels foreclosed.
So every two years you change jobs. And every two years that change in jobs comes with a better job title and bigger salary. My mentor’s advice pays off. But it’s also toxic and unsustainable.
How do you break the cycle?
First, stop doing that. Recognize that the toxic effects of your fear of being fired are actually making it MORE LIKELY that you will be fired. Not because of poor performance, but because you have become cancerous: hyper-productive individually but at the expense of the health of those around you and of your organization as a whole. At some point the only way for an organization to preserve itself is to excise you.
At the end of the day, you don’t have control over your destiny at a business. What you DO have control over is the extent to which your behavior is harmful, and the extent to which you are likely to be fired because you’re an asshole.
Second, rethink your role. Is your purpose to preserve yourself? Or is it to support the overall health of your organization? By rethinking your role in terms of the latter, you create a space of empathy. You become concerned with the interests and feelings of others. You listen rather than barking orders. And you become invested in the success of others as the NUMBER ONE measure of your own success.
The sentience of Google’s LaMDA AI has been grossly exaggerated
On Artificial Intelligence and Intellectual Virtue
It is highly unlikely that Google’s LaMDA AI chat bot has become sentient, which is to say ‘self-aware.’ What is more likely the case is that it simply represents a sophisticated simulation.
But how would we know for certain? At a time when the philosophical problem of other minds has yet to be solved, and at a time when our understanding of the cognitive and emotional states of even animals is at best incomplete, we simply lack the tools necessary to determine the sentience of ANYTHING except by inference.
That which is perceived as real will be real in its consequences. Most behave as if other humans are self-aware. And how a person treats an animal is largely dependent on the extent to which they believe that animal has a mind, is self-aware, and is capable of things like empathy.
Google engineer Blake Lemoine is unfortunately — and dangerously — misguided in his claims that LaMDA has become sentient. His mistake comes as a result of conflating two types of knowledge: episteme, which involves the ability to reason from first principles, and phronesis, which involves ethical decision-making, or decisions about right action under conditions of uncertainty.
The history of computers has taught us that there is nothing uniquely human about episteme, because it simply involves the application of logical functions to a set of propositions in order to derive valid conclusions. Episteme is about applying rules to facts (which may or may not also be true), and that is something that a computer does all day long.
A disembodied chat bot, however, cannot be sentient because it does not sense. Because it does not sense, it may have an abstract conception of something like pain, but it is not something that it can experience. The same applies to other important concepts like goodness, love, death, and responsibility. It certainly does not feel empathy.
In other words, until an AI is sentient — having the ability to experience sensations — it cannot be sentient — being self-aware. In the absence of an ability to experience the world around it, there is no sense of responsibility. And all acts of cognition are reduced to episteme. (Even probabilistic judgements made under conditions of uncertainty are reduced to episteme, since they are merely the result of applying rules to facts). This is a major reason why we should not trust an AI to make ethical decisions: computers are never uncertain, and they are never responsible for their calculations.
Phronesis (also known as ‘prudence’ or ‘practical wisdom’) involves far more than applying a set of rules to a set of conditions (although this is certainly what fundamentalist religions try to do). It involves struggle. It involves uncertainty. And it involves personal responsibility.
For example, when my wife and I had to make the decision to put our dog to sleep a few weeks ago, the decision-making process did not involve reasoning from first principles. It involved empathy. It involved struggle. And it involved an understanding that failure to make a decision would itself be a decision for which we were responsible.
Phronesis is hard. It involves struggle because it is something that is only possible for sentient beings interacting in empathy alongside other sentient beings. As Frans de Waal reminds us, empathy is not abstract. It is lived. It is embodied.
If phronesis is only possible for things that sense, feel, act, and are personally responsible in the world (i.e. sentient beings), and a disembodied chat bot like Google’s is not capable of sensation or meaningful activity, then we cannot consider it sentient in the way that Blake Lemoine would have us believe it is. Instead, it is an opportunity for us to test our assumptions about what it means to be human, and to understand that our humanity lies uniquely in neither our ability to calculate (episteme) nor our ability to manufacture (techne), because the rule-based nature of each of these activities allows for automation via machines. Instead, our humanity comes from our ability to make practical and ethical decisions under conditions of uncertainty, and in ways that ultimately make both episteme and techne possible.
Bots don’t care if you live or die
The best approaches to automation don’t automate everything. Where processes are automated, exceptions are bound to happen. And when those exceptions happen it’s important that humans are able to address them…especially when human lives are at stake.
When too much is automated, humans lose skill and vigilance. They become less responsive and less capable when things go badly. It’s for this reason that responsibly-built airplanes automate less than they can, and in fact intentionally build friction into their processes.
A bot doesn’t care if we live or die, succeed or fail. That’s why a robust approach to hyperautomation must consider, not just whether a process CAN be automated, but also whether and the extent to which it SHOULD be.
Are robots slaves? On the contemporary relevance of Čapek’s R.U.R.
- Written in 1920, R.U.R. (Rossum’s Universal Robots) by Karel Čapek is most well-known for having coined the term ‘robot.’ Although derided by many (including Isaac Asimov, who called the play ‘terribly bad’), it anticipates and responds to an important argument that continues to be used to justify automation projects today.
- The most common argument for automation (one that is used by almost every vendor) is not new. It dates back to Aristotle who used the same logic to justify using slaves, women, and children in similar ways.
- The most important contribution of Čapek’s play is not just that it coins a term, but in the work the term does. By deliberately connecting automation with Aristotelian slavery, and then viewing the results through a pragmatic lens, Čapek challenges us to consider the consequences of a technology-centered approach to automation and consider whether a more human approach is possible.
R.U.R. (Rossum’s Universal Robots) is … strange. What begins with a factory tour and a bizarre marriage proposal eventually concludes with the end of humanity and a new robot Adam and Eve.
Many have dismissed the work as an historical curiosity, whose only contribution was the term ‘robot.’ But that’s not entirely fair. Indeed, in spite of the fact that Isaac Asimov called the play ‘terribly bad,’ his famous three laws of robotics are in many ways a direct response to the issues anticipated by Karel Čapek.
A word on Čapek. He earned a doctorate in Philosophy in 1915. Because of spinal problems, he was unable to participate in WWI, and instead began his career as a journalist. As a philosopher and observer of the war in Europe, he was a pragmatist (in the philosophical sense), and a critic of nationalism, totalitarianism, and consumerism. When evaluating these ideologies, as well as other popular philosophical positions of the time like rationalism and positivism, his approach was to ask, not whether something was true, but whether it worked. And this is exactly the approach we see performed in R.U.R.
As is often observed, the word ‘robot’ is derived from the Czech word robota. Although some writers suggest that robota merely refers to ‘hard work,’ it more accurately implies the kind of hard work that would be done by obligation, like that of a serf or slave.
The relationship between robots and slaves is surely intentional on Čapek’s part, and I would argue that it is intended to signal that we should think about robots in the same way as Aristotle famously described the function of slaves, women, and children: as bearing responsibility for the burdensome work related to sustaining bare life in order that (male) citizens might occupy themselves with a more elevated life of the mind. Indeed, this is the exact position taken by the character Domin in the first act of the play:
But in ten years Rossum’s Universal Robots will produce so much corn, so much cloth, so much everything, that things will be practically without price. There will be no poverty. All work will be done by living machines. Everybody will be free from worry and liberated from the denigration of labor. Everybody will live only to perfect himself.
At the center of Čapek’s play is a criticism of this belief, of this longing for a kind of guilt-free slavery capable of freeing us from burdensome work that we believe is somehow beneath us. It is a dramatization of the consequences of such a belief, were it to be fully realized.
What happens when humans get exactly what they ask for? In just 10 years (an inevitable timeline that is accelerated because a robot is given ‘a soul’), every human except for one stops working. They become superfluous relative to a productive system that values efficiency at all costs. They cease to be productive in every way, even to the point of losing their ability to procreate. They become so dependent on robots that they can’t bear the thought of living without them, even as this dependence is clearly leading to the destruction of the human race.
DR. GALL: You see, so many Robots are being manufactured that people are becoming superfluous; man is really a survival. But that he should begin to die out, after a paltry thirty years of competition. That’s the awful part of it. You might almost think that nature was offended at the manufacture of the Robots. All the universities are sending in long petitions to restrict their production. Otherwise, they say, mankind will become extinct through lack of fertility. But the R.U.R. shareholders, of course, won’t hear of it. All the governments, on the other hand, are clamoring for an increase in production, to raise the standards of their armies. And all the manufacturers in the world are ordering Robots like mad.
HELENA: And has no one demanded that the manufacture should cease altogether?
DR. GALL: No one has the courage.
DR. GALL: People would stone him to death. You see, after all, it’s more convenient to get your work done by the Robots.
Stepping outside of the play itself, Čapek’s argument seems to go something like this:
- Many believe that human beings can only realize their true potential if they are liberated from the kind of work or drudgery that is necessary to sustain bare life. Human slaves are obviously unethical. Robots represent an opportunity to achieve all the benefits of slaves without the moral problems that accompany slaves in a more traditional sense.
- But in practice, when liberated from work, human beings do not actually dedicate themselves to a life in accord with reason or to the contemplation of the Good. Instead, they are wont to squander freedom in pure leisure.
- And a life of pure leisure is at odds with the values of efficiency and productivity that are essential to achieving such a life.
- Hence, actually realizing the Aristotelian vision of ‘better living through slavery’ would place us in a contradiction, according to which we are forced to absolutely value both work and leisure at the same time. And because absolute leisure is not sustainable in itself, work (and those who perform it) is the only value that could possibly survive.
What can we who work in artificial intelligence and other forms of automation learn from R.U.R.? Even the most cursory look at the industry landscape will show that Aristotle’s dream of ‘better living through slavery’ is alive and well. Almost every automation company makes some version of the claim that their technology increases efficiency by taking care of the stuff that human beings suck at and hate so they can focus on the stuff they’re good at and like. But in the vast majority of cases, these companies are more fundamentally driven by a technology-centered rather than human-centered view of work, making arguments that are not fundamentally different from the one made by Fabry, that “One Robot can replace two and a half workmen. The human machine, Miss Glory, was terribly imperfect. It had to be removed sooner or later.”
Čapek doesn’t help us by proposing an alternative relationship to robots that might be more sustainable. But that’s not really the point. The point, and the greatest value of his work in my opinion, is that it forces us to think carefully about what a human-centered approach to work and automation might look like. In reading R.U.R. we are forced to acknowledge that human-centered technology can’t mean freeing humans from work, because there’s something about work itself that is an inextricable part of what it means to be human. Čapek asks us to be more nuanced in how we view the relationship between human beings and technology, and to carefully consider how technology might complement work rather than replace it entirely. And it is here, in this call for nuance in support of a truly human vision of technology, that R.U.R. is as relevant today as it was when it was first performed in 1921.