Does ChatGPT ‘understand’?

In discussions about generative AI, I frequently hear folks state or imply that it’s fundamentally different from human cognition because AI doesn’t ‘understand.’

However, I’m not convinced that we actually understand ‘understanding.’ And I’m not entirely convinced that I — or any other human — ‘understand’ in the ways we want to think about it (and yes, I realize that sounds paradoxical).

I hear people criticize ChatGPT for being nothing more than elaborate auto-complete. But that’s kind of what language is: it streams. It doesn’t chunk.

Yes, language demands coherence (because inconsistency makes us anxious…which is a feeling), but that coherence frequently takes the form of what Miller calls anacoluthonic lies. Coherence is not something that precedes an utterance; it often arrives THROUGH an utterance. Intentionality is something we ascribe to language IN language only after we speak. It is a feeling: an effect of speech rather than its cause.

It’s kind of a pathos vs logos thing: we are feeling things masquerading as thinking things.

I don’t know that we can claim superiority over machines in terms of our understanding (which, as far as I can tell, is just patterns of thought, like reason, applied to memory), and that may be why our ‘understanding’ of things changes over time as language recursively consumes itself. Where we DO differ from AI (though not from many non-human animals) is in our primordial capacity for emotion…the pathos bit.