Ununderstanding

Intellectual Work in the Age of AI

He had become so caught up in building sentences that he had almost forgotten the barbaric days when thinking was like a splash of colour landing on a page.
– Edward St. Aubyn

My dog and I walk every day. When she smells something interesting, everything else disappears. Seventy-five pounds of maniacal focus circling and pawing and rolling. She drops her chest low to the ground, expecting the tug of her leash. She knows this drives me crazy. But she also knows that my will to keep walking is weaker than her will to keep smelling. As she circles and paws and rolls, I wait. This is our stilted, arhythmic waltz.

She loves eating debris of all kinds. I usually check to make sure there are no obvious hazards while she’s rooting around in something. One winter morning, I recall a special kind of frustration welling up. The wind and the cold had leached out my good nature. She was smelling what seemed like nothing at all. I got close and mumbled “what could you possibly be smelling?” over hard, bare earth. She glanced up, assessing how much patience I had left, and returned to her work. Walking home, between sharp breaths, I felt stupid. If she could talk, in that moment she would have said: “How could I know until I smell it?”

I’m not the first person to notice their pet’s modes of engagement with the world. But I’ve been thinking about it again after reading research published by METR. They studied developer productivity with AI tools and discovered that developers took 19% longer to complete tasks when using AI assistance. More puzzling still was the gap in perceived productivity. Developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed it had accelerated their work by 20%.1

The productivity metrics surprised me less than the cognitive disconnect in perceived productivity. I suspect these findings are a symptom of something deeper shifting underfoot. AI tools bring promises of faster code generation, faster fact-finding, and deeper engagement with dozens of sources you wouldn’t have had time to read. In short, a more streamlined, productive information apparatus. But in this move toward higher throughput, I think we are ceding confrontation with uncertainty. We trade the difficulty of being totally lost for something shallower. In that motion, we lose something vital. So what happens when we outsource uncertainty?

I’m an AI optimist. This generation of tools is a step change in the leverage I get learning and exploring topics I know and topics I don’t. Most of my usage follows two patterns: educational dialogue (“Help me understand this mathematical notation”) and research synthesis (“Produce a report on the following topic”). It’s run-of-the-mill stuff, but the latitude feels remarkable. Yesterday I cared about tornadoes, today lobsters. There is virtually no area of the map that feels unseeable, and the avenues of creativity that opens up stretch out in every direction. At any moment, I have a digital mediator ready to contort into the shape of whatever answer I need, granting carte blanche permission to be privately stupid about any subject. This is the stuff of fantasy.

I’ve spent lots of time discussing this essay with Claude. Reflecting on this creates an obvious tension: what is there left to say about what we lose when we use AI tools to think, written while using an AI tool to help me think? The computers won, and perhaps I am not so smart after all. But for every moment of genuine creative ignition I’ve had, there are heaps of dead-end chats and deep research reports collecting dust. All my lazy attempts to learn about something that turned out to be a fleeting interest have failed. Using language models to engage with a topic can feel productive, but I don’t feel ownership over a deep research report where I skimmed and clicked a couple of references. The code generated for a quick automation is valuable because it saves me a few minutes, not because it deepens my understanding of the language itself. The hard work is still hard, and when I cede the collecting and cross-checking and verifying, I’m left with something that feels alien and hollow in comparison.

A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.
– James Carse

I think we’re attempting to collapse infinite games into finite ones. Finite games have clear endpoints, and uncertainty is friction to be eliminated as efficiently as possible. This happens to be the mode where AI tools excel. They are quick to carve out and chop up uncertainty. Here is your summary of the history of pistachios, job done. Infinite games require sustained engagement with not-knowing. The uncertainty is the game. When you’re genuinely learning something new, you can’t know in advance what you’ll discover or how your thinking will change. The process of gradual knowing builds cognitive capacity in ways that immediate answers cannot. Having the answers in the back of the book does not make the task of integrating the reasoning any less of a challenge, or any less rewarding.

My dog is not trying to win at smelling; she is following her curiosity. She’s letting uncertainty unfold naturally. She’s not engaging in something circumscribed or conditional, but in something defined wholly by what she is: a stubborn lab with an active nose. The world churns restlessly around the same oak trees, and she likes to ask the same simple questions. There is no dissatisfaction with the inefficiency.

We seem to be losing this capacity because our tools are converting intellectual infinite games into finite ones. Our patience to wade through wrong turns and minutiae is wearing thin. We rarely tolerate not-knowing. We default to solutions over wonder. Knowing that at a moment’s notice we can solve the puzzle is proving difficult to ignore, but there is no strain or heartburn in it. The more we opt out of the discomfort, the more we erode the skills needed to produce real capability. The process of sitting with confusion and gradually building understanding reinforces that “I am able” in a way that instant answers don’t. I see this sentiment echoed in a recent Nature paper.

“Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner. By writing it down, we can sort years of research, data and analysis into an actual story, thereby identifying our main message and the influence of our work.”2

Coexisting with uncertainty is fundamental to personal growth. Intellectual breakthroughs require sustained engagement with not-knowing. Deep relationships require tolerance for ambiguity. Creative works emerge from the struggle with branching possibilities. But these capacities atrophy without use. Once we lose them, we lose access to entire categories of human experience, and we’ve only just begun to see what this means when we extend this reasoning beyond knowledge work. Anthropic reports that a small but noticeable portion of their traffic has been conversations about interpersonal advice, psychotherapy or counseling, companionship, and romantic or sexual roleplay.3 Grok’s new avatar Ani is an opinion on this trend. Our lived experience is beginning to be shaped by LLMs. Unfortunately, there is no simple answer for living well.

The solution can’t be to abandon AI tools, because they provide real value. But we need to bristle against the temptation to collapse everything we care about into call-and-response simplicity. Some uncertainties deserve our full attention. Some games are worth playing slowly, with genuine curiosity about what might unfold along the way. We should remember the joy of asking “What could this possibly be?”, and then take the time to find out.
