Ununderstanding
Intellectual Work in the Age of AI
He had become so caught up in building sentences that he had almost forgotten the barbaric days when thinking was like a splash of colour landing on a page.
– Edward St. Aubyn
My dog and I walk every day. When she catches a scent, everything else disappears. Seventy-five pounds of maniacal focus, circling and pawing and rolling. She drops her chest low to the ground, expecting the tug of her leash. She knows this drives me insane. She also knows that my will to keep walking is weaker than her will to keep smelling. As she circles and paws and rolls, I wait. This is our stilted, arrhythmic waltz.
To my family of veterinarians, she has a storied history of eating debris, so I like to make sure there are no obvious hazards while she’s rooting around in something. One winter morning, I recall a special kind of frustration welling up. The wind and the cold had leached out my good nature. She was smelling what seemed like nothing at all. I got close and mumbled “What could you possibly be smelling?” over hard, bare earth. She glanced up, assessing how much patience I had left, and returned to her work. Walking home, between sharp breaths, I felt stupid. If she could talk, in that moment she would have said: “How could I know until I smell it?”
I’m not the first person to notice their pet’s modes of engagement with the world. But I’ve been thinking about it again after reading research published by METR. They studied developer productivity with AI tools and discovered something unexpected: developers took 19% longer to complete tasks when using AI assistance. More puzzling still was the gap in perceived productivity. Developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed it had accelerated their work by 20%.1
The productivity metrics surprised me less than the cognitive disconnect behind them. I suspect these findings are a symptom of something deeper shifting underfoot. AI tools bring promises of faster code generation, faster fact-finding, and deeper engagement with dozens of sources you wouldn’t have had time to read. In short, a more streamlined, productive information apparatus. But what we lose in this shift towards higher throughput is our ability to confront uncertainty. We are trading the difficulty of being totally lost for something shallower: the answer key. And I think we are becoming increasingly blind to this transaction. This raises a simple question: what happens when we outsource uncertainty?
I’m an AI optimist. This generation of tools is a step change in the leverage I get learning and exploring, in topics I know and topics I don’t. Most of my usage follows two patterns: educational dialogue (“Help me understand this mathematical notation”) and research synthesis (“Produce a report on the following topic”). It’s run-of-the-mill stuff, but the latitude feels remarkable. Yesterday I cared about tornadoes, today lobsters. There is virtually no area of the map that feels unseeable, and the avenues of creativity this opens up stretch out in every direction. At any moment, I have a digital mediator ready to contort into the shape of whatever answer I need, granting me carte blanche to be privately stupid about any subject. This is the stuff of fantasy.
Here’s what I’ve noticed, though. I’ve spent hours discussing this essay with Claude. Reflecting on this creates an obvious tension: what is there left to say about what we lose when we use AI tools to think, while I’m using an AI tool to help me think? The computers won, and perhaps I am not so smart after all. But for every moment of inspiration, every moment of genuine creative ignition that I’ve had, there are dead-end chats and deep research reports collecting dust. All my lazy attempts to learn about something that turned out to be a fleeting interest have failed. Using language models to engage with a topic can feel productive, but I don’t feel ownership over a deep research report that I skimmed, clicking through a couple of references. The code generated for a quick automation is valuable because it saves me a few minutes, not because it deepens my understanding of the language it was written in. The hard work is still hard, and when I cede all of the collecting and cross-checking and verifying, I’m left with something that feels alien and hollow in comparison.
A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.
– James Carse
I think we’re beginning to mistake finite for infinite games in intellectual work. Finite games have clear endpoints: get the answer, solve the problem, complete the task. Uncertainty is friction to be eliminated as efficiently as possible. This happens to be the mode where AI tools excel. They are quick to carve out and chop up uncertainty. Here is your summary of the history of pistachios; job done. Infinite games, on the other hand, require sustained engagement with not-knowing. The uncertainty is the game. When you’re genuinely learning something new, you can’t know in advance what you’ll discover, what connections you’ll make, or how your thinking will change. The process of gradual knowing builds cognitive capacity in ways that immediate answers cannot. Having the answers in the back of the book does not make the task of integrating the reasoning any less of a challenge.
My dog understands this. She’s not trying to win at smelling. She’s engaging with pure curiosity. She’s letting uncertainty unfold naturally. She’s playing an infinite game: not one that is circumscribed or conditional, but one defined wholly by what she is, a stubborn Lab with an active nose. The world churns restlessly around the same oak trees, and she likes to ask the same simple questions. There is no dissatisfaction with the inefficiency.
We seem to be losing this capacity because our tools are converting intellectual infinite games into finite ones. Our patience to wade through wrong turns and minutiae is wearing thin. We rarely tolerate not-knowing. Instead of “I wonder how this works,” we default to “I need a solution.” The first builds cognitive muscle; the second just transfers information. Knowing that at a moment’s notice we can solve the puzzle is proving a difficult proposition to ignore, but in that shortcut there is no strain, no deliberation, and no heartburn. The more we opt out of the discomfort, the more we erode the skills needed to produce real capability. The process of sitting with confusion, following dead ends, and gradually building understanding reinforces that “I am able” in a way that instant answers cannot. I see this sentiment echoed in a recent Nature paper:
“Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner. By writing it down, we can sort years of research, data and analysis into an actual story, thereby identifying our main message and the influence of our work.”2
Our relationship with the unknown is changing. Coexisting with uncertainty is fundamental to growth. Breakthroughs require sustained engagement with not-knowing. Deep relationships require tolerance for ambiguity. Creative works emerge from productive struggle with unclear possibilities. But these capacities atrophy without use. Once we lose them, we lose access to entire categories of human experience, and we’ve only just begun to see what this means when we extend this reasoning beyond knowledge work. Anthropic reports that a small but noticeable portion of its traffic consists of conversations about interpersonal advice, psychotherapy or counseling, companionship, and romantic or sexual roleplay.3 Grok’s new avatar, Ani, is an opinion on this trend. Our lived experience is beginning to be shaped by LLMs. Unfortunately, there is no answer key for living well.
The solution can’t be to abandon AI tools; they have proven too useful. We instead need to learn to distinguish between uncertainty-avoiding and uncertainty-engaging uses of these technologies. Ask yourself: Am I using ChatGPT to skip ahead, or to dive deeper? Am I rushing past the smell? Some things are worth the discomfort of not-knowing. Some uncertainties deserve our full attention. Some games are worth playing slowly, carefully, with genuine curiosity about what we might discover along the way. The last thing we want to lose is our ability to ask: What could this possibly be? And then take the time to find out.