Ununderstanding

Intellectual Work in the Age of AI

He had become so caught up in building sentences that he had almost forgotten the barbaric days when thinking was like a splash of colour landing on a page.
– Edward St. Aubyn

My dog and I walk every day. When she catches a scent, everything else disappears. Seventy-five pounds of maniacal focus, circling and pawing and rolling. She drops her chest low to the ground, expecting the tug of her leash. She knows this drives me insane. But she also knows that my will to keep walking is weaker than her will to keep smelling. As she circles and paws and rolls, I wait. This is our stilted, arrhythmic waltz.

Among my family of veterinarians, she has a storied history of eating debris. I like to check that there are no obvious gastrointestinal hazards while she’s rooting around in something. I recall a special kind of frustration welling up one winter morning. The wind and the cold had leached out my good nature. She was smelling what seemed like nothing at all. I got close and mumbled “What could you possibly be smelling?” over hard, bare earth. She glanced up, assessing how much patience I had left, and returned to her work. Walking home, through sharp breaths, I felt stupid. If she could talk, in that moment she would have said: “How could I know until I smell it?”

I’m not the first person to notice their pet’s modes of engagement with the world. But I’ve been thinking about it again after reading research published by METR. They studied developer productivity with AI tools and discovered something unexpected: developers took 19% longer to complete tasks when using AI assistance. More puzzling still was the gap in perceived productivity. Developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed it had accelerated their work by 20%.1

The productivity metrics surprised me less than the cognitive disconnect in perceived productivity. I suspect these findings are a symptom of something deeper shifting underfoot. AI tools bring promises of faster code generation, faster fact-finding, and deeper engagement with dozens of sources you wouldn’t have had time to read. In short, a more streamlined, productive information apparatus. But what we lose in this shift towards higher throughput is our innate ability to confront uncertainty. We are trading the difficulty of being totally lost for something shallower: the answer key. And I think we are becoming increasingly blind to this transaction. A gallon of water for a pound of flesh. This raises a simple question: what happens when we outsource uncertainty?

I’m an AI optimist. This generation of tools is a step change in the leverage I get learning and exploring topics I know and topics I don’t. Most of my usage follows two patterns: educational dialogue (“Help me understand this mathematical notation”) and research synthesis (“Explain the FIA’s CFD regulations and how CoreWeave fits into Formula 1’s computational constraints”). This latitude feels miraculous. Yesterday I cared about tornadoes, today lobsters. There is virtually no area of the map that feels unseeable, and the avenues of creativity this opens up stretch out in every direction. At any moment, I have a digital mediator ready to contort into the shape of whatever answer I need, granting carte blanche permission to be privately stupid about any subject. This is the stuff of fantasy.

But here’s what I’ve noticed. I’ve spent hours discussing this essay with Claude. Reflecting on this creates an obvious tension: what is there left to say about what we lose when we use AI tools to think, while I’m using an AI tool to help me think? The computers won, and perhaps I am not so smart after all. But for every moment of inspiration, every moment of genuine creative ignition that I’ve had, there are dozens of dead-end chats and deep research reports collecting dust. All my lazy attempts to spin up on something that turned out to be a fleeting interest have failed. Using language models to engage with uncertainty can feel productive, but I don’t feel ownership over a deep research report that I skimmed and clicked through a couple of references from. The code generated for a quick automation is valuable because it saves me a few minutes, not because it deepens my understanding of the language it was written in. The hard work is still hard, and when I cede all of the collecting and cross-checking and verifying, I’m left with something that feels hollow and alien in comparison.

A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play.
– James Carse

I think we’re beginning to mistake finite for infinite games in intellectual work. Finite games have clear endpoints. Get the answer, solve the problem, complete the task. Uncertainty is friction to be eliminated as efficiently as possible. This happens to be the mode where AI tools excel. They are quick to carve out and chop up uncertainty. Here is your summary of the history of pistachios, job done. But infinite games require sustained engagement with not-knowing. The uncertainty is the game. When you’re genuinely learning something new, you can’t know in advance what you’ll discover, what connections you’ll make, or how your thinking will change. The process of gradual knowing builds cognitive capacity in ways that immediate answers cannot. Having the answers in the back of the book does not make the task of integrating the reasoning any less of a challenge.

My dog understands this. She’s not trying to win at smelling. She’s engaging with pure, sensate curiosity, letting uncertainty unfold naturally. She’s playing an infinite game with the world. The world churns restlessly around the same oak trees, and she likes to ask the same simple questions. There is no dissatisfaction with the inefficiency.

We seem to be losing this capacity. Our tools are converting intellectual infinite games into finite ones. Instead of “I wonder how this works and I’m going to figure it out,” we default to “I need to know how this works, let me ask ChatGPT.” The first builds cognitive muscle; the second just transfers information. When we press the solve-the-puzzle button, what we get is a version of the solved puzzle. But there is no strain, no deliberation, and no heartburn. The more we opt out of the discomfort of the struggle to understand, the more we erode the skills needed to rewire our brains and produce real capability.

This isn’t about productivity or efficiency. It’s about what happens to our cognition when we can no longer tolerate not-knowing. The process of wrestling with uncertainty, sitting with confusion, following dead ends, and gradually building understanding creates reinforcing neural circuitry in a way that instant answers cannot. I see this sentiment echoed in a recent Nature paper:

“Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner. By writing it down, we can sort years of research, data and analysis into an actual story, thereby identifying our main message and the influence of our work.”2

We’re witnessing a shift in how we address the unknown. The capacity to sit with uncertainty has been fundamental to human discovery, creativity, and growth. Breakthroughs require sustained engagement with not-knowing. Every deep relationship requires tolerance for ambiguity. Every creative work emerges from productive struggle with unclear possibilities. But these capacities atrophy without use. Once we lose them, we lose access to entire categories of human experience. And we’ve only just begun to see what this means when we extend this reasoning beyond knowledge work. Anthropic reports that a small but noticeable portion of its traffic consists of conversations about interpersonal advice, psychotherapy or counseling, companionship, and romantic or sexual roleplay.3 Grok’s new avatar, Ani, is one opinion on where this trend is headed. Our lived experience is beginning to be shaped by LLMs. Unfortunately, there is no answer key for living well.

The solution can’t be to abandon AI tools. They have proven too powerful and too useful. The solution is learning to distinguish between uncertainty-avoiding and uncertainty-engaging uses of these technologies. Ask yourself: Am I using this tool to eliminate discomfort, or to engage with complexity more effectively? Am I rushing past the interesting smell, or am I taking time to investigate what I don’t yet understand? Some things are worth the discomfort of not-knowing. Some uncertainties deserve our full attention. Some games are worth playing slowly, carefully, with genuine curiosity about what we might discover along the way. The last thing we want to lose is our ability to ask: What could this possibly be? And then take the time to find out.

