Ununderstanding
AI and Uncertainty
My dog and I walk three times a day. When she smells something interesting, everything else disappears. Seventy-five pounds of maniacal focus, circling and pawing and rolling. She drops her chest low to the ground, expecting the tug of her leash. She knows this drives me crazy, but she also knows that my will to keep walking is weaker than her will to keep smelling. As she circles and paws and rolls, I wait. This is our stilted, arrhythmic waltz.
She loves to eat things off the ground. I check to make sure there are no obvious hazards while she’s rooting around in something. One winter morning a special kind of frustration welled up in me. The wind and the cold had leached out my good nature. She was smelling what seemed like nothing at all. I got close and mumbled “What could you possibly be smelling?” over hard, bare earth. She glanced up, assessing how much patience I had left, and returned to her work. Walking home, between sharp breaths, I felt stupid. If she could talk, in that moment she would have said: “How could I know until I smell it?”
I’m not the first person to notice their pet’s modes of engagement with the world, but I’ve been thinking about them again after reading research published by METR. They studied developer productivity with AI tools and discovered that developers took 19% longer to complete tasks when using AI assistance. More puzzling still was the gap in perceived productivity. Developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed it had accelerated their work by 20%.[1]
The productivity metrics surprised me less than the cognitive disconnect: developers felt faster even as they slowed down. I suspect these findings are a symptom of something deeper shifting underfoot. AI tools bring promises of faster code generation, faster fact-finding, and deeper engagement with sources you wouldn’t otherwise have made time for. In short, a more streamlined, productive information apparatus. But in this move towards higher throughput, I think we are ceding our confrontation with uncertainty. We trade the difficulty of being totally lost for something shallower. In that trade, we grow less capable.
This generation of AI tools is a step change in the leverage I get exploring topics new and old. Most of my usage follows two patterns: educational dialogue (“Help me understand this mathematical notation”) and research synthesis (“Produce a report on the following topic”). It’s run-of-the-mill, but the latitude feels remarkable. Yesterday I cared about tornadoes, today lobsters. No area of the map feels unseeable, and the avenues of creativity that opens up stretch out in every direction. At any moment I have a digital mediator ready to contort into the shape of whatever answer I need, granting me carte blanche to be privately stupid.
I’ve discussed this essay with Claude extensively. Reflecting on this creates an obvious tension: what is there left to say about what we lose by using AI tools to think, while using an AI tool to think? The computers won, and perhaps I am not so smart after all. But for every moment of genuine creative ignition that I’ve had, there are heaps of dead-end chats and deep research reports collecting dust. All my lazy attempts to learn about a fleeting interest have failed. Using language models to engage with information can feel productive, but I don’t feel ownership over a deep research report that I skimmed. The code generated for a quick automation is valuable because it saves me a few minutes, not because it deepens my understanding of the language or system. The hard work is still hard, and when I cede the collecting and cross-checking and verifying, I’m left with something that feels alien and hollow.
I think we are collapsing infinite games into finite ones. Finite games want clear end states and rules of engagement. AI tools are quick to carve out and chop up uncertainty, leaving us with a sense that the game is won: here is your summary of the history of pistachios, job done. Infinite games require sustained engagement with endless horizons. The uncertainty is the game. When you’re genuinely learning something new, you can’t know in advance what you’ll discover or how your thinking will change. The process of gradual knowing builds cognitive capacity in ways that immediate answers cannot. Having the answers in the back of the book does not make working through the reasoning any less of a challenge, or any less rewarding.
My dog is not trying to win at smelling; she is following her curiosity. She’s letting uncertainty unfold naturally. She’s not engaging in something circumscribed or conditional, but in something defined by what she is: a stubborn lab with an active nose. The world churns restlessly around the same oak trees, and she likes to ask the same simple questions. There is no dissatisfaction with the inefficiency.
We are losing this capacity because our tools compress intellectual infinite games into finite ones. Our patience to wade through wrong turns and minutiae wears thin. We rarely tolerate the great unknown, defaulting to solutions over wonder. The implicit knowledge that we can generate answers to vexing puzzles is proving difficult to ignore, and it comes without strain or heartburn. The more we opt out of the discomfort, the more we erode the skills that real capability is built on. Sitting with confusion and gradually building understanding reinforces that “I am able” in a way that instant answers don’t. I see this sentiment echoed in a recent Nature paper:
“Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner. By writing it down, we can sort years of research, data and analysis into an actual story, thereby identifying our main message and the influence of our work.”[2]
Coexisting with uncertainty is a fundamental feature of our lived experience. Intellectual breakthroughs require sustained engagement with the edges of our comprehension. Deep relationships require tolerance for ambiguity. Creativity emerges from the struggle with branching possibilities. But these capacities atrophy without use, and once they weaken, we lose access to rich categories of human experience. We’ve only just begun to see what this means beyond knowledge work. Anthropic reports that a small but noticeable portion of its traffic is conversations about interpersonal advice, psychotherapy or counseling, companionship, and romantic or sexual roleplay.[3] Grok’s new avatar, Ani, is an opinion on this trend. Our lived experience is beginning to be shaped by LLMs. There is no simple answer for living well.
The solution can’t be to abandon AI tools; they provide real value. But we should bristle against the temptation to collapse everything we care about into call-and-response simplicity. Some uncertainties deserve our full attention. Some games are worth playing slowly, with genuine curiosity about what might unfold along the way. We should remember the joy of asking “What could this possibly be?” and then taking the time to find out.