wax on, wax off
what vibe learning can't teach you
This week I had one of those rare epiphanies that serves simultaneously as an aha moment and an oh-shit moment.
I’m not totally convinced that I’m learning anything.
Maybe that’s okay?
The last time I wrote anything substantive about building the web app, I had just wired things up to a real, live database so that the app could display contractor service categories. The next logical step on this journey is to get the app to display contractors associated with those categories, because keeping track of contractors is half the point. Beyond some differences in how the app represents a category and a contractor as data, the pattern for storing and retrieving any arbitrary thing-stored-in-a-database is basically the same.
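To make that "basically the same" concrete, here's a minimal sketch of the idea. None of this is the app's actual code; the table names, row shapes, and the in-memory Map standing in for the real database are all made up for illustration:

```typescript
// Hypothetical row shapes for the two kinds of things the app stores.
type Category = { id: number; name: string };
type Contractor = { id: number; name: string; categoryId: number };

// An in-memory stand-in for the database, so this sketch is self-contained.
const fakeDb = new Map<string, unknown[]>([
  ["categories", [{ id: 1, name: "Plumbing" }]],
  ["contractors", [{ id: 1, name: "Pipes R Us", categoryId: 1 }]],
]);

// The retrieval logic doesn't care what kind of thing it's fetching.
// Swap the table name and the row type; everything else is identical.
async function fetchAll<T>(table: string): Promise<T[]> {
  // In a real app this line would be a SQL query against that table.
  return (fakeDb.get(table) ?? []) as T[];
}

const categories = await fetchAll<Category>("categories");
const contractors = await fetchAll<Contractor>("contractors");
console.log(categories[0].name, contractors[0].name); // Plumbing Pipes R Us
```

The point of the sketch is that the only things that change between categories and contractors are the names and the shape of a row; the storage-and-retrieval machinery is one generic pattern.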
As I began collaborating with Claude on this latest building block, though, I had the aforementioned epiphany:
I have no idea what I’m doing.
In spite of having done a very similar thing, when presented with the task of getting the data from the database into the web app, I didn’t know where to start. I didn’t even know how to start testing, let alone how to start implementing. Sure, I could just copy and paste what I’d already done and tweak it for the new data. Of course I could work with Claude to do it again from the ground up. But those options aren’t really the point.
I realized that not only did I not know how to accomplish this task, but I didn’t even know why what I’d already done worked to begin with.
Whoops.
just a cog in the code
Derek, you might be thinking to yourself, you spend maybe two hours a week looking at code and you get distracted by new AI tools like an otter gets distracted by sea urchins. Of course you have no idea what you’re doing.
Fair point. But my ignorance is more fundamental than a lack of practice or attention. I’ve come to realize that the motions I’ve been going through have been entirely mechanical, directed by a machine intelligence that ostensibly knows what it’s doing but with zero incentive or training to explain itself. By letting the AI drive, even as much as I course-correct, I’m basically just typing whatever words or characters the model would have predicted to be next in the sequence anyway.
Even with an app as basic as mine right now—something that just displays a group of categories from a database on a webpage—there’s still an architecture and a flow of instructions and information that people who really know how to code know at first from memory and later from intuition. Professional software engineers (and probably the people most effectively amplified by AI at the moment) can describe and explain the basic topology of a web app at almost a conceptual level.
I, on the other hand, am the AI’s dictation machine. Even as I yell at it not to do the coding for me, it still describes the code my fingers need to type without pausing to articulate that topology. The way data get from a database to my screen is mystifying, even to me, in spite of the part I played in getting it there.
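For what it's worth, that mystifying flow can be sketched at roughly the conceptual level the pros carry around. This is a generic illustration with made-up data and names, not my app's code; each layer is a plain function so the hand-offs are visible:

```typescript
type Category = { id: number; name: string };

// 1. Data layer: in a real app this would run a SQL query
//    against the database and return the matching rows.
async function queryCategories(): Promise<Category[]> {
  return [
    { id: 1, name: "Plumbing" },
    { id: 2, name: "Electrical" },
  ];
}

// 2. Rendering layer: a Next.js server component does roughly this,
//    awaiting the data on the server and turning rows into markup.
async function renderCategoriesPage(): Promise<string> {
  const categories = await queryCategories();
  const items = categories.map((c) => `<li>${c.name}</li>`).join("");
  return `<ul>${items}</ul>`;
}

// 3. The framework ships that HTML to the browser, which displays it.
const html = await renderCategoriesPage();
console.log(html); // <ul><li>Plumbing</li><li>Electrical</li></ul>
```

That's the whole topology in miniature: query, render, ship. Knowing it as a three-step story, rather than as characters I typed on command, is exactly the understanding I've been missing.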
What does it mean to participate in building something I barely understand? I suspect this is the sort of question we’ll be asking ourselves about human agency more and more as this technology progresses.
muscle memory matters
While I’m confident that my Cursor-bound tutor isn’t intentionally channeling Miyagi, I can’t help but think of Daniel-san waxing and painting and sanding without really understanding what he was doing.1 In much the same way, developing muscle memory before understanding is like intentionally riding out the Dunning-Kruger effect: building some mechanical competence before unveiling the incompetence just one layer deeper.
While this likely won’t affect my own adoption, I can see this being insidious for people who are unaware of their own limitations. I cringe imagining the number of “AI consultants” and “context engineers” in the market with no real depth of experience in any other field, and I can’t help but wonder how many other skills people will vibe-learn just to the point of incompetent hubris.
What does some future version of the world look like in which vast numbers of writers emulate the timid scribe style of AI writing, new philosophers test their ideas on sycophantic chatbots, and the next generation of coders understands only happy-path patterns? I’m perhaps naively optimistic about the positive changes AI will usher into our civilization, but that doesn’t change the questions with which we’ll have to grapple along the way.
Now that I’m more aware of my own incompetence, what do I do with this information? Do I continue to let the machine guide me mechanically toward muscle memory that I’ll later apply to more conscious problem-solving, or do I rip away the veil now and try to intercede with some kind of overarching lesson plan that the mechanics of what I’m doing ultimately serve?
All I can do is keep tinkering until I figure it out.
the way to go is through
OpenAI recently unveiled Study Mode in ChatGPT, an explicit acknowledgment that, without guardrails, AI will gladly just give us all the answers. I spent a few minutes chatting with it about how Next.js works and asking it some specific questions about the code I had written but didn’t understand. It was notionally helpful but felt hollow when contrasted against my richer experiences collaborating on code.
It was great at giving me the fundamentals but couldn’t effectively explain how I’d already applied those fundamentals in my mechanical work.
Not one to be totally deterred by running into an unsatisfying AI response, I took to Codex. Instead of taking it up on its promise of agentic coding, I prompted it: “Explain how this app and the testing strategy work to someone new to Next.js.” It gave me a great technical explanation of the bits and bobs of my basic project while still making some obvious assumptions about what I ought to know about how web apps work in general.
So however we slice the problem, we still have a long way to go before I get the coding Miyagi that helps me transcend the mechanics of what I’m doing and guides me toward solving real problems with the stream of characters emanating from my fingers.
Still, this is a great time to take Ethan Mollick’s advice and “assume this is the worst AI you’ll ever use.” And keeping that in mind, maybe we don’t have such a long way to go after all.
1. I’m also confident that some folks won’t get this old-ass reference.


