kleamerkuri

Mar 21, 2026 · 17 min read

Are AI Coding Tools Making Developers Worse at Coding?

Here’s something nobody tells you about working with AI tools every day: the slipping is quiet.

It doesn’t announce itself. There’s no error in the console, no failing test, no moment when someone pulls you aside and says, “Hey, you okay in there?”

It just… accumulates. Commit by commit. Accept by accept. Nod by nod.

I’m a developer with real experience under my belt. I work in a multi-tenant codebase at my day job; I ship projects daily and consistently write about web development.

Somewhere between all of that, I built something I couldn’t explain, and I didn’t even realize it was happening until I was already deep.

That’s the thing about this particular problem. It doesn’t feel like a “problem” while it’s forming.

AI coding tools like Copilot and Cursor are useful, but there’s a real difference between using them to remove friction and using them to outsource your thinking.

This post is a confession about where I crossed that line, what the research says about developer skill atrophy, and the practical habits I’m using to stay sharp without giving up the tools.

Related: Am I Still a Developer If I Didn’t Write the Code?

The VersoID Paradox: What Vibe Coding Actually Costs You

Let me tell you about VersoID. It’s a WIP app I’ve been building, a personal project where I deliberately chose Flutter and Dart as the tech stack.

Not because I know Flutter. Not because I know Dart. I’ve never used either.

The point of VersoID was never to learn Flutter deeply. The goal was to develop my AI workflow skills, specifically agent orchestration and planning.

How do you break down a complex build into agent tasks?

How do you manage context across sessions?

How do you get AI systems to produce coherent, connected output across a real project?

Those were the questions I was trying to answer. The tech stack was almost beside the point.

And here’s where it gets interesting, because that framing created a paradox I didn’t see coming.

When you’re using AI tools, and you know the technology, you have a natural check on what comes back.

You can feel when something’s off. You catch the moment the code goes berserk.

How? By noticing the unnecessary abstraction or the slightly wrong pattern.

Occasionally, you might still nod it through, but you have a foundation to push back against.

When you don’t know the technology (and have deliberately decided that learning it isn’t the primary goal), that check disappears entirely.

I was reviewing AI-generated Dart and Flutter code with no baseline to compare it against. I was skimming and accepting because I had no framework to skim with.

And because the goal was orchestration, not comprehension, I told myself that was fine.

It’s not fine.

Who Owns Vibed Code?

What I was doing is called vibe coding. The term gets thrown around as a positive these days because it means you move fast, ship things, and don’t get bogged down in implementation details.

I get the appeal. I was focused on something real: building orchestration skills is valuable.

Explore: Google Antigravity Explained: The New Way to Build Apps With Vibe Coding (2026)

Learning how to break a complex product into agentic tasks is a legitimate and increasingly important skill.

But, I learned the hard way that vibe coding in a technology you don’t know, while deliberately deprioritizing comprehension, doesn’t just mean you’re moving fast.

It means you’re building a codebase that belongs to the AI more than it belongs to you.

When something goes wrong, or when you need to make a deliberate architectural decision, or when you want to extend a feature in a direction the AI didn’t anticipate, you have nothing to stand on.

I came back to sections of VersoID and couldn’t follow the logic. Not because the code was exotic or poorly written, but because I’d never actually understood it in the first place.

I’d been a project manager on my own app, approving output I couldn’t interrogate.

The paradox is this: I got better at orchestrating AI. In doing so, I quietly got worse at the thing that makes orchestration useful: being a developer who can evaluate what the AI actually produces.

Related: This Is The Simple Reason I Choose To Co-Code Instead Of Vibe Code

When AI Becomes a Prerequisite for Your Own Thinking

The VersoID thing was uncomfortable. But a separate moment bothered me more, because it touched something I thought was solid.

I was in a system design session for a different project. Scoping the problem, asking the right initial questions, doing the work—that part felt fine.

Then came the step where I needed to synthesize inputs and requirements to structure an approach (i.e., design the system).

My first instinct, without hesitation, was to open an AI interface and hand it off.

Not because I couldn’t do it. I’ve done system design plenty of times (in the “real world”). I know how to work through a problem.

But in that moment, the idea of proceeding without having AI parse, validate, and scaffold an initial plan first felt extremely uncomfortable. Like I was missing a step.

Like my own thinking, unmediated, wasn’t quite trustworthy yet.

That’s not a Flutter problem. Nor a “new tech stack” problem.

That’s a dependency problem.

The question that unsettled me wasn’t “do I even know how to do this?” It was something more specific and more insidious: Why does reaching for AI feel like the only logical next step?

I knew I could think through the design. I just didn’t trust myself to do it without the layer of AI validation sitting on top of it first.

Think about what that means.

It means the AI isn’t augmenting my thinking. Instead, it’s become a prerequisite for it.

And that’s a fundamentally different relationship with the tool than the one I thought I had.

Related: You’re Not Just Writing Code, You’re Architecting an Experience

The Slow Burn: Playing Catch in Your Own Codebase

I want to talk about something that’s less dramatic than either of those moments but might actually be more common: what I’d call the slow burn.

At my day job, I use Cursor and Copilot regularly for boilerplate scaffolding, generating new features, and finding where logic lives in a large multi-tenant environment.

These are legitimately good use cases. I’m not going to pretend otherwise 💁‍♀️

But here’s what’s crept in: I play catch with my own codebase now.

When AI generates something, I trace the changes. I skim.

If there’s a bug or if repeated attempts aren’t landing, I’ll genuinely dig in and review. But a lot of the time, I’m not deeply reading what’s going in. I’m checking for obvious red flags.

Over time, that means there are parts of a codebase I work in every day where I’m not entirely sure where certain logic lives or why it’s structured the way it is.

This isn’t catastrophic. The code works and the features ship. But it creates an unease that sits in the background, a low-grade sense that my map of the system is less complete than it should be.

When something does break in an unexpected way, that gap between “the code works” and “I understand the code” becomes very real very fast.

The research backs this up, and not in a reassuring way.

A GitClear analysis found that code churn (i.e., lines that get reverted or reworked within two weeks of being written) was on track to double compared to pre-AI baselines. More code, shipping faster, but sticking around less.

Another study found that developers using AI code assistants frequently accepted suggestions with outdated dependencies or subtle logic errors, partly because the suggestions looked right even when they weren’t.

That’s textbook automation bias. Once you’ve seen enough AI suggestions land correctly, your skepticism naturally relaxes, often right before it shouldn’t.

Note 👀
Automation bias isn’t unique to AI. It shows up with autocomplete, linters, and spell check. The difference is that those tools operate in narrow, well-defined lanes. AI code generation operates in the full complexity of your system, which means the cost of misplaced trust is higher.

Removing Friction vs. Outsourcing Thinking: The Line That Matters

This is the distinction I keep coming back to, and it’s the one most developers I’ve talked to haven’t explicitly named yet.

Removing friction is legitimate and valuable. AI handling boilerplate means I’m not writing the same getter and setter for the hundredth time.

AI scaffolding a new feature means I start from something rather than nothing.

And, AI helping me find where a function lives in a 200,000-line codebase means I’m not spending forty-five minutes on grep.

That’s not cognitive atrophy, it’s leverage.
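As a concrete illustration of the codebase-search case, here’s what the manual fallback looks like (the function name and directory are hypothetical, not from any real project). AI search is leverage on top of this, not a replacement for knowing it exists:

```shell
# Find every mention of a (hypothetical) function in a large repo,
# printing file names and line numbers.
grep -rn "calculateInvoiceTotal" src/

# Narrow to definition-like lines in TypeScript files only,
# which filters out most call sites.
grep -rn --include="*.ts" "function calculateInvoiceTotal" src/
```

When the assistant is down, rate-limited, or confidently wrong, this still works, and it works the same way every time.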

Outsourcing thinking is different. It’s when the synthesis, the judgment, the architectural decisions, and the debugging hypotheses (i.e., the actual cognitive work of being a developer) are handed off.

Why? Because the tool is right there, it’s faster, and it usually comes back with something coherent enough to accept.

That’s where the damage accumulates.

The hard part is that they feel almost identical in the moment.

Both feel like being productive. Both feel like using your tools well.

What’s the Difference Between Removing Friction and Outsourcing Thinking?

The difference only shows up later, when you’re staring at your own project, unable to explain it, or sitting in a design session, wondering why you feel incomplete without an AI sanity check.

Here’s a quick way to tell which one you’re doing:

  • Friction removal: You know what the output should look like. You’re using AI to get there faster.
  • Thinking outsourced: You’re not sure what the output should look like. You’re using AI to figure that out for you.

One makes you faster. The other makes you dependent. In day-to-day flow, the line between them blurs constantly.

Feature        | Friction Removal (Keep it!)                        | Thinking Outsourced (Watch out!)
The Goal       | You know exactly what the output should look like. | You’re not sure what the output should look like.
The AI’s Role  | Speeds up the typing/execution.                    | Figures out the logic/architecture for you.
Your Role      | Reviewer/Validator.                                | “Nodder”/Project Manager.
The Result     | You stay in flow.                                  | You become dependent on the tool to proceed.

How to Stay Sharp Without Ditching the Tools

I want to be honest here: I don’t have a clean system. I’m working this out as I go, so here’s what’s genuinely been helping.

Do Small Builds With No AI Assist

I still go back to small projects and coding challenges. Not for the portfolio or to ship anything. But because they’re one of the few contexts where I have to think all the way through a problem without a scaffold to lean on.

Is there a voice that says, “Why are you building something tiny when you could be shipping something real with AI?”

Yes. Absolutely, especially in 2026, when vibe coding is celebrated and “I built this in a weekend with AI” is a badge of honor.

I feel that pull. I have serious doubts sometimes about whether insisting on fundamentals is discipline or just nostalgia.

What happens, though, when something breaks in a way that AI can’t diagnose? The developer who understands the fundamentals is the one who can actually fix it.

The developer who can only prompt their way through a happy path is not.

Small builds are how I stay the first kind of developer. They’re unglamorous. They’re reps. I’m keeping them.

Related: Here Is An Easy Active Guide To Beating AI Burnout

Use Writing as a Comprehension Test

This blog is partly selfish because writing forces a kind of clarity that building doesn’t require.

You can ship something you half-understand.

You absolutely cannot explain something you half-understand without the gaps surfacing immediately.

If I can’t write about how something works, I don’t actually know how it works. That’s become a real benchmark for me—not a formal one, just a gut check.

If I’m avoiding writing about a thing, it usually means I’m avoiding admitting I don’t fully understand it.

Tip 💡
You don’t have to publish anything. Just try writing a two-paragraph explanation of the last feature you shipped (maybe, crazy idea, for documentation). If you get stuck, that’s information. The sticking point is exactly where the comprehension gap lives.

Keep Part of Your Stack Manual, On Purpose

One concrete habit: I don’t let AI touch my database setup.

When I’m working with Supabase, I build the tables myself. I define the schema, set the permissions, think through the relationships, and configure the RLS policies by hand.

No MCP connection directly to the database. If AI generates migration files, they don’t get pushed until I’ve reviewed them.

Part of this is deliberate data consciousness. I’m building the habit of not granting blanket access to production data, even on personal projects where it seems like overkill, because the habit needs to be built somewhere.

Part of it is intentional friction. This is the part of the stack where I insist on doing the cognitive work—every project, without exception.

It might seem like a small thing, but it keeps something real.

And it means that no matter how much vibe coding is happening at the application layer, I always have a solid, personally-understood foundation underneath it.
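For context, the hand-written part is small but deliberate. A minimal sketch of a Supabase table with row-level security, where the table and column names are invented for illustration:

```sql
-- Hypothetical table; owner links each row to a Supabase auth user.
create table notes (
  id uuid primary key default gen_random_uuid(),
  owner uuid not null references auth.users (id),
  body text
);

-- Deny-by-default: until policies exist, enabling RLS blocks all access.
alter table notes enable row level security;

-- Users can only read their own rows.
create policy "owners read own notes"
  on notes for select
  using (auth.uid() = owner);

-- Users can only insert rows they own.
create policy "owners insert own notes"
  on notes for insert
  with check (auth.uid() = owner);
```

Writing these by hand is exactly the kind of cognitive work the section is about: you can’t define a policy without deciding who owns what.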

Explore: How To Connect To Real Data And Secure Your Antigravity Workspace

Read the Documentation, Don’t Just Prompt About It

I don’t use AI as my first stop for technical questions. I go to the docs.

This gets framed as inefficient in a world where you can ask Claude or NotebookLM to explain an API, but there are two reasons I keep doing it.

Tip: If using NotebookLM, take the extra step of clicking through search-based suggested sources instead of importing blindly. This selection process will make you an active participant.

The first is that AI still hallucinates with impressive confidence. Outdated method signatures, deprecated patterns, invented parameters—it’s gotten better, but it hasn’t gone away.

Reading the docs isn’t just a learning habit, it’s a hygiene habit.

The second reason is harder to articulate but more important: reading documentation is an act of building a mental model.

When I read how something works in the source material, that knowledge lives differently in my head than when I receive a summarized answer to a specific question.

Prompting for answers optimizes for immediate utility. Reading docs optimizes for durable understanding.

I want both, and right now I think I’ve been over-indexing on the first.

Tip 👇
If you really don’t have time or bear a true hatred toward docs (some dev docs deserve the sentiment 😔), then I suggest you ask for citations. Have your AI actively cite the docs and cross-reference. It cuts back on the time it would take for you to review them yourself while adding a level of accountability.

The Doubt I’m Not Going to Resolve For You

I don’t think the answer is less AI. I believe the answer is a clearer relationship with it, one where you know which parts of the work are yours and you protect them.

Where the tool is doing execution layer work, not thinking-layer work.

Where you could, if you had to, explain what you built to someone who asked.

But I also hold some real doubt about where this is all going.

The landscape is shifting fast. The tools keep getting better. And the gap between “I understand this” and “I can ship this” keeps widening.

There will probably come a point where some of the things I’m insisting on doing manually don’t matter anymore.

I just don’t think we’re there yet.

The cost of assuming we’re there before we are (i.e., the cost of quietly outsourcing the thinking and only noticing when something goes wrong) is higher than the cost of a few extra hours reading documentation or building something small.

Note 📝
There’s a real difference between “I don’t need to understand this because AI handles it” and “I understand this well enough to know when AI is getting it wrong.” The second one is where you want to be. The first one is where you end up when the slipping is quiet.

FAQ

Do AI coding tools like Copilot and Cursor actually make developers worse over time?

Not automatically, but they can, depending on how you use them. The risk isn’t the tools themselves; it’s the habit of outsourcing cognitive work like problem formulation, architectural decisions, and debugging hypotheses.

Research from GitClear found that AI-assisted code churn was projected to double compared to pre-AI baselines, suggesting code is being written faster but understood less deeply.

What is “cognitive debt” in software development?

Cognitive debt is the gap between what you can ship and what you actually understand. It builds up when you consistently accept AI-generated code without deeply reading or questioning it.

Unlike technical debt, it doesn’t show up in your codebase. It shows up when something breaks, and you realize you don’t know the system well enough to fix it.

What is vibe coding, and is it a problem?

Vibe coding means building with AI tools at the wheel—you describe the goal, the AI writes the code, you review and accept at a high level.

It’s productive for shipping quickly, but it becomes a problem when you’re working in an unfamiliar tech stack with no baseline to evaluate the output against. You end up as a project manager on your own app, approving code you can’t interrogate.

How can developers stay sharp while still using AI tools?

A few habits that help:

  • Doing small builds without AI assistance to keep your problem-solving muscles active
  • Writing about what you build as a comprehension test
  • Keeping at least one part of your stack manual on purpose
  • Reading documentation directly rather than prompting AI for summaries

None of these require giving up the tools; they’re about maintaining ownership of the thinking layer.

Should developers avoid using AI for system design?

Not avoid but be deliberate. The warning sign isn’t using AI in a design session; it’s when you feel you can’t proceed without it.

If you find yourself unable to synthesize requirements or structure an approach without AI validation first, that’s a signal the dependency has shifted from tool to crutch.

Is it safe to connect AI tools directly to your database via MCP?

It depends on your context, but there are real data consciousness reasons to be cautious even on personal projects. Granting blanket database access builds habits that become risky at scale.

Setting up your schema and permissions manually, even when it feels like overkill, keeps you understanding the foundational structure of your own system and builds the right instincts for when it actually matters.

It’s a Wrap

I was building something I couldn’t explain. It snuck up on me.

If you’re using AI tools every day, as in really using them, not just dabbling, there’s a decent chance something similar is sneaking up on you, too.

The fix isn’t dramatic. It’s not “delete Copilot” or “go back to writing everything from scratch.” It’s just: notice the relationship.

Notice when you’re using AI to move faster and when you’re using it because your own thinking feels incomplete without it.

Notice which parts of your work you actually understand and which parts you’ve been nodding through.

Keep something manual. Write about what you build. Read the docs occasionally. Build something small with no AI assist and see how it feels.

Not to prove a point. Just to stay sharp.
