I’m currently building my first Flutter app, something I’ve never done before. Never touched Dart. Never worked with Android or iOS simulators. I’ve never built anything “for mobile” in my life.
But with Gemini’s help? I’ve got a working app running on both platforms.
The code writes itself. The architecture materializes, and the widgets render. I review it, test it, and tweak a few things. It works.
Then comes the weird feeling: Did I actually learn Flutter, or did I just supervise?
It’s a question I’ve been sitting with for months now, and I know I’m not alone. According to Stack Overflow’s 2025 Developer Survey, 84% of developers are using AI tools at least weekly.
We’re all in this strange new territory together, watching AI write more and more of our code while trying to figure out what that means for our identity as developers.
The industry has a fancy new name for what we’re becoming: AI Supervisors.
Code reviewers. Validators. Architects of systems we didn’t personally type out.
Honestly? It feels weird. Because I got into development to build things, not to steer a really smart autocomplete.
But an even bigger concern is that this shift is messing with our skills in ways that go deeper than productivity metrics.
Research is starting to show actual cognitive atrophy.
Junior developers are losing fundamental debugging abilities.
Companies are scrambling to figure out what they even want from developers.
So let’s have the honest conversation. The one where we admit that using AI feels both incredible and uncomfortable 😌
Where we acknowledge that the old identity of “developer who writes code” is colliding head-on with the reality of “developer who supervises AI-written code.”
And where we figure out together how to navigate this without losing the skills that make us actually good at what we do.
The New Role We Didn’t Apply For
Here’s what’s actually happening: developers are transitioning from code-writers to code-supervisors, and most of us didn’t see it coming.
It whacked us right over the head 🔨
The World Economic Forum found that 65% of developers expect their roles to be fundamentally redefined in 2026.
Not tweaked. Not adjusted. Redefined.
We’re moving from routine coding toward architecture, integration, and what they’re calling “AI-enabled decision-making.”
Think about your day-to-day right now. The tasks are shifting under our feet:
What’s decreasing:
- Writing boilerplate (AI crushes this 🚀)
- Basic CRUD operations (templates everywhere)
- Routine debugging (copy-paste to ChatGPT)
- Documentation reading (just ask the AI)
What’s increasing:
- Reviewing AI-generated output for correctness
- Making architectural decisions the AI can’t make
- Security validation (because AI-generated code often ships with vulnerabilities)
- Explaining technical decisions to non-technical stakeholders
- Debugging when the AI doesn’t understand the system
Further, according to PwC’s research, skills in AI-exposed roles are changing 66% faster than in less AI-exposed positions.
We’re not just learning new frameworks anymore. We’re learning an entirely different relationship to our craft.
The identity struggle is real 💁♀️
I’m supposed to be building, not steering and coordinating, but increasingly, the job is the steering and coordinating.
Some companies have even started requiring it explicitly. Meta and LinkedIn now run AI-enabled coding interviews where they literally watch how you use and validate AI output.
It’s not the developer job I imagined when I started (or even one or two years ago), and that’s okay to admit.
Explore: 10 Things People Get Wrong About My Developer Job
The Skill Atrophy Nobody’s Talking About
Anthropic (yes, the company that makes Claude) published research on how AI assistance affects skill formation.
They had developers learn a new coding library, some with AI help and some without.
The AI-assisted group finished tasks slightly faster, but when they tested actual learning?
The AI group scored 50% on the skill assessment. The manual group scored 67%.
That’s nearly two letter grades of difference. And the biggest gap? Debugging skills. The ability to understand when code is incorrect and why it fails.
MIT ran a similar study and found that heavy AI users showed a 20% decline in critical thinking.
Microsoft and Carnegie Mellon discovered that the more people relied on AI tools, the less critical thinking they engaged in, making it harder to summon those skills when needed.
But here’s the study that really hit me: METR tracked experienced open-source developers working on their own repositories. Developers predicted AI would make them 20% faster. The reality? They were 19% slower with AI.
There’s a massive perception gap between how helpful we think AI is and how it actually affects us in complex, real-world work.
What’s Actually Disappearing
I’ve noticed this in my own workflow with this Flutter project, and it’s subtle:
- I can work with Dart purely because I know concepts from other languages. But would I say “I know Dart” after this? Absolutely not.
- I read stack traces and errors, often know what’s wrong, but then direct Gemini to fix it anyway. Why trace it manually when AI can patch it?
- I’m optimizing for “does it work” over “do I understand why.” I want to get it done. I tell myself to slow down and understand things, but that mostly happens when I need to debug or redirect the plan.
- Pattern recognition through repetition? Not happening. AI handles the repetition, so I never build the muscle memory.
A moment that highlighted this was when I wanted to change a button’s background opacity. Simple UI tweak, right?
I went into the code and realized there were predefined values I had no idea existed. Not because I couldn’t learn them, but because I’d been letting Gemini handle all the Flutter-specific stuff while I stayed at the conceptual level.
One engineer confessed on his blog that after 12 years of programming, heavy AI use made him “worse at his own craft.”
He described a creeping decay—first documentation, then debugging skills, then the instinct for when something would break in production.
Note 👇
Researchers have a name for this “creeping decay”: cognitive offloading. We’re outsourcing our thinking to AI, and our brains adapt by getting worse at whatever we outsource.
The Junior Developer Problem
Here’s where it gets worse for the industry: the apprenticeship pipeline is breaking.
Tech internships have dropped 30% since 2023.
Only 7% of new hires at major tech companies are recent graduates, down from 9.3% in 2023.
Entry-level roles now require what used to be mid-level expectations 😒
Why? Because AI handles what junior developers used to do to learn. The grunt work. The boring but skill-building tasks.
Companies look at AI and think, “Why train someone for six months when the tool handles it immediately?”
The problem nobody’s solving, however, is this: where do future senior developers come from if we skip the learning phase?
When the Instinct Disappeared
Here’s where I started noticing the shift, and it’s subtle enough that you might be experiencing it too.
With this Flutter app, I catch myself reading errors, understanding what’s wrong, but immediately hand it off to Gemini to fix.
Not because I can’t fix it, but because why would I when the AI can do it in 10 seconds?
State management confusion? Ask Gemini.
Widget not rendering? Paste the error, get the fix.
I’m staying at the conceptual level, understanding the “what,” but skipping the implementation muscle-building entirely.
The opacity button moment made me realize that I’m building an app, but I’m not actually learning Flutter.
I’m learning how to supervise Gemini while it learns Flutter for me.
The 80/20 Rule That Keeps You Relevant
I’ve been working on a framework to help me navigate this, and it is: use AI for 80% of the work, but fight for the 20% that matters.
Let AI handle the boilerplate:
- tests
- documentation
- scaffolding
All the tedious-but-necessary stuff is the 80%. It’s where AI genuinely accelerates you without much downside, so it’s safe to hand over 🫷
But the 20%? That’s where your actual value lives, and you need to claim it deliberately.
What the Critical 20% Looks Like
1. System Design Thinking
AI can write the code, but you decide the architecture.
Why does this pattern exist? What are we trading off? When would this approach fail at scale?
Example: AI might suggest a simple for-loop to process data. But you need to be the one who recognizes that it won’t work when the dataset grows to millions of records.
You need to know why a different approach matters.
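To make that concrete, here’s a minimal sketch in Python (hypothetical function names, not from any real codebase) of the kind of trade-off a reviewer has to catch:

```python
# Hypothetical example: the version an AI assistant might happily suggest.
def total_naive(path):
    # readlines() loads the ENTIRE file into memory at once.
    # Fine for a test fixture; a multi-gigabyte dataset will exhaust RAM.
    rows = open(path).readlines()
    return sum(int(line) for line in rows)

# The scale-aware version an experienced reviewer should push for.
def total_streaming(path):
    # Iterating the file object yields one line at a time,
    # so memory use stays constant no matter how large the file grows.
    total = 0
    with open(path) as f:
        for line in f:
            total += int(line)
    return total
```

Both functions return identical results on small inputs, which is exactly why the difference never shows up until production data arrives. That judgment call is the 20%.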
2. Security and Validation
In some studies, nearly half of the AI-generated code tested contained security vulnerabilities: SQL injection, XSS, and authentication flaws.
AI is great at making code that works. It’s terrible at making secure code 👎
Your job is catching what AI misses. Understanding threat models and recognizing when a pattern is exploitable.
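The classic case is SQL injection. A toy sketch (illustrative only, using Python’s built-in sqlite3) of the exact pattern you should flag in a review:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String interpolation builds the query from raw input: a value like
    # "x' OR '1'='1" rewrites the WHERE clause and dumps every row.
    return db.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL, so the injection string matches nothing.
    return db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Both versions “work” for honest inputs, which is why a works-right-now optimizer keeps generating the first one. Catching it is your job.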
Real example: Southwest Airlines’ 2022 holiday collapse cost them $750 million, rooted in technical debt from deferred modernization.
When you skip validation because “AI said it’s fine,” you’re creating future disasters.
3. Debugging When AI Fails
This is the big one. When production breaks at 2 AM, you need to be able to read a stack trace cold to understand what broke and why. Then trace execution flow through unfamiliar code.
AI can help. But if you’ve lost the foundational debugging skill because you always offloaded it to ChatGPT, you’re going to struggle exactly when it matters most.
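Reading a trace cold is a learnable mechanic, not magic. A small Python sketch (hypothetical functions) of the drill:

```python
import traceback

def parse_price(raw):
    # Deliberate bug: assumes the input string is always numeric.
    return float(raw)

def total_cart(items):
    return sum(parse_price(i) for i in items)

try:
    total_cart(["9.99", "free", "4.50"])
except ValueError:
    tb = traceback.format_exc()
    # Read the trace bottom-up: the last frame (parse_price) is where the
    # error was raised; the frames above it (total_cart) show how execution
    # got there, and the offending value is in the exception message.
```

Forcing yourself through that bottom-up read before pasting the trace into a chatbot is the rep that keeps the skill alive.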
One CTO put it perfectly: “I don’t care that you use AI. I care if you can tell me when it’s wrong.”
4. Architectural Judgment
Is this the right solution? Will this scale? Does this create tech debt we’ll regret in six months?
AI optimizes for “works right now.”
You need to optimize for “works sustainably.”
That requires experience, judgment, and understanding of consequences that AI can’t see.
The Daily Practice That Matters
Anthropic’s research concluded: “Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.”
So here’s what I’m doing deliberately:
- Read documentation first: Before asking AI to summarize, I read the actual docs (or, at least, I try). It’s slower, but it also teaches pattern recognition that summaries miss.
- Debug manually before asking AI: When something breaks, I spend 15 minutes tracing it myself before copy-pasting to Gemini. Even if AI finds it faster, I learn more my way and, quite often, I can help AI resolve it faster.
- Build from scratch periodically: Side projects where I write every line. No AI. Just to remember what it feels like.
Related: 12 Shades of Practical Floating Inputs Done Right
Tip ✅
Think of these like drills for athletes. Professional basketball players still practice free throws even though they’ve made thousands. Professional developers should still practice debugging and algorithm implementation, even if AI can do it faster. The fundamentals matter when the game is on the line 💁♀️
The Hiring Paradox: What Companies Actually Want
The job market is giving us wildly conflicting signals right now, and it’s honestly a bit disorienting 🙈
The Split
Some companies are going all-in on AI:
- Meta introduced AI-enabled coding interviews in October 2025. They literally give you access to an AI assistant during the interview. According to reports, the focus is on watching how candidates use AI to approach challenges, validating that they can work with AI tools the same way they would on the job.
- LinkedIn runs similar interviews where you’re expected to use AI for code examples and test cases, but you need to come up with the solution approach yourself. (I’m curious to know how this is working for them 👀)
The key differentiator? How well you validate AI output. Whether you blindly accept suggestions or critically evaluate them.
According to industry reports, the ability to explain when and why you’d reject AI-generated code is becoming a core interview skill.
But other companies are going the opposite direction.
They’re requiring AI-free assessments specifically to verify that foundational skills still exist. Because they’ve realized: you can’t train someone who never learned to debug in the first place.
What CTOs Are Actually Hiring For Now
The interview priorities have shifted:
2019–2023 focus:
- Speed with specific frameworks
- Years with X stack
- Individual output metrics
- Basic security awareness
2025 focus:
- AI validation and code review rigor
- System design and reliability thinking
- Product sense and cross-functional communication
- Proactive security (OWASP Top 10, threat modeling)
Fortune’s coverage of interview changes confirms this.
The question isn’t “Can you write a binary tree traversal?” anymore. It’s “Can you review this AI-generated code, identify the bugs, explain the security risks, and propose a better architecture?”
More: Developer Job Market 2026: Why JDs are Broken & How to Win with AI
The Junior-to-Senior Gap
The brutal reality is that junior developer hiring is down significantly. But senior developer roles are exploding.
Companies want people who can supervise AI effectively. That requires experience and judgment.
You can’t hire that judgment; it has to be built through years of making mistakes and understanding why systems fail.
We’re creating a missing middle. Junior roles are harder to get because AI does what juniors used to do.
But we need juniors to eventually become the seniors who can supervise AI.
Nobody’s figured out how to solve this yet.
Human-AI Hybrid Mastery: The Identity Reframe
Let’s address the question directly: Are you still a developer if you didn’t write the code?
I think we’re asking the wrong question 🙅♀️
You’re Not “Just a Prompt Engineer”
The best developers in 2026 aren’t choosing between AI and manual coding. They’re becoming hybrid thinkers who know when to use which tool.
Think about surgeons with robotic surgical tools. The robot provides precision that the human hand can’t match.
But the surgeon still needs to understand anatomy, recognize complications, and make real-time decisions when things go wrong.
The robot is a tool that amplifies expertise; it doesn’t replace it.
We’re in the same position.
What Hybrid Mastery Actually Looks Like
Here’s a real pattern I’ve seen work:
- Start with AI for speed on known patterns: Scaffold the basic structure. Get the boilerplate out of the way.
- Manual deep-dive on core features: The logic that makes your app unique? Write that yourself and understand it fully. Map it out. Be able to explain it like you’re selling it to anyone and everyone, whether they want it or not.
- AI for variations and edge cases: Once you’ve nailed the core, let AI help with the 47 variations of similar functionality.
- Manual for security review and optimization: Always. No exceptions.
The Mindset Shift
- Stop asking “did I write this?” and start asking “do I understand this?”
- Instead of measuring “how many lines did I type?”, start measuring “can I defend every decision in this codebase?”
- Don’t optimize for speed alone. Optimize for sustainability and maintainability.
The Developer Who Gets It Right
There’s a case study making the rounds in the dev community about a developer who built two major open-source projects over the winter holidays using Claude Opus 4.5.
In his write-up, he described the AI as “behaving like a senior engineer whom you can just tell what to do.”
Yet, here’s the key: he had the senior-level foundation to supervise effectively.
He knew when the AI’s suggestions were wrong and could course-correct.
He understood the architectural implications of each decision.
This wasn’t a guy vibe coding amazing UIs and flashy dashboards with zero coding knowledge. Please keep this in mind.
The AI amplified his expertise—it didn’t replace it.
That’s the goal 🎯
Not “AI replaced me.” Not “I refuse to use AI.”
But “AI and I are both better together than either of us alone.”
The Honest Truth
Nobody has this figured out perfectly.
I don’t. You don’t. The CTOs hiring developers don’t.
The researchers studying this don’t.
The field is changing faster than anyone can adapt. We’re all learning in real-time, making mistakes, adjusting.
And that’s okay.
What I’m Trying
I’m working on being more intentional about this, though I’m not claiming I’ve figured it out:
- Slow down moments: When I catch myself about to paste an error into Gemini, I force myself to actually read it first. This is to understand what broke, even if I still let the AI fix it. At least, I know what it’s fixing.
- One framework, the hard way: My Flutter app is AI-assisted, but I keep working on other projects where I write every line. Different tools for different learning goals.
- Question the “just works”: When Gemini generates code that solves my problem, I’m trying to ask “but why does this work?” before moving on. Yes, it’s slower. It’s also the difference between shipping and learning.
- Debug first, fix second: Separating “understanding what’s wrong” from “letting AI fix it.” I can outsource the fix, but I shouldn’t outsource the diagnosis.
Am I perfect at this? Not even close. My Flutter app is proof—I’m learning concepts while letting AI handle implementation.
But I’m trying to be honest with myself about the difference between “I built this” and “I supervised this.”
What I Recommend You Try
- Try the 80/20 split! Use AI for the scaffolding, but claim the critical 20% for yourself.
- Be honest about what skills are atrophying because we all have them. The question is whether we’re addressing it.
- Practice the fundamentals deliberately, like an athlete doing drills. It feels silly until the game is on the line.
- Don’t let speed become the only metric. Sustainable, maintainable, secure code matters more than fast code.
The Real Question
“Am I still a developer if I didn’t write the code?”
Here’s what I think the question should be:
“Can I build sustainable systems that solve real problems, whether I type every character or supervise AI doing it?”
Because ultimately, users don’t care how the code was written. They care if it works, if it’s secure, and if it solves their problem.
But you should care that you have the skills to ensure all of that. That you can validate what AI produces. That you can debug when it fails and make architectural decisions that won’t create technical debt nightmares.
The method matters less than the outcome, but the skills to ensure good outcomes? Those matter more than ever.
It’s a Wrap
The developer community needs more honest conversations about this.
Not toxic gatekeeping (“real developers don’t use AI!”).
Not blind optimism (“AI makes everything better!”).
Just honest talk about what we’re experiencing.
What skills are you noticing deteriorating? Where does AI actually help vs where does it hurt?
How are you staying sharp while still using the tools that make you faster?
Drop your thoughts in the comments and share your struggles.
We’re all navigating this weird transition together, and I genuinely think collective honesty will help us all more than individual perfection.
Bye-bye.