Explore how Agentic AI, Claude Sonnet 4, MCP servers, and VS Code are reshaping developer productivity
Six months ago, I wrote an article called The Future of Code: How AI Will Generate 90% of Our Code in 12 Months — But Not Everything.
At the time, I made what felt like a bold prediction: that AI would be generating 90% of our code within a year. But I’ll be honest — I was skeptical even as I wrote it.
Back then, AI coding tools were powerful in some ways but frustrating in others. They could spin up a quick React component or write a boilerplate API, but when it came to:
* Fetching the right documentation
* Debugging complex issues
* Following strict design requirements
* Or integrating with real developer environments
… they often fell short.
AI was excellent at what I called “vibe coding” — prototyping, brainstorming, and filling in simple gaps. But for large-scale, production-ready systems, the human developer still had to do the heavy lifting: writing detailed designs, verifying code, running tests, and stitching together all the moving parts.
So why did I still say “12 months”? Because the direction of travel was clear. I could see the building blocks falling into place — better models, new protocols, smarter integrations.
What I didn’t expect was just how fast those pieces would come together. In the past six months, Claude Sonnet 4, Claude Code, MCP servers, and function calling have transformed my daily workflow in ways that even six months ago I would’ve said were years away.
One of the biggest frustrations with earlier AI models was their isolation. They were brilliant text generators, but cut off from the live world. They couldn’t fetch new documentation, search forums, or pull from GitHub issues.
And anyone who has written serious code knows: a huge part of software development is researching. You’re not just writing code — you’re reading API docs, comparing solutions, checking open GitHub issues, and validating that you’re not using deprecated functions.
Six months ago, the AI couldn’t help with that. It guessed. And guessing in production systems is dangerous.
Enter function calling. Now, LLMs can actively call external APIs, fetch content, and integrate it into their reasoning. That means they no longer hallucinate the signature of a React hook or make up the parameters of a Postgres function. They can go look it up.
For me, this plugged one of the biggest holes. Suddenly, the AI became less of a code autocomplete engine and more of a research partner — capable of pulling in exactly the context I would’ve spent 30 minutes hunting down.
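To make the mechanism concrete, here is a minimal sketch of a function-calling loop. The tool schema and dispatch pattern mirror the shape most LLM APIs expose; the `get_docs` tool, its contents, and the simulated model turn are hypothetical stand-ins, not a real API client.

```python
# Sketch of a function-calling round trip: the host declares tools as JSON
# schemas, the model emits a structured call instead of guessing, and the
# host runs the tool and feeds the result back into the model's context.

# 1. Tools the model is allowed to call, described as JSON schemas.
TOOLS = [{
    "name": "get_docs",
    "description": "Fetch the current documentation for a library symbol.",
    "input_schema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}]

# 2. Local implementations the host application actually runs.
def get_docs(symbol: str) -> str:
    # A real tool would hit a docs site or package registry; this is a stub.
    docs = {"useEffect": "useEffect(setup, dependencies?) — see React docs"}
    return docs.get(symbol, "not found")

DISPATCH = {"get_docs": get_docs}

def handle_tool_call(tool_call: dict) -> str:
    """Run the tool the model requested and return the result to feed back."""
    fn = DISPATCH[tool_call["name"]]
    return fn(**tool_call["input"])

# Simulated model turn: rather than hallucinating a hook signature,
# the model asks the host to look it up.
model_turn = {"name": "get_docs", "input": {"symbol": "useEffect"}}
print(handle_tool_call(model_turn))
```

The key design point is that the model never executes anything itself: it only emits a structured request, and the host decides whether and how to run it.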
If function calling fixed the research problem, MCP (Model Context Protocol) servers solved the integration problem.
In the past, I would constantly bounce between environments:
* Comparing branches in GitHub
* Running SQL queries in a database
* Debugging Python scripts in the terminal
* Checking logs on Vercel
AI could generate scripts for me, but it couldn’t run them. I still had to wire everything together.
With MCP servers, that’s changed. Now, systems like Neon Postgres, Vercel, and Upstash expose standardized endpoints that LLMs can connect to directly. Instead of me pasting connection strings or flipping to the CLI, the AI can fetch data, run queries, and validate results within the editor.
And with VS Code Insiders’ built-in MCP browser support, the AI can even “see” what’s happening in a live browser environment, spot errors, and suggest fixes — without me leaving my IDE.
This is a staggering productivity leap. It feels less like using an autocomplete tool and more like having a junior developer with system access working alongside me.
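For a sense of what wiring this up looks like, here is a sketch of a `.vscode/mcp.json` file that registers a Postgres MCP server with VS Code. The server package and connection string are illustrative; check your provider’s docs for the exact server name and arguments.

```json
{
  "servers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

Once a server like this is registered, the agent can run read queries against the database from inside the editor instead of me flipping to a CLI.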
Six months ago, AI coding tools were enthusiastic interns. They could crank out code quickly, but they often ignored instructions, skipped edge cases, and left documentation behind.
Now, with models like Claude Sonnet 4 and Claude Code, I can rely on them to:
* Generate detailed PRDs (Product Requirements Documents) before starting code
* Ask clarifying questions about product requirements and design decisions
* Produce implementation plans with risks and alternatives laid out
* Generate unit tests and documentation alongside the code
Combine this with the new TODO MCP tool in VS Code Insiders, and I can literally watch the LLM progress through tasks like a developer ticking off a Jira board.
This shift from vibe coding to plan-driven execution is the difference between a demo tool and a professional development partner.
For a while, developers were flocking to Cursor and Windsurf because they felt ahead of GitHub Copilot. They offered better integrations, more powerful AI features, and generous free tiers.
But in the last six months, Microsoft has hit back hard:
* VS Code Insiders now ships with full Copilot Agent Mode and supports Claude Sonnet 4, GPT-4.1, and GPT-5 mini.
* Copilot Instructions let me set guiderails and rules — so I don’t have to repeat “use strong typing” or “follow clean architecture” every prompt.
* A free plan makes it accessible to anyone, while the $10/month tier is extremely competitive.
For power users like me, the $39/month plan unlocks unlimited Claude Sonnet 4 — worth every cent.
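The Copilot Instructions mentioned above live in a plain Markdown file in the repository. A minimal sketch, assuming the `.github/copilot-instructions.md` location GitHub documents for repository custom instructions; the rules themselves are just examples of the kind of guiderails I set:

```markdown
# Copilot Instructions

- Use strict TypeScript types; never use `any`.
- Follow clean architecture: keep domain logic out of UI components.
- Generate unit tests alongside any new module.
- Prefer small, reviewable commits with descriptive messages.
```

Because the file is versioned with the code, the whole team inherits the same rules, and I never have to repeat them in a prompt.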
On most days, I now ask the AI to produce design documents, analyze bugs, or draft multiple solution approaches. Once the design feels right, I let it run with the implementation. I intervene when needed, but more often than not, I find myself saying “continue” and watching it progress through the plan.
At the same time, tools like v0.dev are building specialist LLMs that help transition from prototyping to production. The ecosystem is vibrant and competitive, which means the tools are evolving faster than ever.
This is where I’ve felt the most personal impact.
Six months ago, my workflow looked like this:
* Code in VS Code
* Flip to ChatGPT Plus in the browser for research
* Check GitHub issues in another tab
* Manage multiple subscriptions: ChatGPT Plus, GitHub Copilot, Claude Sonnet
That context switching was exhausting. It broke my flow every time.
Today, everything happens in one prompt-driven world inside the editor. With VS Code Insiders, Copilot Agent Mode, MCP servers, and Claude Sonnet 4, I don’t need to leave the IDE. The AI can:
* Fetch docs
* Debug browser output
* Run queries against my database
* Propose fixes and design alternatives
The time saved is enormous, but even more importantly, the cognitive load is lighter. I can stay in flow for longer, which means more energy for actual problem-solving.
This is exactly the shift I explored in my earlier article on AI-Powered Developer Productivity. In that piece, I argued that the future of productivity was about reducing friction and context switching, and that developers should move from being “prompt-to-code” amateurs to “design-first” professionals.
That prediction is already coming true. The IDE is becoming the single hub of productivity, where design, coding, debugging, and documentation converge. And with tools like Claude Sonnet 4 integrated directly into Copilot Agent Mode, the gap between idea and implementation has never been smaller.
* Design-first mindset: I start by asking the AI to generate a design document, risks, and alternatives. Only then do we move to code.
* Debugging as conversation: Bugs aren’t roadblocks. They’re discussions with the AI, where we analyze logs, evaluate fixes, and decide together.
* Semi-autonomous execution: I let the AI handle long stretches of code, stepping in when needed.
* Persistent guiderails: With Copilot Instructions, my preferences are remembered across sessions.
The end result is that I deliver better features, faster, with stronger documentation and fewer bugs.
When I wrote the original article, I said “12 months” partly as a hedge. I thought it would take that long for the tools, protocols, and workflows to mature.
But the combination of:
* Function calling
* MCP servers
* Claude Sonnet 4 + Claude Code
* VS Code Insider with Copilot Agent Mode
* A new culture of design-first prompting
… makes me think we may hit the 90% milestone much sooner.
The role of the developer is evolving. We’re no longer just coders. We are:
* Designers of systems and workflows
* Supervisors of AI-driven execution
* Guardians of architecture, security, and business intent
AI is not replacing developers. It is elevating them — taking away the repetitive glue work and freeing us to focus on what humans do best: design, judgment, and creativity.
Six months ago, I was skeptical. Today, I’m convinced: the future of code is arriving faster than expected.
The fragmented, subscription-heavy workflows of the past are collapsing into a unified, prompt-driven development environment. Tools like Claude Sonnet 4, Copilot Agent Mode, and MCP servers are not just incremental improvements — they are changing the nature of the work itself.
Developers who adapt to this new reality will find themselves more productive, more creative, and more valuable than ever. Those who cling to old workflows will be left behind.
The future of code is not about typing faster. It’s about designing smarter, orchestrating better, and collaborating with AI agents inside a unified environment.
And if the past six months are any indication, the next six will take us even further.