Explore the shift from ‘How do I’ to ‘Can you please’ as AI agents move from giving answers to taking action
For most of the internet age, our digital interactions have started with the same three words:
“How do I…”
How do I fix a formula in Excel?
How do I write a SQL query to get today’s sales?
How do I set up my CI/CD pipeline in AWS?
We turned to Google to search for tutorials, blog posts, and forums. The expectation was clear: the system would tell us how, and we would do the rest.
Then came ChatGPT, Claude, and other large language models. Suddenly, “how do I” became “tell me…”
Tell me the formula for VLOOKUP with two conditions.
Tell me how to fix this Python error.
Tell me what serverless means in AWS.
This was a huge leap. Instead of sifting through ten forum posts, we got instant, tailored answers. The AI didn’t just point to the recipe; it wrote it out. It was clever, fast, and addictive.
But we are now entering another leap. With the rise of agentic AI and protocols like MCP (Model Context Protocol), our language is shifting again. The new phrase is not “how do I” or “tell me,” but:
“Can you please…”
This shift may sound subtle. But it’s as profound as moving from the card catalogue to the search engine, or from the flip phone to the smartphone. It represents a fundamental change in what we expect from our digital tools.
Let’s make this concrete.
Imagine you’re working on a spreadsheet and need a complicated formula.
With Google, you’d type: “How do I write a nested IF formula in Excel to calculate commission rates?” You’d click through forums, copy examples, and experiment.
With ChatGPT or Copilot, you’d type: “Tell me the formula for commission rates with these conditions.” The AI would give you the formula, maybe with an explanation. You’d copy and paste it in.
With an agentic AI, you’d say: “Can you please add a new column for commission rates, apply the correct formula for each row, and test that it works with sample data?”
The agent would not just generate the formula. It would:
Insert the column.
Write the formula in the correct cells.
Run the calculation.
Check edge cases.
Hand you back a working sheet.
That’s the difference between knowledge and execution.
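To make that concrete in code, here is a minimal sketch of the steps such an agent might run, written in Python with openpyxl. The file name, column layout, and commission tiers are illustrative assumptions, not a real agent’s implementation:

    # A sketch of the agent's steps: add a column, write the formula per row,
    # and save the workbook so the user gets back a working sheet.
    # Assumptions (illustrative): sales amounts live in column B of sales.xlsx,
    # and commission is 5% under 10,000, 8% under 50,000, 12% otherwise.
    from openpyxl import load_workbook

    wb = load_workbook("sales.xlsx")
    ws = wb.active

    ws.cell(row=1, column=3, value="Commission")  # insert the new column header
    for row in range(2, ws.max_row + 1):
        # The nested IF formula the user would otherwise write by hand.
        formula = (f"=IF(B{row}<10000,B{row}*0.05,"
                   f"IF(B{row}<50000,B{row}*0.08,B{row}*0.12))")
        ws.cell(row=row, column=3, value=formula)

    wb.save("sales.xlsx")  # Excel evaluates the formulas when the file is opened

A real agent would then reopen the sheet and spot-check a few rows against sample data, the “test that it works” part of the request.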
Most of us don’t work in single-purpose apps. We use complicated systems — CRMs, ERPs, BI dashboards, cloud consoles, clinical software — where even simple tasks take ten steps across different screens.
Submitting a purchase order in SAP.
Reconciling two data sources in Salesforce.
Checking user permissions in AWS IAM.
These aren’t “one and done” questions. They’re workflows.
Until now, AI couldn’t handle that. It could explain the steps, but you had to execute them manually. That kept us in the “how do I” or “tell me” world.
But with MCP-enabled systems and multi-step agent capabilities, we’re moving into a future where AI can:
Understand the intention behind your request.
Navigate multiple steps in different interfaces.
Handle conditional paths (“if error, retry this way”).
Deliver an actual completed task, not just instructions.
In other words, the AI is no longer your tutor or your search assistant. It’s your digital colleague.
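To give a flavor of what “handle conditional paths” means, here is a hedged sketch of the retry-then-fallback logic an agent might apply internally; run_primary and run_fallback are hypothetical stand-ins for real tool calls:

    # Hypothetical sketch: try a step, retry with backoff on a known failure,
    # then switch to an alternate path ("if error, retry this way").
    import time

    def run_with_fallback(run_primary, run_fallback, retries=3):
        for attempt in range(retries):
            try:
                return run_primary()
            except RuntimeError as err:        # a known, retryable failure
                print(f"attempt {attempt + 1} failed: {err}")
                time.sleep(2 ** attempt)       # exponential backoff
        return run_fallback()                  # the conditional path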
Why does MCP matter so much here?
Think of MCP (Model Context Protocol) as the “USB standard” for AI. It lets models plug directly into systems and act, not just chat.
In the past, if I wanted to:
Compare code between two GitHub repos
Run a SQL script against a Neon Postgres database
Or fetch logs from Vercel
…I had to copy/paste, run terminal commands, and interpret results. The LLM could generate the commands, but execution was manual.
With MCP servers, the agent can connect directly:
GitHub MCP lets it compare branches or pull requests.
Neon MCP lets it query the live database.
Upstash MCP lets it interact with vector or Redis stores.
This is what takes us from “tell me the query” to “can you please run the query, check for anomalies, and give me a summary.”
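To ground that, here is a minimal sketch of what exposing an action over MCP can look like, using the FastMCP helper from the official MCP Python SDK. The server name, placeholder connection string, and query tool are illustrative assumptions, not any vendor’s actual server:

    # Sketch of an MCP server that exposes an action, not just an answer.
    # The agent can call run_query directly instead of handing you SQL to paste.
    from mcp.server.fastmcp import FastMCP
    import psycopg2  # assumed Postgres driver

    mcp = FastMCP("sales-db")

    @mcp.tool()
    def run_query(sql: str) -> list:
        """Run a SQL query against the sales database and return the rows."""
        conn = psycopg2.connect("postgresql://user:pass@host/db")  # placeholder DSN
        try:
            with conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        mcp.run()  # serve over stdio so an MCP client (the agent) can connect

Once a server like this is running, the agent can discover run_query and call it directly; no copy/paste step remains.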
So how will this play out in everyday workflows?
“How do I bulk update user permissions in Salesforce?”
→ You get a list of steps, maybe a tutorial. You do them yourself.
“Tell me the Apex code to bulk update these users.”
→ The AI gives you the snippet. You paste it into Salesforce.
“Can you please bulk update user permissions for these 150 users according to this CSV, and confirm success?”
→ The agent does the updates, validates, and reports back.
This is not science fiction. In some systems, it’s already happening.
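As a rough illustration of what sits behind that last request, here is what a bulk permission-set assignment can look like with the simple-salesforce Python library; the CSV column name, permission set Id, and credentials are placeholder assumptions:

    # Sketch: assign a permission set to every user listed in a CSV,
    # then report how many records succeeded ("confirm success").
    import csv
    from simple_salesforce import Salesforce

    sf = Salesforce(username="...", password="...", security_token="...")
    PERM_SET_ID = "0PSxxxxxxxxxxxx"  # hypothetical PermissionSet Id

    with open("users.csv") as f:
        records = [{"AssigneeId": row["user_id"], "PermissionSetId": PERM_SET_ID}
                   for row in csv.DictReader(f)]

    results = sf.bulk.PermissionSetAssignment.insert(records)
    ok = sum(1 for r in results if r["success"])
    print(f"{ok}/{len(records)} assignments succeeded")

The difference with an agentic setup is that you never write or run this script yourself; you state the outcome, and the agent performs the equivalent calls.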
The user experience changes dramatically:
Less friction: No more context switching between apps, docs, and prompts.
More autonomy: The AI handles low-level steps. You focus on the goal.
Natural language control: Instead of learning the quirks of each system, you speak your intent once.
Confidence: With testing and validation built in, you’re not left wondering if it worked.
In short: you stop operating software and start delegating outcomes.
For developers, this shift means:
Building APIs and MCP endpoints that expose not just data but actions.
Designing systems with agent discoverability in mind (clear metadata, predictable responses).
Emphasizing auditability and security, so we know what the agent did and that it was allowed (see the sketch after this list).
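A hedged sketch of those three points together, again assuming the MCP Python SDK’s FastMCP helper: a typed, documented tool whose docstring and parameters are the metadata an agent discovers, with every action written to an audit log. The purchase-order backend is a hypothetical stub:

    # Sketch: an action-exposing tool built for discoverability and audit.
    import logging
    from mcp.server.fastmcp import FastMCP

    logging.basicConfig(filename="agent_audit.log", level=logging.INFO)
    mcp = FastMCP("orders")

    def create_order(vendor_id: str, amount: float) -> str:
        """Hypothetical backend call, stubbed for this sketch."""
        return f"PO-{vendor_id}-{int(amount)}"

    @mcp.tool()
    def submit_purchase_order(vendor_id: str, amount: float) -> str:
        """Submit a purchase order in USD. Fails predictably on bad input."""
        if amount <= 0:
            raise ValueError("amount must be positive")  # predictable response
        order_id = create_order(vendor_id, amount)
        logging.info("agent submitted %s for vendor %s", order_id, vendor_id)
        return order_id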
For enterprises, it means rethinking software delivery:
Applications will be judged not by their UI, but by their agentic accessibility.
Success will depend on governance: making sure AI can do tasks safely and compliantly.
Productivity metrics will shift from time-on-tool to time-to-outcome.
Of course, this shift comes with challenges:
Trust: Will you let an agent make a change in your live ERP?
Security: How do we ensure entitlements and identity are respected?
Transparency: How do we see why the agent took a certain path?
Error recovery: What happens when things go wrong?
These are not reasons to avoid agentic AI — they’re reasons to build it responsibly.
When I first used ChatGPT to help with coding, I thought: “Wow, this is faster than Stack Overflow.”
When I first used Claude Sonnet 4 with MCP in VS Code Insiders, I thought: “Wow, this is faster than me.”
That’s the difference. We’ve moved from AI as a clever assistant to AI as a reliable teammate. And our language reflects that:
From “how do I” (learning)
To “tell me” (knowing)
To “can you please” (doing)
The most profound technology shifts often start with small changes in language.
We used to “dial” a phone. Now we “call.”
We used to “log on.” Now we just “open.”
We used to ask “how do I.” Now we ask “can you please.”
Agentic AI represents more than smarter software. It’s a new social contract with our tools. They’re no longer passive databases or suggestion engines. They are active participants in our work.
The real question is: what will you ask them to do?