BA/PM Agent Skill encodes requirements thinking into AI. Stop building the wrong thing faster with Claude, Copilot & OpenAI.

When I released my first set of Agent Skills for building modern AI agents, chatbots, and RAG solutions, the focus was deliberately on how to build — architectures, workflows, and execution. The tooling was finally ready. Claude Code had formalized skills with YAML front-matter and discoverable folders. GitHub Copilot was preparing to launch agentic skill support. OpenAI's Codex had embraced the open Agent Skills specification. We were entering an era where AI could inherit professional roles, not just respond to prompts.
But as those skills were battle-tested in real projects, one gap kept surfacing with alarming consistency.
Across enterprise deployments and startup sprints alike, the lesson was the same:
Poorly framed requirements produce bad code. Even when the AI is excellent.
That insight became impossible to ignore. We'd solved the "how" with elegant tooling, but we'd ignored the "what" and "why" that should come before any line of code is written.
The first wave of Agent Skills answered critical mechanical questions:
Those skills worked. Teams could spin up capable agents, strong RAG pipelines, and production-grade AI features in hours instead of weeks. The developer experience was transformative.
And yet… outcomes still suffered.
Not because the models were bad. Not because the architectures were wrong. Not because we'd failed to optimize inference or reduce latency.
But because the inputs were unclear. The AI was accelerating confusion at the speed of thought.
We'd given every developer a Formula 1 car without teaching them how to read the track. The result? Spectacular crashes, just faster.
Modern AI models are incredible at execution. They follow instructions precisely, generate code at speed, fill in gaps confidently, and produce syntactically valid solutions that pass tests and review.
But they share a dangerous limitation: They don't challenge bad requirements by default.
If the input is vague, contradictory, incomplete, or solution-biased, the AI will still generate something. It won't slam on the brakes and ask "are you sure this is the right problem to solve?" It will cheerfully build the wrong thing with perfect code style.
That's how you end up with:
This isn't an AI failure. It's a requirements failure. And it's costing teams more than ever because the cost of generating code has dropped to near zero. When bad requirements meant a week of wasted engineering time, the damage was at least visible. When they mean five minutes of AI-generated architectural churn, the financial cost is lower, but the cognitive and opportunity costs are far higher.
This problem matters more today than it ever did. Why? Because Agent Skills have become first-class citizens across the entire ecosystem.
Consider the landscape as of December 2025:
This is a structural shift. AI is no longer just reacting to prompts. It can now inherit professional roles and act with agency within your codebase. The skill definition becomes a living document of expertise that travels with your repository.
And once that became possible, one question became obvious:
If we can encode how to build… why aren't we encoding how to think before building?
If we can give AI the ability to write tests, generate components, and orchestrate agents, why can't we give it the ability to ask better questions? Why can't we embed the discipline of requirements analysis directly into the development loop?
Most software failures don't start in code. They start in human communication:
This is the domain of the Business Analyst — the unsung hero of software delivery.
At a minimum, a good BA does four things exceptionally well:
First, they ask the right questions. Not lots of questions. The right ones: contextual, situational, and intentional. Questions that reveal unstated assumptions and surface the real problem behind the requested feature.
Second, they translate symptoms into problems. People describe symptoms, not root causes. They propose solutions they've seen elsewhere without articulating their actual need. A BA reframes:
Third, they produce structured artefacts. Not fluffy documents that get ignored, but crisp, focused ones:
Artefacts engineers can actually build from without constant clarification.
Fourth, they close the loop. A BA reflects understanding back: "Here's my interpretation of what you need — is this correct?" That feedback loop is where clarity is forged. It's not a one-time interview; it's a continuous process of synthesis and validation.
None of this behavior existed natively in AI tooling. We had code assistants, not thinking partners. We had intelligence without the crucial skepticism that makes human BAs valuable.
Once Agent Skills became real, repeatable, and versionable — embeddable directly in .claude/skills/ or .codex/skills/ and shared via git — the absence of a BA skill became not just obvious but untenable.
We were encoding patterns for authentication, database access, and UI components, but we were ignoring the pattern for deciding what to build. It was like having a construction crew with perfect tools but no blueprint discipline.
So I built v1.0 of a Business Analyst / PRD Agent Skill (source).
Not as a chatbot. Not as a prompt you paste into a conversation. But as a reusable, embedded capability that lives inside the codebase, is tunable, and behaves like a good consultant — not an order-taker.
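In practice, "embedded in the codebase" just means the skill ships with the repository. A minimal sketch of what that might look like, with illustrative folder and file names (only the .claude/skills/ convention and SKILL.md come from the spec; the rest is hypothetical):

```
your-project/
├── .claude/
│   └── skills/
│       └── ba-prd/
│           └── SKILL.md   # the BA/PRD skill definition, versioned with the code
├── docs/
│   └── prd/               # hypothetical home for the PRDs the skill helps produce
└── src/
```

Because the skill is just a file under version control, it can be reviewed, diffed, and tuned like any other part of the codebase.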
This skill doesn't replace human BAs or PMs. It captures their best behavior and makes it repeatable, available at the moment of creation, and versioned alongside the code it influences.
This skill is designed for how real projects actually start: messy, partial, human. It works whether you're starting a project from scratch, extending an existing system, adding a feature, or framing a bug.
The agent follows a disciplined, professional workflow:
This isn't verbosity for its own sake. It's precision under constraint — the hallmark of a senior BA who knows their document will be read by both executives and machines.
PRDs today aren't just for humans. They're read by:
That means the format matters as much as the content:
The BA Agent Skill enforces:
This turns requirements into machine-readable intent. When a PRD is structured this way, an AI agent can:
For the first time, developers can:
This doesn't replace the need for human product managers or business analysts. In fact, it elevates their role. Instead of spending hours drafting documents that get misinterpreted, they can pair with the BA Agent Skill to:
The skill becomes a force multiplier for good product thinking, not a replacement for it.
For years, we've optimized everything except the most fragile layer in software: how humans explain what they want.
We've perfected:
But we've ignored the chaotic, human process of requirements discovery. We've accepted that "garbage in, garbage out" is inevitable, focusing instead on making the garbage-processing faster.
Agent Skills — especially BA-grade ones — finally address that root cause. They encode professional discipline into the AI itself. They make skepticism, context-gathering, and validation a default behavior rather than an exceptional one.
The open standard that OpenAI, Anthropic, and GitHub are all embracing means these skills are portable. The BA/PRD skill I wrote for Claude Code works in Copilot and Codex because they all respect the same patterns: SKILL.md with YAML frontmatter, progressive disclosure of context, and model-invoked activation.
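For readers who haven't opened one of these files, here is a minimal sketch of what a SKILL.md for this kind of skill might look like. The frontmatter fields `name` and `description` follow the Agent Skills convention; the skill name and the wording of the instructions are illustrative, not the actual contents of the published skill:

```markdown
---
name: ba-prd
description: >
  Acts as a business analyst. Use when the user describes a feature,
  project, or bug in vague terms and requirements need to be clarified
  and captured in a PRD before any code is written.
---

# Business Analyst / PRD Skill

Before proposing solutions, ask targeted clarifying questions about the
desired outcome, affected users, constraints, and success metrics.
Reframe solution requests into problem statements. Then draft a PRD with
explicit, testable acceptance criteria and reflect it back to the user
for confirmation before implementation starts.
```

Roughly, the `description` is what the model sees up front; the full body is loaded only when the skill is invoked, which is the progressive disclosure the standard relies on.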
This isn't just a convenience. It's a paradigm shift. We're moving from AI as a tool to AI as a team member with professional training — and that training is expressed in versioned markdown files that any team can inspect, modify, and improve.
Once you use a BA Agent Skill, it's very hard to go back to raw prompting. The difference is like building with blueprints vs. building from hallway conversations. Both can work, but only one scales reliably.
Before BA Agent Skills: A product manager says "We need a recommendation engine like Amazon's." The engineer nods, spends three days building a collaborative filtering model, and delivers something that technically works but recommends discontinued products because the PM didn't specify inventory status filtering. The business impact is zero, and trust erodes.
After BA Agent Skills: The same PM says "We need a recommendation engine like Amazon's." The BA Agent Skill immediately reframes: "Before we discuss algorithms, what outcome are you trying to drive? Higher cross-sell? Reduced cart abandonment? And should we only recommend in-stock items?" The PRD that emerges includes explicit acceptance criteria: "Given a user with 3+ items in cart, when they view recommendations, then only in-stock products appear, and click-through rate increases by >15%." The engineer builds the right thing the first time.
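To make that concrete, here is a sketch of how that criterion might land in the generated PRD. The section headings and identifiers are illustrative, not prescribed by the skill:

```markdown
## Goal
Increase cross-sell revenue by recommending relevant, purchasable
products at the point of decision.

## Acceptance criteria
- AC-1: Given a user with 3+ items in their cart, when they view
  recommendations, then only in-stock products appear.
- AC-2: Click-through rate on recommendations increases by >15%.

## Out of scope
- The recommendation algorithm itself (collaborative filtering vs.
  content-based) is an implementation decision, not a requirement.
```

An engineer, or an AI coding agent, can turn AC-1 directly into an automated test; nothing about inventory filtering is left implicit.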
The difference isn't AI capability — it's requirements discipline baked into the process from the first interaction.
The BA/PRD Agent Skill is v1.0, and already it's changing how teams approach AI-assisted development. But this is just the beginning of a larger movement.
Imagine:
Every professional discipline that acts as a "check" on raw development can be encoded as a skill. We're moving from monolithic AI assistance to role-based AI collaboration.
If you want to explore this further, you can:
The future of AI in software isn't just about better models. It's about better professional discipline, encoded and automated. And that future is already here, living in markdown files in your repository.
Ready to stop building the wrong things faster? Install the BA/PRD Agent Skill and make requirements thinking your default.