Spec-First Development: Why Your Flow Specs Should Exist Before the Designer Opens
Stop building flows and hoping documentation catches up. Write machine-readable specs first, then generate flow JSON from them. 14 flows, zero designer.
We shipped 14 Power Automate notification flows without opening the designer once. Not a single trigger dragged onto the canvas. Not one action configured by hand. Every flow was generated as JSON from two specification documents that lived in git.
If that sounds impossible, you have never worked with specs precise enough to be machine-readable.
On the Meridian Performance Management project at Apex Federal Solutions, I wrote the specs before writing any flow logic. Sarah Chen, Director of Talent Operations, submitted notification requirements. Before anyone opened Power Automate, before a single trigger was configured, two specification documents were updated to reflect every decision. Then AI agents read those specs and generated flow JSON in parallel batches.
The specs were the source of truth. The code was output.
The Designer-First Trap
Most Power Automate projects follow a predictable pattern. The product owner describes what they need. The developer opens the designer. Actions get dragged onto the canvas. Details get figured out in real time. Documentation - if it happens at all - comes after the flow works.
By then, the spec is already stale. It describes what was planned, not what was built.
This is the default because it feels productive. You are “building.” But you are also making architectural decisions in real time with no record of why. Every decision lives in the designer’s undo history, which disappears when you close the tab.
Microsoft’s own coding guidelines recommend adding descriptive notes to actions “just as you would add comments to lines of code.” That is good advice. But it is backwards. Comments describe code that already exists. Specs describe code that should exist.
The difference matters when you have 14 flows to build.
What Does a Machine-Readable Spec Look Like?
A machine-readable spec is a structured markdown file stored in git that AI agents and humans can both parse without ambiguity. It uses consistent table columns, exact Dataverse schema names, and precise status codes rather than vague descriptions.
A spec is not a Word document in SharePoint. We covered why in Living Documentation in Git. Word docs cannot be diffed, reviewed in pull requests, or branched. They have “last modified by” but not “what specifically was modified.”
Concretely, that means tables with consistent column headers. Exact field names with Dataverse schema prefixes. Status values as specific codes. FetchXML query patterns written out in full.
The Meridian project maintained two spec documents:
- **Notification Requirements Spec** - Every product owner decision captured: daily digest vs real-time delivery (Sarah chose digest for all except rejection notifications), email wording distinctions (self-sign acknowledgment vs standard signing), escalation paths (weekly to author, daily to supervisor), and recipient scoping (rejection notifies author plus all previous signers).
- **Power Automate Flows Spec** - The complete flow inventory. Every flow cataloged with its tag, display name, trigger type, table queried, recipient logic, email subject line, and priority tier.
These live in the project’s git repository alongside the solution code.
The Flow Inventory Table
This is the core of the spec. Every flow on a single page:
| Tag | Display Name | Trigger | Table | Recipient | Email Subject | Priority |
|---|---|---|---|---|---|---|
| NTF-EMAIL-01 | Meridian \| [NTF-EMAIL-01] Form Assigned - Daily Digest | Recurrence (8:00 AM ET, weekdays) | mrd_personnelevaluation | Author | Action Required: Form Name Assigned | P1 |
| NTF-EMAIL-02 | Meridian \| [NTF-EMAIL-02] Ready for Signature - Daily Digest | Recurrence (8:00 AM ET, weekdays) | mrd_evaluationsigningstep | Signer (non-self) | Action Required: Sign Form Name | P1 |
| NTF-EMAIL-05 | Meridian \| [NTF-EMAIL-05] Rejection to Author | Step status changed to Rejected | mrd_evaluationsigningstep | Author | Rejected: Form Name | P1 |
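Because the columns are consistent, the inventory is trivially parseable. A minimal sketch, not the project's actual tooling, of turning a markdown table into structured rows that an agent or script can consume (the sample table is simplified to four columns):

```python
# Sketch: parse a markdown flow inventory table into a list of dicts.
# Column names below are illustrative, taken from the inventory above.
def parse_inventory(markdown: str) -> list[dict]:
    rows = [line for line in markdown.splitlines() if line.strip().startswith("|")]
    header = [cell.strip() for cell in rows[0].strip("|").split("|")]
    flows = []
    for line in rows[2:]:  # skip the header row and the |---| separator
        cells = [cell.strip() for cell in line.strip("|").split("|")]
        flows.append(dict(zip(header, cells)))
    return flows

spec = """\
| Tag | Trigger | Table | Priority |
|---|---|---|---|
| NTF-EMAIL-01 | Recurrence | mrd_personnelevaluation | P1 |
| NTF-EMAIL-05 | Dataverse | mrd_evaluationsigningstep | P1 |
"""
flows = parse_inventory(spec)
print(flows[0]["Table"])  # mrd_personnelevaluation
```

Once the table is data, everything downstream - batch assignment, stub generation, spec-coverage checks - is a loop over rows.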
In Tag-Based Flow Architecture, we introduced the NTF-EMAIL tagging system. The spec is where that tag system is formally documented. Every tag mapped to its flow, table, and trigger.
Trigger Definitions
Scheduled flows: Recurrence trigger, 8:00 AM Eastern, weekdays only, 24-hour lookback window via FetchXML last-x-hours operator.
Real-time flows: Dataverse trigger on mrd_evaluationsigningstep table, filtering on mrd_stepstatus change to Rejected (status code 691090003).
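In exported flow JSON, those two trigger types look roughly like this. A hedged sketch, not a real export: the trigger names are invented, a single flow has only one of these, and the numeric `message` and `scope` codes (intended here as Modified and Organization) should be verified against an actual export before reuse:

```json
{
  "triggers": {
    "Daily_digest_schedule": {
      "type": "Recurrence",
      "recurrence": {
        "frequency": "Week",
        "interval": 1,
        "schedule": {
          "hours": [8],
          "minutes": [0],
          "weekDays": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
        },
        "timeZone": "Eastern Standard Time"
      }
    },
    "When_a_signing_step_changes": {
      "type": "OpenApiConnectionWebhook",
      "inputs": {
        "parameters": {
          "subscriptionRequest/message": 3,
          "subscriptionRequest/entityname": "mrd_evaluationsigningstep",
          "subscriptionRequest/scope": 4,
          "subscriptionRequest/filteringattributes": "mrd_stepstatus"
        }
      }
    }
  }
}
```

The `filteringattributes` value is what keeps the real-time flow from firing on every edit - it only wakes up when mrd_stepstatus changes.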
Action Sequence Pattern
All scheduled notification flows follow the same action sequence:
```
Initialize variables (sequential chain at top level)
-> FetchXML query (List Rows with FetchXML)
-> Apply to each (sequential, concurrency = 1)
     -> Resolve recipient email
     -> Build HTML body (AppendToStringVariable)
     -> Detect recipient change (group break)
     -> Send email (SharedMailboxSendEmailV2)
     -> Reset accumulator
```
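In flow JSON, the sequential loop is the detail that matters most: a sequential Apply to each exports with a concurrency repetition count of 1, which is what keeps the group-break digest logic in order. A sketch with a hypothetical action name and the inner actions elided:

```json
{
  "Apply_to_each_result": {
    "type": "Foreach",
    "foreach": "@outputs('List_rows')?['body/value']",
    "runtimeConfiguration": {
      "concurrency": { "repetitions": 1 }
    },
    "actions": {}
  }
}
```

If that `runtimeConfiguration` block is missing, the loop may run iterations in parallel and the accumulator pattern silently breaks - which is exactly the kind of detail a spec can mandate and a reviewer can diff.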
FetchXML Queries
Each flow’s FetchXML is fully specified. Table, attributes, filter conditions, linked entities, sort order. Here is NTF-EMAIL-01:
```xml
<fetch version="1.0" output-format="xml-platform"
       mapping="logical" distinct="false">
  <entity name="mrd_personnelevaluation">
    <attribute name="mrd_personnelevaluationid" />
    <attribute name="mrd_name" />
    <attribute name="mrd_evaluationduedate" />
    <attribute name="mrd_evaluator" />
    <filter type="and">
      <condition attribute="mrd_personnelevaluationstatus"
                 operator="eq"
                 value="a0fed756-9f0b-f111-8406-0022480b7cb8" />
      <condition attribute="mrd_evaluator" operator="not-null" />
      <condition attribute="modifiedon" operator="last-x-hours" value="24" />
    </filter>
    <order attribute="mrd_evaluator" />
  </entity>
</fetch>
```
Email Templates and Environment Variables
Email subject lines use merge field placeholders: Form Name, Employee Name, Due Date. The body structure is documented so anyone reading the spec knows exactly what the email looks like before a single flow exists.
Environment variables are listed with schema names and purposes:
| Schema Name | Purpose |
|---|---|
| mrd_EnvironmentURL | Base URL for deep links in emails |
| mrd_MeridianNotificationsMailbox | Shared mailbox "send from" address |
| mrd_MeridianAppID | Model-Driven App ID for deep link construction |
Specs as AI Instructions
Here is the insight that changes everything: when a spec is precise enough, it is not documentation. It is a prompt.
The Meridian notification spec contained exact table names with schema prefixes, exact status values, complete FetchXML queries, email subject lines with merge fields, and the action sequence pattern every flow should follow. That level of precision is enough for an AI coding assistant to generate correct flow JSON without a single clarifying question.
I parallelized the work across four AI agent threads. Each agent received a batch of 3-4 flows from the same functional group:
| Agent | Flows | Batch Logic |
|---|---|---|
| Agent 1 | NTF-EMAIL-01, 10, 11 | Same table (mrd_personnelevaluation), author-facing |
| Agent 2 | NTF-EMAIL-02, 03, 04 | Same table (mrd_evaluationsigningstep), signer-facing |
| Agent 3 | NTF-EMAIL-05, 06, 07 | Event-driven triggers, distinct from scheduled |
| Agent 4 | NTF-EMAIL-08, 09, 12, 13, 14 | Escalation and broadcast patterns |
Each agent read the same spec. Each agent produced consistent JSON. The patterns - variable initialization chain, FetchXML queries, sequential Apply-to-each loops, SharedMailboxSendEmailV2 - were consistent across all agents because the spec defined them before parallel execution began.
The spec was the interface contract between me (the architect who made the design decisions) and the AI agents (who executed those decisions at scale).
Without the spec, AI-assisted development is a conversation. You explain the same patterns in every chat thread. You correct the same mistakes. You get inconsistent results. With the spec, AI-assisted development is a build pipeline. Input: spec. Output: flow JSON. Repeatable.
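The pipeline framing can be made literal. A minimal sketch, assuming a parsed inventory row and a flows/ output directory - the "-Stub" filename suffix is illustrative (the real repo uses descriptive suffixes like FormAssigned), and a real generator would fill in triggers and actions from the spec:

```python
import json
from pathlib import Path

# Sketch: turn one parsed inventory row into a flow definition stub file.
# The skeleton keys mirror exported flow JSON at a high level only.
def generate_stub(row: dict, out_dir: Path) -> Path:
    definition = {
        "$schema": "https://schema.management.azure.com/providers/"
                   "Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "triggers": {},  # filled in from the Trigger column
        "actions": {},   # filled in from the action sequence pattern
    }
    path = out_dir / f"Meridian-{row['Tag']}-Stub.json"
    path.write_text(json.dumps(definition, indent=2))
    return path

out = Path("flows")
out.mkdir(exist_ok=True)
stub = generate_stub({"Tag": "NTF-EMAIL-01"}, out)
print(stub.name)  # Meridian-NTF-EMAIL-01-Stub.json
```

The point is not this particular script - it is that the spec's structure makes a script (or an AI agent) possible at all.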
Living Documentation That Cannot Go Stale
The governance repo we described in The Power Platform Governance Repo has a docs/ folder. That is where flow specs live - versioned, diffable, reviewable.
```
meridian-performance-management/
  docs/
    notification-requirements.md     <- what to build and why
    power-automate-flows-spec.md     <- every flow cataloged
  flows/
    Meridian-NTF-EMAIL-01-FormAssigned.json
    Meridian-NTF-EMAIL-02-ReadyForSignature.json
    ...
```
When the spec changes, the commit diff shows exactly what changed. When flow JSON changes, the corresponding spec update appears in the same pull request. Reviewers see both the “what changed” (spec) and the “how it changed” (code) in a single review. We covered how to version flow JSON in Flow Versioning and Source Control. The spec-first rule extends that practice: every flow change includes a spec update in the same commit.
Microsoft’s ALM basics call source control the “single source of truth” for solutions. I take that one step further. Source control is the single source of truth for specifications too. Specs in git, not specs in SharePoint.
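The "spec update in the same commit" rule can even be enforced mechanically. A minimal sketch of a pre-merge check, assuming the NTF-EMAIL tag convention from this series - the helper name and example filenames are hypothetical:

```python
import re

# Sketch: flag any flow JSON file whose NTF-EMAIL tag has no row
# in the spec's inventory table.
def check_spec_coverage(spec_text: str, flow_files: list[str]) -> list[str]:
    spec_tags = set(re.findall(r"NTF-EMAIL-\d+", spec_text))
    missing = []
    for f in flow_files:
        m = re.search(r"NTF-EMAIL-\d+", f)
        if m and m.group() not in spec_tags:
            missing.append(f)
    return missing

spec = "| NTF-EMAIL-01 | ... |\n| NTF-EMAIL-02 | ... |"
files = ["Meridian-NTF-EMAIL-01-FormAssigned.json",
         "Meridian-NTF-EMAIL-03-Example.json"]
print(check_spec_coverage(spec, files))  # ['Meridian-NTF-EMAIL-03-Example.json']
```

Wire something like this into a PR pipeline and a flow can never land without its spec row.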
The Spec-Update-Then-Code Rule
This is the discipline that makes everything work. A strict rule: update the spec before writing code. Always.
When Sarah Chen submitted notification requirements through an ADO work item, the process was:
1. **Read the requirement.** ADO work item with 6 design questions answered by the product owner.
2. **Reconcile against existing spec.** Compare PO answers to what the current spec says. Find every conflict.
3. **Update the spec in git.** Resolve every conflict, document the decision, update the flow inventory table.
4. **Then write code.** Only after the spec reflects the full, reconciled truth.
The reconciliation step is where spec-first development pays for itself. Sarah’s answers conflicted with the original notification spec in 5 places:
| Conflict | Original Spec | PO Decision | Resolution |
|---|---|---|---|
| Digest content | List all open items | New items + summary count | Updated to PO decision |
| Past due frequency | Daily to author | Weekly to author + daily to supervisor | Added supervisor escalation flows |
| Rejection scope | Author only | Author + previous signers | Added NTF-EMAIL-06 |
| Completion notification | Not specified | Yes, with signer details | Added NTF-EMAIL-07 |
| Self-sign wording | Generic | "Ready for acknowledgment" | Split NTF-EMAIL-02/03 |
Five conflicts. Five decisions that would have surfaced mid-build - or worse, after deployment. In the spec, each one cost minutes to resolve. In code, each one would have cost hours to rework.
Spec conflicts are cheap. Code conflicts are expensive.
Designer-First vs Spec-First
| Dimension | Designer-First | Spec-First |
|---|---|---|
| Starting point | Open PA designer, drag actions | Open spec document, catalog every flow |
| Decision record | Decisions live in undo history (lost on close) | Decisions recorded in versioned markdown |
| Conflict detection | Conflicts surface during testing or production | Conflicts surface during spec reconciliation |
| AI compatibility | AI infers intent from conversation | AI reads structured spec, generates precise JSON |
| Parallel development | One person, one flow at a time | Multiple agents execute batches simultaneously |
| Onboarding | New dev reverse-engineers intent from canvas | New dev reads the spec |
| Change tracking | 'Modified by' timestamp, no diff | Git diff shows exact changes in same PR |
| Documentation debt | Written retroactively (if ever) | Documentation is the starting artifact |
Where to Start
You do not need to adopt this for every project overnight. Start with one.
Pick an active project with 3 or more flows. Create a docs/flow-spec.md file in the repo. Build the flow inventory table: tag, display name, trigger, table, recipient, subject, priority. One row per flow. Commit it.
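A minimal starting point might look like this - the column set is borrowed from the Meridian inventory, and the placeholders are yours to fill, one row per flow:

```markdown
# Flow Spec - <Project Name>

## Flow Inventory

| Tag | Display Name | Trigger | Table | Recipient | Email Subject | Priority |
|---|---|---|---|---|---|---|
| NTF-EMAIL-01 | ... | ... | ... | ... | ... | P1 |
```

Even half-filled, this table forces the questions (which table? which trigger? who receives it?) that otherwise get answered ad hoc in the designer.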
The next time a requirement comes in, update the spec first. Then build. Then verify the code matches the spec. That is the whole process.
Microsoft’s adoption guidance recommends standardizing “how your workload team writes, reviews, and documents code by using naming conventions and a style guide.” A flow spec is that style guide made concrete. Not a PDF on a wiki. A living document in git that every pull request touches.
The 14-flow build described in 14 Flows in 10 Minutes was only possible because the spec existed before the AI agents started. No spec, no parallel generation. No precision, no correct output. The spec is not overhead. It is the prerequisite.
Spec-Driven Power Platform Series
This article is part of a series on building Power Automate solutions with specs, governance, and AI:
- Tag-Based Flow Architecture - How 3-letter prefixes make 24 flows manageable
- Spec-First Development - Why specs should exist before the designer opens
- Notification Architecture - Notifications that cannot break business logic
- FetchXML in Power Automate - When OData $filter is not enough
- Building Solution ZIPs - The undocumented packaging guide
- What AI Gets Wrong - And why human correction is the point
- 14 Flows in 10 Minutes - The full story
AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.