Power Automate Versioning and Source Control - Export, Tag, Track
Power Automate has no version control for individual flows. Export flow JSON to git, tag versions, diff changes, and track every modification.
You changed a Power Automate flow last Tuesday. Something broke in production on Thursday. You want to compare what the flow looked like before and after your change. You open the flow designer and see… the current version. That’s it.
There is no “show me last Tuesday’s version.” There is no diff view. There is no blame history. Power Automate versioning and source control require you to get flow definitions out of the platform and into git. Solution versioning tracks the whole solution, not individual flows. If your solution has 30 flows and you changed one, the version history tells you the solution changed. Not which flow. Not what changed inside it.
This is the gap that kills teams at scale. You can’t govern what you can’t track.
How Do You Version Control Power Automate Flows?
You export flow definition JSON from Dataverse into a git repository, either through the native Dataverse Git integration or the pac CLI. From there, every change becomes a commit, every release gets a tag, and you can diff any two versions to see exactly what changed inside a flow.
The State of Flow Versioning in 2026
Power Automate gives you a few things out of the box:
- Run history. You can see past executions and their inputs/outputs. Useful for debugging, not for understanding what changed in the flow definition.
- Solution versioning. Solutions have version numbers (1.0.0.0, 1.1.0.0). But the version applies to the entire solution, not individual components.
- 28-day auto-save. Cloud flows keep versions for 28 days in the designer. You can restore a previous version. But you can’t compare two versions side by side. And after 28 days, they’re gone.
None of this is source control. Source control means: every change is recorded, every version is retrievable, and you can compare any two versions line by line.
To get real source control for flows, you need to get the flow definitions out of Dataverse and into git.
Two Paths to Git
There are two approaches: Microsoft's built-in Dataverse Git integration (GA since April 2025), and the manual pac CLI workflow that teams have been using for years.
| | Dataverse Git Integration | Manual pac CLI |
|---|---|---|
| Setup effort | Medium - requires Managed Environments, Azure DevOps | Low - just install pac CLI |
| Git provider | Azure DevOps only (no GitHub) | Any git host |
| Sync mechanism | Commit/push from maker portal or API | Manual export and unpack |
| Granularity | Individual component files | Individual component files (after unpack) |
| Automation | Can be triggered via API | Script it yourself |
| Requires Managed Environments | Yes | No |
| Best for | Orgs already on Azure DevOps with Managed Environments | Everyone else, or teams wanting full control |
Both approaches end up in the same place: flow definitions as JSON files in a git repo, one file per flow, diffable and taggable.
Path 1: Dataverse Git Integration
Microsoft shipped Dataverse Git integration as GA in April 2025. It syncs solution components from a development environment to an Azure DevOps Git repo.
Here’s how it works:
1. Enable Managed Environments. Dataverse Git integration requires Managed Environments. Go to the Power Platform Admin Center, select your dev environment, and enable managed mode. This is a prerequisite you cannot skip.
2. Create an Azure DevOps repo. Create a new repo in Azure DevOps (or use an existing one). This is where your solution components will land. GitHub is not supported natively as of March 2026.
3. Connect the environment to the repo. In the Power Platform Admin Center, go to your environment settings and configure the Git integration. Choose the Azure DevOps org, project, repo, and branch. Pick a folder path for the solution files.
4. Choose a binding strategy. You have two options. Environment binding ties the connection to the environment itself. Solution binding ties it to a specific solution. Environment binding is simpler for single-solution dev environments. Solution binding gives you more control when one environment has multiple solutions.
5. Commit from the environment. From the maker portal (or via API), commit changes to the connected git repo. Each component gets its own file. Flows land as individual JSON definitions.
6. Pull changes back. You can pull from git into the environment to sync changes made by other developers. This is how multi-developer scenarios work - each dev has their own environment, all syncing to the same repo.
The big advantage: makers can commit directly from the Power Platform maker portal. No CLI. No scripts. The big limitation: Azure DevOps only. If your team uses GitHub, you’ll need the pac CLI approach or a mirror setup.
Path 2: Manual pac CLI Workflow
The Power Platform CLI (pac) has been around longer and works with any git host. The workflow is more manual but gives you complete control.
The key commands:
```shell
# Export a solution as a ZIP file
pac solution export --name YourSolution --path ./exports/YourSolution.zip

# Unpack the ZIP into individual component files
pac solution unpack --zipfile ./exports/YourSolution.zip --folder ./src/YourSolution

# After making changes, re-pack for import
pac solution pack --folder ./src/YourSolution --zipfile ./exports/YourSolution.zip

# Generate deployment settings (connection references, env variables)
pac solution create-settings --solution-zip ./exports/YourSolution.zip --settings-file ./settings/dev-settings.json
```
When you unpack a solution, flows land in the Workflows/ folder as individual JSON files. Each flow has its own file named with its GUID and display name. The JSON contains the full flow definition - every action, every condition, every expression.
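Because the GUID is part of the file name, locating a specific flow's file by hand gets tedious. A small helper can do the lookup - a sketch, assuming the "display name plus GUID" naming described above (`find_flow` is a hypothetical helper, not a pac command):

```shell
# Sketch: locate a flow's JSON file in an unpacked solution by display name.
# find_flow is a hypothetical helper; the file naming assumption is the
# "<DisplayName>-<guid>.json" pattern described above.
find_flow() {
  local src_folder="$1" display_name="$2"
  find "$src_folder/Workflows" -type f -name "${display_name}-*.json"
}

# Usage:
#   find_flow ./src/YourSolution "VendorApproval"
```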
Your repo structure ends up looking like this:
```
your-solution-repo/
  src/
    YourSolution/
      Workflows/
        FlowA-{guid}.json
        FlowB-{guid}.json
        FlowC-{guid}.json
      Entities/
        ...
      PluginAssemblies/
        ...
      Other/
        Solution.xml
        Customizations.xml
  exports/
    YourSolution.zip
  settings/
    dev-settings.json
    test-settings.json
    prod-settings.json
```
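One detail worth settling early: the exported ZIP is a binary artifact, so most teams track only the unpacked sources and regenerate the ZIP on demand. A possible .gitignore for this layout (an assumption, adjust to taste):

```
# Keep unpacked sources in git; the ZIP is regenerated by pac solution pack
exports/*.zip
```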
The Git Workflow
Once your flow definitions are in git - whether through Dataverse Git integration or pac CLI - you treat them like any other code artifact.
Branching. One branch per feature or change. If you’re modifying the approval flow and the notification flow, that’s one branch with changes to two JSON files.
Commits. Meaningful commit messages that describe what changed in business terms. Not “updated flow” but “Added retry logic to vendor approval flow - 3 attempts with exponential backoff.”
Tags. Tag solution versions when you deploy. v1.0.0.0 matches solution version 1.0.0.0 in Dataverse. When something breaks in production, you can check out the exact version that’s running.
```shell
# Tag a release
git tag -a v1.2.0.0 -m "Release 1.2.0.0 - added vendor approval retry logic"

# List tags
git tag -l "v1.*"

# Check out a specific version
git checkout v1.1.0.0
```
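Tagging is easy to automate, because an unpacked solution carries its own version in Solution.xml. A sketch that reads it so the git tag always matches the Dataverse solution version (`get_solution_version` is a hypothetical helper, and the file path assumes the unpack layout shown earlier):

```shell
# Sketch: read the solution version from an unpacked Solution.xml so
# release tags always match the Dataverse solution version.
# get_solution_version is a hypothetical helper, not a pac command.
get_solution_version() {
  # Solution.xml stores the version as <Version>1.2.0.0</Version>
  sed -n 's/.*<Version>\(.*\)<\/Version>.*/\1/p' "$1" | head -n 1
}

# Usage (after pac solution unpack):
#   version=$(get_solution_version src/YourSolution/Other/Solution.xml)
#   git tag -a "v${version}" -m "Release ${version}"
```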
Pull requests. Every flow change goes through a PR. The reviewer sees the JSON diff. They can see that you added a new action, changed a condition, or modified an expression. It’s not as readable as code, but it’s infinitely better than “trust me, I changed the flow.”
Diffing Flow Versions
Here’s where it gets interesting. Flow definitions are JSON. JSON diffs in git.
When you compare two commits, you see exactly what changed:
```json
// Before
"actions": {
  "Send_approval": {
    "type": "OpenApiConnectionWebhook",
    "inputs": {
      "parameters": {
        "approvalType": "First to respond"
      }
    }
  }
}
```

```json
// After
"actions": {
  "Send_approval": {
    "type": "OpenApiConnectionWebhook",
    "inputs": {
      "parameters": {
        "approvalType": "Everyone must approve"
      }
    }
  }
}
```
You can see that someone changed the approval from “First to respond” to “Everyone must approve.” In the designer, you’d have to open both versions (if you still have them) and click into each action to find the difference.
But raw JSON diffs of complex flows are hard to read. A flow with 40 actions produces a JSON file that’s thousands of lines long. Finding the meaningful change in a 200-line diff is tedious.
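One way to tame this is to narrow the review to the flow files that actually changed between two releases, then diff each one separately. A sketch, assuming the repo layout shown earlier (`changed_flows` is a hypothetical helper):

```shell
# Sketch: list which flow definition files changed between two git refs,
# so each flow can be reviewed on its own. changed_flows is a hypothetical
# helper; the pathspec assumes the src/<Solution>/Workflows layout above.
changed_flows() {
  git diff --name-only "$1" "$2" -- 'src/*/Workflows/*.json'
}

# Usage:
#   changed_flows v1.1.0.0 v1.2.0.0
#   git diff v1.1.0.0 v1.2.0.0 -- "src/YourSolution/Workflows/FlowA-*.json"
```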
This is where AI helps. Copy the diff output and ask an AI to summarize it:
```
Here is a git diff of a Power Automate flow definition (JSON).
Summarize the changes in plain English:
- What actions were added, removed, or modified?
- What conditions changed?
- What expressions were updated?
- Are there any potential issues with the changes?
```
The AI reads the JSON diff and gives you something like: “The approval action was changed from single-approver to all-must-approve. A new parallel branch was added for a notification to the requester. The error handling scope was extended to cover the new branch.” That’s a code review you can actually work with.
Rewriting Flows with AI
This is experimental but powerful. Export a flow’s JSON definition, give it to an AI, and ask for specific improvements.
Things that work well:
- Add error handling. “Wrap each HTTP action in a try-catch scope with retry policy and failure notification.”
- Rename variables. “Change all variables from Hungarian notation (strApproverEmail) to descriptive names (approver_email_address).”
- Add comments. Flow JSON supports a description field on actions and scopes. Ask the AI to add descriptions explaining the business logic.
- Optimize expressions. "Simplify this nested if/coalesce chain into a cleaner expression."
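As a sketch of the description point - the action name and values here are hypothetical, and the definition is heavily abbreviated:

```json
"Send_approval": {
  "type": "OpenApiConnectionWebhook",
  "description": "Routes vendor invoices over the approval threshold to the finance approvers group.",
  "runAfter": {}
}
```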
The workflow for AI-assisted flow rewriting:
1. Export and unpack the solution (pac CLI or git sync)
2. Copy the specific flow's JSON file
3. Give it to the AI with specific instructions for what to improve
4. Review the output carefully. Check that connection references, trigger configuration, and environment-specific values weren't changed
5. Replace the flow JSON in your unpacked solution
6. Pack the solution and import to dev
7. Test every path. Especially error paths.
8. Once validated, commit to git and promote through your pipeline
This is not a “let AI rewrite everything” approach. It’s targeted refactoring of specific flows where you know what you want to improve. The AI handles the tedious JSON manipulation. You handle the validation.
What This Looks Like in Practice
A government services team I worked with had 45 flows across 3 solutions. No version control. Changes went directly into the production environment. When something broke, the fix was “let me try to remember what I changed.”
We set up pac CLI exports on a weekly schedule. Every Monday, an Azure DevOps pipeline ran pac solution export and pac solution unpack for each solution, committed the results to git, and tagged the version. Total automation effort: one afternoon.
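A minimal sketch of that kind of weekly pipeline, assuming service-principal authentication and a single solution - the variable names, schedule, and solution name are placeholders to adjust for your org:

```yaml
# Sketch of a weekly export pipeline. Variable names, schedule, and
# solution name are assumptions - adjust to your org.
schedules:
  - cron: "0 6 * * 1"          # Mondays, 06:00 UTC
    displayName: Weekly solution export
    branches:
      include: [main]
    always: true

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self
    persistCredentials: true

  - script: dotnet tool install --global Microsoft.PowerApps.CLI.Tool
    displayName: Install pac CLI

  - script: |
      pac auth create --url $(DATAVERSE_URL) \
        --applicationId $(CLIENT_ID) \
        --clientSecret $(CLIENT_SECRET) \
        --tenant $(TENANT_ID)
      pac solution export --name YourSolution --path ./exports/YourSolution.zip
      pac solution unpack --zipfile ./exports/YourSolution.zip --folder ./src/YourSolution
    displayName: Export and unpack

  - script: |
      git config user.email "pipeline@yourorg.example"
      git config user.name "Weekly Export"
      git add src/
      git commit -m "Weekly export - build $(Build.BuildId)" || echo "No changes this week"
      git push origin HEAD:$(Build.SourceBranchName)
    displayName: Commit and push
```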
Within two weeks, the team caught a flow modification that removed an error handling scope. The weekly diff showed the scope was gone. Without git, nobody would have noticed until a failure hit production.
Within a month, the team started doing manual exports before and after each change. Not because someone mandated it. Because they could see the value of tracking what changed.
Getting Started
Don’t try to set up the full pipeline on day one. Start with this:
1. Install pac CLI: `dotnet tool install --global Microsoft.PowerApps.CLI.Tool`
2. Authenticate: `pac auth create --url https://your-org.crm.dynamics.com`
3. Export your most critical solution: `pac solution export --name YourSolution --path ./YourSolution.zip`
4. Unpack it: `pac solution unpack --zipfile ./YourSolution.zip --folder ./YourSolution`
5. Initialize a git repo: `git init && git add . && git commit -m "Initial export of YourSolution v1.0.0.0"`
6. Push to Azure DevOps or GitHub
You now have version control for your flows. Every export from this point forward is a new commit. Every commit is a diffable snapshot.
From there, add branching, PRs, tags, and automation as the team matures. The important thing is to start capturing the history.
Power Automate Governance - The Enterprise Playbook
This article is part of a 10-part series:
- Naming Conventions That Scale
- Environment Strategy - Dev Test Prod
- Solution-Aware Flows
- Flow Inventory
- Pipelines - Dev to Prod
- CoE Starter Kit
- AI-Powered Flow Review
- Versioning and Source Control
- The Governance Repo
- Weekly Governance Digest
AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.