The Power Platform Governance Repo - Standards Reviews and Inventory in Git
One git repo for all Power Platform governance: naming standards, review checklists, flow inventory, and AI review reports. Version-controlled.
A Power Platform governance repo is the missing single source of truth for every standard, checklist, and inventory report your platform team needs. Your governance documentation lives in three places. The naming conventions are in a Word doc on SharePoint. The flow inventory is in an Excel file on someone’s OneDrive. The review checklist is in a Confluence page that hasn’t been updated since 2024. The connector approval list is in someone’s head.
When a new team member asks “what are our standards?”, nobody gives the same answer. Because there is no single answer. There are fragments scattered across tools, inboxes, and memory.
Put it in git. One git repository. All governance artifacts. Version-controlled, searchable, and updated through pull requests like code.
What Is a Power Platform Governance Repo?
A Power Platform governance repo is a version-controlled git repository that centralizes all governance artifacts for your tenant. It holds naming standards, review checklists, connector approval lists, flow inventory data, exported solution files, and review reports in one searchable, auditable location with full change history through pull requests.
Why Git for Governance
This isn’t about being trendy with git. It’s about solving three real problems.
Problem 1: Nobody knows which version is current. SharePoint has “Standards v3 FINAL revised (Alex comments).docx” and “Standards v4 draft.docx” in the same folder. Which one is active? With git, main branch is always the current version. There’s no ambiguity.
Problem 2: Changes happen silently. Someone updates the naming convention in the Word doc. Nobody knows. No notification. No review. With git, every change is a pull request. The team sees the diff, reviews it, and approves it before it merges.
Problem 3: History disappears. Why did we change the error handling standard last quarter? Who approved it? With git, git log and git blame tell you exactly when a line was added, who added it, and which PR it came from.
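Here is a minimal, self-contained sketch of what that history looks like in practice. The repo, file name, and author are illustrative, created in a throwaway directory just to show the commands:

```shell
# Throwaway repo demonstrating attributable standards changes.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir standards
echo "Retry count: 3" > standards/error-handling-patterns.md
git add . && git -c user.name="CoE Lead" -c user.email="coe@example.com" \
  commit -qm "Initial error handling standard"
echo "Retry count: 5" > standards/error-handling-patterns.md
git add . && git -c user.name="CoE Lead" -c user.email="coe@example.com" \
  commit -qm "Raise retry count after incident review"

# When did this standard change, and why?
git log --oneline -- standards/error-handling-patterns.md
# Who last touched each line?
git blame standards/error-handling-patterns.md
```

The commit message carries the "why," the author field carries the "who," and `git blame` ties both to individual lines of the standard.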
| | SharePoint/Confluence | Git Repository |
|---|---|---|
| Single source of truth | Multiple versions in multiple folders | main branch is always current |
| Change tracking | Document version history (hard to compare) | Line-by-line diff on every change |
| Review process | Email someone and hope they read it | Pull request with required reviewers |
| Search | File-level search, limited text search | Full-text search across all files (grep) |
| Automation | Manual updates only | Pipelines can auto-commit inventory data |
| Offline access | Requires SharePoint access | Local git clone, works offline |
| History | Who last modified the file | Complete history of every line, every change, every reason |
The Repository Structure
Here’s the structure that works. Not a theoretical ideal. It’s the structure I set up for D365 implementation teams, and it’s the one that actually gets maintained.
```
power-platform-governance/
  standards/
    naming-conventions.md
    error-handling-patterns.md
    review-checklist.md
    connector-approval-list.md
    environment-strategy.md
  inventory/
    flow-inventory.csv
    orphan-report.csv
    connector-audit.csv
  solutions/
    hr-evaluation/                      (unpacked solution files via pac unpack)
    finance-approvals/
  reviews/
    hr-evaluation-v1.2-review.md        (AI-generated review)
    finance-approvals-v2.0-review.md
  reports/
    monthly-governance-report.md
  templates/
    flow-description-template.md
    new-flow-request-form.md
```
Let’s walk through each folder.
standards/
This is the core of the repo. Plain Markdown files that define how your team builds on Power Platform.
naming-conventions.md - Your flow, environment, solution, connection reference, and environment variable naming patterns. See the naming conventions article for the full standard — this file is the org-specific version committed to your repo.
error-handling-patterns.md - The standard error handling template. What scope structure to use. What to log. Where to send failure notifications. With actual examples from your environment, not generic guidance.
review-checklist.md - What to check before promoting a flow. Connection references configured? Error handling present? Run-after configured on all actions? Variables named descriptively? This checklist is what reviewers use during PR reviews.
connector-approval-list.md - Which connectors are approved, restricted, or blocked. Updated when DLP policies change. Links to the DLP policy configuration in the admin center.
environment-strategy.md - What environments exist, what they’re for, who has access, and the promotion path. Dev to test to production. Which environments are managed. Which allow maker access.
inventory/
Data files that track what exists across your tenant. CSV format because it diffs well in git and opens in Excel when someone needs to filter.
flow-inventory.csv - Every flow: name, owner, environment, solution, status, connectors used, last run date. This is the master list. Updated automatically (more on that below) or manually after each sprint.
orphan-report.csv - Flows whose owners have left the org or been disabled. Updated weekly by automation or manual CoE export.
connector-audit.csv - Which connectors are in use, by which flows, in which environments. Cross-reference with the approval list in standards/.
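The files above can be sketched concretely. Here is one possible schema for `flow-inventory.csv`, with columns taken from the fields listed above; the rows and addresses are made up for illustration:

```shell
cd "$(mktemp -d)"
mkdir -p inventory

# Hypothetical sample of inventory/flow-inventory.csv.
cat > inventory/flow-inventory.csv <<'EOF'
name,owner,environment,solution,status,connectors,last_run
contoso-vendor-approval-send-notification,jane.doe@contoso.com,prod,FinanceApprovals,Started,Office365;SharePoint,2025-06-30
contoso-employee-onboarding-create-accounts,former.user@contoso.com,prod,HROnboarding,Suspended,AzureAD;Outlook,2025-05-02
EOF

# A flat one-row-per-flow CSV diffs line-by-line in git
# and still opens in Excel when someone needs to filter.
grep -c "^contoso" inventory/flow-inventory.csv   # prints 2
```

Keep one row per flow and avoid embedded newlines, so every change to the tenant shows up as a clean added, removed, or modified line in the diff.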
solutions/
Unpacked solution files from pac CLI. Each solution gets its own subfolder. This is where you track the actual flow definitions, entities, and configuration across versions. See Power Automate Versioning and Source Control for the full export workflow.
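As a rough sketch, the export-and-unpack step looks like the following. The solution name and paths are illustrative, and the script skips gracefully on a machine without the Power Platform CLI; check `pac solution export --help` for the flags your pac version supports:

```shell
cd "$(mktemp -d)"
mkdir -p solutions

# "HREvaluation" is a placeholder solution name.
if command -v pac >/dev/null 2>&1; then
  pac solution export --name HREvaluation --path ./HREvaluation.zip
  pac solution unpack --zipfile ./HREvaluation.zip --folder solutions/hr-evaluation
  status="unpacked"
else
  # No Power Platform CLI on this machine: nothing to do.
  status="pac-not-installed"
fi
echo "$status" | tee .pac-status
```

After unpacking, commit `solutions/hr-evaluation/` so each deployment leaves a diffable snapshot in the governance repo.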
reviews/
AI-generated and human-written review reports for specific solution versions. When you review a solution before promotion, the findings go here. Naming convention: {solution-name}-v{version}-review.md. These accumulate over time and become institutional knowledge.
reports/
Periodic governance reports. Monthly summaries, quarterly health checks, annual audits. Markdown with data pulled from the inventory files.
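A report generator does not need to be elaborate. This is a minimal sketch that turns an inventory CSV into a Markdown fragment; the file names follow the repo layout above, and the column order is an assumption:

```shell
cd "$(mktemp -d)"
mkdir -p inventory reports

# Tiny sample inventory (columns assumed: name,owner,environment,status).
cat > inventory/flow-inventory.csv <<'EOF'
name,owner,environment,status
flow-a,jane@contoso.com,prod,Started
flow-b,jane@contoso.com,test,Started
flow-c,sam@contoso.com,prod,Suspended
EOF

# Count flows per environment and write a Markdown report fragment.
{
  echo "# Monthly Governance Report"
  echo
  echo "## Flows per environment"
  tail -n +2 inventory/flow-inventory.csv | cut -d, -f3 | sort | uniq -c \
    | awk '{printf "- %s: %s flows\n", $2, $1}'
} > reports/monthly-governance-report.md

cat reports/monthly-governance-report.md
```

Because the report is generated from the committed CSV, the numbers in `reports/` can always be traced back to a specific inventory commit.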
templates/
Reusable templates that makers and developers use when creating new flows or requesting new solutions. A description template ensures every flow has consistent documentation. A request form standardizes the intake process.
Who Maintains What
This is the critical question. A repo that nobody updates is worse than no repo because it creates false confidence. Here’s the ownership model.
Platform team (manual commits):
- `standards/` - Updated when policies change. Every change is a PR reviewed by the CoE lead or governance owner.
- `templates/` - Updated when standards evolve.
- `reviews/` - Written or reviewed by a human after AI generates the first draft.
Automation (scheduled commits):
- `inventory/` - A scheduled pipeline exports data from Dataverse (CoE tables or admin connectors) and commits CSVs to the repo. Weekly or daily depending on org size.
- `solutions/` - A pre-deployment pipeline step runs `pac solution export` and `pac solution unpack`, committing the result before deploying. This captures the exact state of every deployment.
- `reports/` - Generated from inventory data by a pipeline or Power Automate flow.
AI (generates drafts for human review):
- `reviews/` - AI reads flow JSON from `solutions/`, generates review reports flagging issues and suggesting improvements. A human reviews and approves the PR.
- `reports/` - AI summarizes inventory data into readable monthly reports. A human verifies accuracy.
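The scheduled-commit pattern is simple enough to sketch. This stub replaces the real Dataverse export with a hardcoded file, but shows the commit-only-on-change logic a pipeline step would use; the author identity and message are placeholders:

```shell
cd "$(mktemp -d)"
git init -q .
mkdir -p inventory

# Stub for the real export (CoE Dataverse tables or admin connectors).
echo "name,owner,environment" > inventory/flow-inventory.csv
git add inventory
git -c user.name="Pipeline" -c user.email="pipeline@contoso.com" \
  commit -qm "chore: refresh flow inventory $(date +%F)"

# Subsequent runs commit only when the data actually changed,
# so git history stays meaningful instead of filling with no-op commits.
git add inventory
if git diff --cached --quiet; then
  echo "inventory unchanged, nothing to commit"
else
  git -c user.name="Pipeline" -c user.email="pipeline@contoso.com" \
    commit -qm "chore: refresh flow inventory $(date +%F)"
fi
```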
How It Connects to Other Tools
The governance repo isn’t a replacement for the CoE Starter Kit, DLP policies, or the admin center. It’s the connective tissue.
Deployment pipelines. Add a pre-deployment step that exports and unpacks the solution to the governance repo. Before deploying v1.3.0.0 to test, the pipeline commits the unpacked solution to solutions/your-solution/. Now you have a git record of exactly what was deployed and when.
Center of Excellence (CoE) Starter Kit. The CoE Kit collects inventory data in Dataverse tables. A scheduled flow or pipeline exports that data as CSV and commits it to inventory/. The CoE remains the collection engine. Git becomes the archive and the diff engine.
Dataverse Git integration. If you’re using the built-in Git integration for dev environments, the governance repo is a separate concern. Dev repos track solution source code. The governance repo tracks standards, inventory, and reviews across all solutions. They complement each other.
DLP policies. Your connector-approval-list.md in standards/ should mirror your actual DLP policies. When DLP policies change, update the Markdown. When the Markdown changes, someone should verify DLP policies match. The git history tells you when the standard changed and whether the policy was updated to match.
Setting Up the Initial Repo
**1. Create the repository**
Create a new repo in Azure DevOps or GitHub. Name it power-platform-governance or similar. Make it accessible to your platform team, solution architects, and governance reviewers.
**2. Create the folder structure**
Add the top-level folders: standards/, inventory/, solutions/, reviews/, reports/, templates/. Commit with a README.md that explains the purpose of each folder.
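The scaffolding can be done in a few commands. This sketch uses `.gitkeep` placeholder files (a common convention, since git does not track empty directories) and a README based on the folder descriptions above:

```shell
cd "$(mktemp -d)"
git init -q power-platform-governance
cd power-platform-governance

for d in standards inventory solutions reviews reports templates; do
  mkdir -p "$d"
  touch "$d/.gitkeep"   # git does not track empty directories
done

cat > README.md <<'EOF'
# Power Platform Governance

- standards/  - naming, error handling, review checklist, connector approvals
- inventory/  - flow inventory, orphan report, connector audit (CSV)
- solutions/  - unpacked solution files (pac unpack)
- reviews/    - per-version review reports
- reports/    - periodic governance reports
- templates/  - flow description and intake templates
EOF

git add . && git -c user.name="Platform Team" -c user.email="platform@contoso.com" \
  commit -qm "Scaffold governance repo structure"
```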
**3. Write your naming conventions**
Start with standards/naming-conventions.md. Document how your org names flows, environments, solutions, and connection references. Be specific - include examples. This is usually the most-referenced document.
**4. Add the review checklist**
Create standards/review-checklist.md. List every check that should happen before a flow is promoted. Error handling, connection references, variable naming, run-after configuration, documentation.
**5. Export your current flow inventory**
If you have the CoE Kit, export the flow inventory from Dataverse as CSV. If not, use the admin connectors or the admin center to compile a list. Save as inventory/flow-inventory.csv.
**6. Set up branch protection**
Protect the main branch. Require at least one reviewer for all pull requests to standards/. This ensures governance changes are reviewed, not silently edited.
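On GitHub, a CODEOWNERS file combined with a branch protection rule that requires code owner review enforces exactly this (Azure DevOps achieves the same with branch policies and required reviewers on paths). A minimal sketch, where `@contoso/governance-team` is a placeholder team name:

```shell
cd "$(mktemp -d)"
mkdir -p .github

# Any PR touching these paths requires approval from the governance team
# once "Require review from Code Owners" is enabled on the main branch.
cat > .github/CODEOWNERS <<'EOF'
/standards/  @contoso/governance-team
/templates/  @contoso/governance-team
EOF

cat .github/CODEOWNERS
```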
**7. Announce it to the team**
Send the repo link to your platform team, solution architects, and makers. Walk them through the structure. Show them how to find the naming conventions and checklist. Make it part of onboarding.
The Content of Each Standards File
Let me be concrete about what goes in these files. Generic “follow Microsoft best practices” documents don’t help anyone.
naming-conventions.md
```markdown
# Naming Conventions

## Cloud Flows
Pattern: {publisher-prefix}-{process}-{action}

Examples:
- contoso-vendor-approval-send-notification
- contoso-employee-onboarding-create-accounts
- contoso-invoice-processing-extract-data

## Solution Names
Pattern: {BusinessUnit}{Domain}

Examples:
- ContosoHROnboarding
- ContosoFinanceApprovals

## Environment Variables
Pattern: {publisher-prefix}_{CATEGORY}_{Name}

Examples:
- contoso_SMTP_SenderAddress
- contoso_API_BaseUrl
- contoso_FEATURE_EnableAutoApproval

## Connection References
Pattern: {publisher-prefix}_{ConnectorName}_{Purpose}

Examples:
- contoso_SharePoint_HRDocuments
- contoso_Outlook_Notifications
- contoso_Dataverse_MainConnection
```
review-checklist.md
```markdown
# Flow Review Checklist

## Before Promotion to Test
- [ ] Flow is solution-aware (not a standalone flow)
- [ ] Error handling: Try/Catch scope wraps all external calls
- [ ] Connection references used (no hardcoded connections)
- [ ] Environment variables for all configurable values
- [ ] Variables have descriptive names (not var1, temp, x)
- [ ] Run-after configured on all actions (not just "is successful")
- [ ] Flow description filled in (what it does, who owns it, what triggers it)
- [ ] No premium connectors unless pre-approved
- [ ] Concurrency settings reviewed (parallel vs sequential)
- [ ] Large data: pagination enabled where applicable
- [ ] No hardcoded URLs, emails, or IDs
- [ ] Tested with both success and failure scenarios
```
These are living documents. When you find a new pattern that causes issues, add it to the checklist. When a naming collision happens, refine the naming convention. The git history shows the evolution of your standards over time.
Avoiding Over-Engineering
The biggest risk isn’t building too little. It’s building too much on day one.
One team I advised created a governance repo with 30 Markdown files, complex folder hierarchies, and automation pipelines before anyone had committed a single real standard. Two months later, the repo was abandoned because the maintenance overhead exceeded the team's capacity.
Start small:
- Week 1: `standards/naming-conventions.md` and `standards/review-checklist.md`. That's it. Two files.
- Weeks 2-4: Use the checklist in actual reviews. Refine it based on what you find.
- Month 2: Add `inventory/flow-inventory.csv` with a manual export. Add `standards/error-handling-patterns.md`.
- Month 3: Automate the inventory export. Add solution exports for your most critical solutions.
- Month 4+: Add AI reviews, monthly reports, and templates as the team matures.
Each step should feel lightweight. If it feels heavy, you’re going too fast.
Making It Searchable
One underrated benefit of Markdown in git: it’s searchable from your IDE.
Clone the governance repo. Open it in VS Code. Ctrl+Shift+F to search across all files. Looking for the naming convention for environment variables? Search “Environment Variables.” Want to know if a specific connector is approved? Search the connector name in connector-approval-list.md.
Compare that to opening SharePoint, navigating to the right folder, opening a Word doc, and using Ctrl+F inside the document. Multiply by 10 team members doing this 5 times a week.
For teams that live in the terminal, grep -r "approval" standards/ finds every mention of “approval” across all standards files in under a second.
Power Automate Governance - The Enterprise Playbook
This article is part of a 10-part series:
- Naming Conventions That Scale
- Environment Strategy - Dev Test Prod
- Solution-Aware Flows
- Flow Inventory
- Pipelines - Dev to Prod
- CoE Starter Kit
- AI-Powered Flow Review
- Versioning and Source Control
- The Governance Repo
- Weekly Governance Digest
AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.