
Building Dataverse Solution ZIPs Programmatically: The Undocumented Guide

Learn to build a Dataverse solution ZIP from scratch, covering the two JSON formats, forward-slash trap, and undocumented XML workflow entries.

Alex Pechenizkiy 9 min read

Microsoft documents how to export a solution ZIP. They document how to import one. They document SolutionPackager for extracting and repacking. They even document the XSD schema for customizations.xml.

What they do not document is the internal structure of a solution ZIP containing cloud flows. Not the two incompatible JSON formats for flow definitions. Not the forward-slash requirement for ZIP entry paths. Not the XML schema for workflow entries. Not the connection reference transformation between formats.

I spent a week reverse-engineering all of this on the Meridian Performance Management project for Apex Federal Solutions. I built 14 notification flows as JSON outside the Power Automate designer, packaged them into a Dataverse solution ZIP, and imported them. This article is the documentation I wish had existed.

[Transformation diagram: PA Editor JSON format converting to Solution Export JSON format]

Why Build Solution ZIPs from Scratch?

You build a Dataverse solution ZIP programmatically when your flow definitions live outside the Power Automate designer. Version-controlled JSON in git, AI-generated flow definitions, or bulk flow generation all require a packaging pipeline that produces an importable artifact without manual export.

SolutionPackager is the right tool when you start from an exported solution. You export, unpack, edit, repack, import. Microsoft explicitly supports this workflow: extract, edit customizations.xml, repackage, import.

But SolutionPackager requires a prior export to work with. Three scenarios demand building a solution ZIP from scratch:

  1. Version-controlled flows. Flow JSON lives in git. In Versioning and Source Control, I described how to get flows into git. This article closes the loop: getting them back into Dataverse without ever opening the designer.

  2. AI-generated flow definitions. AI agents produce flow JSON in PA Editor format. A packaging pipeline converts and bundles them into an importable ZIP.

  3. Bulk flow generation. When you build 14 notification flows in parallel across AI agent threads, manual export/import is not viable. You need a pipeline that takes JSON files and produces a single deployable artifact.

On Meridian, all three scenarios applied simultaneously. The packaging pipeline was the most fragile and least documented part of the entire effort.

The Two JSON Formats Nobody Tells You About

This is the single most important thing in this article. Power Automate flow definitions exist in two distinct JSON formats. They look similar enough to confuse you. They are not interchangeable.

PA Editor format is what you get when you copy a flow definition from the designer or export a single flow:

{
  "$schema": "https://power-automate-tools.local/flow-editor.json#",
  "connectionReferences": {
    "shared_commondataserviceforapps": {
      "connectionName": "shared-commondataser-...",
      "connectionReferenceLogicalName": "mrd_sharedcommondataserviceforapps",
      "source": "Embedded",
      "id": "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps",
      "displayName": "Microsoft Dataverse",
      "iconUri": "https://...",
      "brandColor": "",
      "tier": "Premium"
    }
  },
  "definition": {
    "$schema": "https://schema.management.azure.com/.../workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "triggers": {},
    "actions": {}
  }
}

Solution export format is what lives inside a Dataverse solution ZIP:

{
  "properties": {
    "connectionReferences": {
      "shared_commondataserviceforapps": {
        "impersonation": {},
        "runtimeSource": "embedded",
        "connection": {
          "connectionReferenceLogicalName": "mrd_sharedcommondataserviceforapps"
        },
        "api": {
          "name": "shared_commondataserviceforapps"
        }
      }
    },
    "definition": {
      "$schema": "https://schema.management.azure.com/.../workflowdefinition.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {},
      "triggers": {},
      "actions": {}
    },
    "templateName": ""
  },
  "schemaVersion": "1.0.0.0"
}

Pasting solution export JSON into the PA Editor produces the error “Missing definition flow property.” Importing PA Editor JSON inside a solution ZIP produces silent failures or missing-flow errors.

| Aspect | PA Editor Format | Solution Export Format |
| --- | --- | --- |
| Top-level wrapper | None (flat structure) | Everything inside a `properties` object |
| Connection references | Full metadata: displayName, iconUri, brandColor, tier | Simplified: just connectionReferenceLogicalName and api name |
| Connection source field | `source: "Embedded"` with full provider path | `runtimeSource: "embedded"` with `impersonation: {}` |
| Schema version | Not present | `schemaVersion` at root level |
| Template name | Not present | `templateName` (empty string) |
| Definition block | Identical | Identical |
| Use case | Designer editing, clipboard, single flow export | Solution ZIP packaging for Dataverse import |

The definition block containing triggers, actions, and parameters is identical in both formats. This is the key insight. You can develop in PA Editor format (more readable, what AI generates naturally), then convert to solution export format for packaging. The conversion is mechanical: wrap in properties, simplify connection references, add schemaVersion and templateName.
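Because the conversion is mechanical, it fits in a few lines. Here is a minimal sketch based on the two samples above; the function name is my own, and any connection reference fields beyond those shown in the samples are assumptions:

```javascript
// Sketch: convert PA Editor format to solution export format.
// Assumes the shapes shown in the two samples above.
function toSolutionExportFormat(paEditorJson) {
  const connRefs = {};
  for (const [key, ref] of Object.entries(paEditorJson.connectionReferences || {})) {
    connRefs[key] = {
      impersonation: {},
      runtimeSource: 'embedded',
      connection: {
        connectionReferenceLogicalName: ref.connectionReferenceLogicalName,
      },
      api: { name: key }, // the reference key doubles as the API name
    };
  }
  return {
    properties: {
      connectionReferences: connRefs,
      definition: paEditorJson.definition, // identical in both formats
      templateName: '',
    },
    schemaVersion: '1.0.0.0',
  };
}
```

The displayName, iconUri, brandColor, and tier metadata from the PA Editor format is simply dropped; Dataverse does not need it at import time.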

Solution ZIP Anatomy

A Dataverse solution ZIP containing cloud flows has four parts:

solution.zip
  [Content_Types].xml
  customizations.xml
  solution.xml
  Workflows/
    Meridian-NTF-EMAIL-01-FormAssigned-{GUID}.json
    Meridian-NTF-EMAIL-02-ReadyForSignature-{GUID}.json
    Meridian-NTF-EMAIL-03-ReadyForAcknowledgment-{GUID}.json
    ...

Microsoft documents the three base files: [Content_Types].xml, customizations.xml, and solution.xml. They do not document the Workflows/ folder or its contents for cloud flows.

Here is what each file does:

  • [Content_Types].xml registers MIME types for the ZIP. Must include an entry for .json files or Dataverse will not process the workflow definitions.
  • customizations.xml is the component registry. Each cloud flow needs a <Workflow> entry with metadata. This file must conform to the CustomizationsSolution.xsd schema.
  • solution.xml holds solution identity, version number, publisher info, and a <RootComponents> list referencing every component by GUID.
  • Workflows/*.json contains the actual flow definitions in solution export format (not PA Editor format).
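For reference, a minimal [Content_Types].xml looks roughly like this. This is a sketch modeled on what exported solutions contain; verify the exact ContentType values against a real export from your environment before relying on them:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="xml" ContentType="text/xml" />
  <!-- Without this entry, Dataverse ignores the Workflows/*.json files -->
  <Default Extension="json" ContentType="application/octet-stream" />
</Types>
```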

Deterministic GUIDs with UUIDv5

Every Power Automate flow in a solution needs a GUID. That GUID is the workflowid - the stable identifier across all imports. Dataverse uses it to decide whether to create a new flow or update an existing one.

Random GUIDs mean every build creates duplicate flows. This is not a theoretical risk. It happens on the first re-import.

UUID v5 (SHA-1 based, RFC 4122) solves this. Same input always produces the same GUID:

const { v5: uuidv5 } = require('uuid');

// DNS namespace from RFC 4122
const NAMESPACE = '6ba7b810-9dad-11d1-80b4-00c04fd430c8';

// Deterministic: same input = same GUID, every time
const flowGuid = uuidv5('Meridian-NTF-EMAIL-01-FormAssigned', NAMESPACE);
// flowGuid is stable across builds

Microsoft themselves use UUID v5 for deterministic GUID generation in WinRT. The pattern is well-established.

For CI/CD, this makes the build idempotent. Import the ZIP once or ten times and you get the same result. No orphaned duplicate flows accumulating in the environment. I used the flow’s tag-based name (like Meridian-NTF-EMAIL-01-FormAssigned) as the input string. The tag naming convention from the tag-based architecture pays off here: each flow has a unique, human-readable identifier that doubles as the GUID seed.

customizations.xml Workflow Entries

Each cloud flow needs a <Workflow> entry in customizations.xml. This is the most detail-sensitive part of the entire process. One wrong value and the import silently fails.

<Workflow WorkflowId="{a1b2c3d4-e5f6-7890-abcd-ef1234567890}"
          Name="Meridian | [NTF-EMAIL-01] Form Assigned -- Daily Digest">
  <JsonFileName>/Workflows/Meridian-NTF-EMAIL-01-FormAssigned-A1B2C3D4-E5F6-7890-ABCD-EF1234567890.json</JsonFileName>
  <Type>1</Type>
  <Subprocess>0</Subprocess>
  <Category>5</Category>
  <Mode>0</Mode>
  <Scope>4</Scope>
  <OnDemand>0</OnDemand>
  <TriggerOnCreate>0</TriggerOnCreate>
  <TriggerOnDelete>0</TriggerOnDelete>
  <AsyncAutodelete>0</AsyncAutodelete>
  <SyncWorkflowLogOnFailure>0</SyncWorkflowLogOnFailure>
  <StateCode>1</StateCode>
  <StatusCode>2</StatusCode>
  <RunAs>1</RunAs>
  <IsTransacted>1</IsTransacted>
  <IntroducedVersion>1.0.0.12</IntroducedVersion>
  <IsCustomizable>1</IsCustomizable>
  <BusinessProcessType>0</BusinessProcessType>
  <IsCustomProcessingStepAllowedForOtherPublishers>1</IsCustomProcessingStepAllowedForOtherPublishers>
  <PrimaryEntity>none</PrimaryEntity>
  <LocalizedNames>
    <LocalizedName description="Meridian | [NTF-EMAIL-01] Form Assigned -- Daily Digest"
                   languagecode="1033" />
  </LocalizedNames>
</Workflow>

The values that matter most:

  • Category="5" marks this as a modern cloud flow. This is verified against MS Learn. Classic workflows are 0, business rules are 2, desktop flows are 6. Get this wrong and your flow imports as the wrong type or does not appear at all.
  • StateCode="1" / StatusCode="2" means the flow activates on import. StateCode="0" would import it as draft/off.
  • Type="1" is a definition (not an activation record or template).
  • Scope="4" sets organization scope.
  • JsonFileName must use forward slashes and match the actual file name in the ZIP exactly. More on this in the next section.
  • PrimaryEntity="none" because cloud flows are not bound to a specific Dataverse table.

The WorkflowId in the XML must match the GUID in the JSON file name and the RootComponent entry in solution.xml. Three places, same GUID, no mismatches.
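Since the entry is pure boilerplate around the name and GUID, it is worth generating rather than hand-editing. A sketch (the helper name is mine, and it emits only the fields called out above; a real entry needs the full element set from the sample):

```javascript
// Hypothetical helper: emit an abbreviated <Workflow> entry for a cloud flow.
// Fixed values mirror the sample above: Type 1, Category 5, Scope 4,
// StateCode 1 / StatusCode 2 (activate on import), PrimaryEntity none.
function workflowEntry(name, guid, version) {
  const esc = (s) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
     .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
  // Forward slashes, and the GUID embedded in the file name
  const file = `/Workflows/${esc(name)}-${guid.toUpperCase()}.json`;
  return `<Workflow WorkflowId="{${guid}}" Name="${esc(name)}">
  <JsonFileName>${file}</JsonFileName>
  <Type>1</Type>
  <Category>5</Category>
  <Scope>4</Scope>
  <StateCode>1</StateCode>
  <StatusCode>2</StatusCode>
  <PrimaryEntity>none</PrimaryEntity>
  <IntroducedVersion>${version}</IntroducedVersion>
  <LocalizedNames>
    <LocalizedName description="${esc(name)}" languagecode="1033" />
  </LocalizedNames>
</Workflow>`;
}
```

Generating the entry from the same (name, GUID) pair used for the JSON file name and the RootComponent list is what keeps the three locations in sync by construction.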

The Forward-Slash Trap

This cost me more debugging time than every other issue in the packaging pipeline combined.

Here is the scenario. You build the solution ZIP on Windows. You use PowerShell’s Compress-Archive or .NET’s ZipFile.CreateFromDirectory. Your ZIP file looks correct. You open it, the Workflows/ folder is there, all 14 JSON files are present.

You import into Dataverse. Error: “Xaml file is missing from import zip file.”

You open the ZIP again. The file is right there. You check the file name. It matches. You check the content. It is valid JSON. You rebuild the ZIP. Same error. You spend hours comparing your ZIP to an exported solution ZIP. Everything looks identical.

It is not identical.

PowerShell on Windows creates ZIP entries with backslash path separators: Workflows\FlowName.json. The customizations.xml <JsonFileName> field uses forward slashes: /Workflows/FlowName.json. Dataverse does an exact string match between them. Workflows\FlowName.json does not equal Workflows/FlowName.json.

The error message says the file is “missing.” It is not missing. It is right there. The path separator is wrong.

The fix is to use a ZIP library that gives you explicit control over entry paths. On Windows, that means avoiding any tool that inherits the OS path separator. I switched to Node.js with the archiver package. The backslash problem disappeared immediately.
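A cheap guardrail: before importing, check the ZIP's entry names for backslashes. The check itself is library-agnostic; how you enumerate the entries depends on which ZIP library you use:

```javascript
// Return any ZIP entry names that use '\' instead of '/'.
// Entry names come from whatever ZIP library your pipeline uses.
function findBackslashEntries(entryNames) {
  return entryNames.filter((name) => name.includes('\\'));
}

// Example: the second entry is what Compress-Archive on Windows produces
const bad = findBackslashEntries([
  'customizations.xml',
  'Workflows\\Meridian-NTF-EMAIL-01-FormAssigned.json',
]);
if (bad.length > 0) {
  console.error('ZIP entries with backslash separators:', bad);
}
```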

The Node.js Packaging Pipeline

Here is the complete pipeline that builds a correct solution ZIP from a folder of source files:

const archiver = require('archiver');
const fs = require('fs');
const path = require('path');

const output = fs.createWriteStream('solution.zip');
const archive = archiver('zip', { zlib: { level: 9 } });
archive.pipe(output);

// Key: build ZIP paths with '/' explicitly, never path.join()
function addDir(dir, prefix) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);       // OS path for reading
    const zipPath = prefix
      ? prefix + '/' + entry.name                  // ZIP path with forward slash
      : entry.name;
    if (entry.isDirectory()) {
      addDir(full, zipPath);
    } else {
      archive.file(full, { name: zipPath });
    }
  }
}

addDir('./solution-folder', '');
archive.finalize();

The critical line is prefix + '/' + entry.name. This constructs ZIP-internal paths with forward slashes regardless of the operating system. Do not use path.join() for the ZIP path. path.join() on Windows produces backslashes. That is exactly the bug you are trying to avoid.

Your source folder structure should look like this before packaging:

solution-folder/
  [Content_Types].xml
  customizations.xml
  solution.xml
  Workflows/
    Meridian-NTF-EMAIL-01-FormAssigned-A1B2C3D4-E5F6-7890-ABCD-EF1234567890.json
    Meridian-NTF-EMAIL-02-ReadyForSignature-B2C3D4E5-F6A7-8901-BCDE-F12345678901.json
    ...

Each JSON file in Workflows/ must be in solution export format (with the properties wrapper), not PA Editor format.

solution.xml Updates

Two things to update on every build:

Version number. Dataverse ignores imports at the same version. Bump it every time:

<Version>1.0.0.12</Version>
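The bump is easy to automate. A minimal sketch (my own helper, assuming a single four-part `<Version>` element in solution.xml):

```javascript
// Increment the last segment of <Version>a.b.c.d</Version> in solution.xml.
// Assumes exactly one <Version> element with a four-part version string.
function bumpSolutionVersion(solutionXml) {
  return solutionXml.replace(
    /<Version>(\d+)\.(\d+)\.(\d+)\.(\d+)<\/Version>/,
    (_, a, b, c, d) => `<Version>${a}.${b}.${c}.${Number(d) + 1}</Version>`
  );
}

bumpSolutionVersion('<Version>1.0.0.12</Version>');
// → '<Version>1.0.0.13</Version>'
```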

RootComponents. Each flow needs a root component entry. Type 29 is workflows/cloud flows:

<RootComponents>
  <RootComponent type="29" id="{a1b2c3d4-e5f6-7890-abcd-ef1234567890}" behavior="0" />
  <RootComponent type="29" id="{b2c3d4e5-f6a7-8901-bcde-f12345678901}" behavior="0" />
  <!-- one entry per flow -->
</RootComponents>

The id must match the WorkflowId in customizations.xml and the GUID in the JSON file name. Three-way consistency.
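That three-way consistency is also worth verifying in the build. A sketch (names are mine; the inputs are plain string arrays, and how you extract them from the XML and the file listing is up to your pipeline):

```javascript
// Cross-check the three places each GUID must appear:
// customizations.xml WorkflowIds, Workflows/*.json file names,
// and solution.xml RootComponent ids. Returns GUIDs missing from any set.
function checkGuidConsistency(workflowIds, jsonFileNames, rootComponentIds) {
  const norm = (g) => g.replace(/[{}]/g, '').toLowerCase();
  const fromFiles = jsonFileNames
    .map((f) => f.match(/([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})\.json$/i))
    .filter(Boolean)
    .map((m) => m[1].toLowerCase());
  const sets = [workflowIds.map(norm), fromFiles, rootComponentIds.map(norm)]
    .map((list) => new Set(list));
  const all = new Set([...sets[0], ...sets[1], ...sets[2]]);
  // A GUID passes only if it appears in all three sets
  return [...all].filter((g) => !sets.every((s) => s.has(g)));
}
```

Fail the build if the returned array is non-empty; a mismatch caught here is far cheaper than a silent import failure.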

Import and Verification

Import the ZIP as unmanaged in make.powerapps.com under Solutions. The higher version number causes Dataverse to overlay the existing solution, adding new flows and updating existing ones matched by workflowid.

Post-import checklist:

  1. All flows appear in the solution. Count them. On Meridian, I expected 14 notification flows and verified all 14 showed up.
  2. Connection references resolve. If the target environment has different connection reference logical names, the import will prompt for mapping. Get this wrong and every flow fails at runtime.
  3. Auto-activated flows are on. Flows with StateCode="1" should show status “On” in the solution. If they show “Off,” check the StatusCode value.
  4. Run a test execution. Pick one flow and trigger it manually. Verify the FetchXML queries work, the email sends, and the data updates land correctly.
  5. Check flow ownership. Flows import owned by the importing user. If you need a service account to own them, reassign after import.

Once you can build solution ZIPs programmatically, the next step is Pipelines to automate the dev-to-prod promotion. The programmatic ZIP becomes the artifact that feeds into the pipeline.

The Full Picture

This article covers the packaging gap between “flows as JSON in git” and “flows running in Dataverse.” The approach works because Microsoft explicitly supports editing customizations.xml and repackaging solutions.

The hard parts are all undocumented: the two JSON formats, the forward-slash requirement, the specific XML values for cloud flow workflow entries. Now you have the reference.

For solution-aware flows, this pipeline is what makes spec-driven development possible. Write specs, generate JSON, package into a ZIP, import. No designer required.


Spec-Driven Power Platform Series

This article is part of a series on building Power Automate solutions with specs, governance, and AI:

  1. Tag-Based Flow Architecture - How 3-letter prefixes make 24 flows manageable
  2. Spec-First Development - Why specs should exist before the designer opens
  3. Notification Architecture - Notifications that cannot break business logic
  4. FetchXML in Power Automate - When OData $filter is not enough
  5. Building Solution ZIPs - The undocumented packaging guide
  6. What AI Gets Wrong - And why human correction is the point
  7. 14 Flows in 10 Minutes - The full story

AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.
