On-premises NAS migration to Azure File Sync via Data Box
Here’s a concise, structured summary of the article “Use Data Box to migrate from Network Attached Storage (NAS) to a hybrid cloud deployment by using Azure File Sync.” Key links and original meanings are preserved.
Summary — goal and scope
Purpose: Migrate data from NAS to Windows Server and Azure Files so you can use Azure File Sync for a hybrid cloud deployment (on-premises cache + cloud storage).
Applies to: SMB file shares (Azure File Sync and Azure Files support SMB; NFS is not supported for Azure File Sync).
Migration route: NAS ⇒ Azure Data Box ⇒ Azure file share ⇒ sync with Windows Server using Azure File Sync.
Key constraint: Azure File Sync works only with Direct Attached Storage (DAS) on the server (not NAS); therefore you must copy NAS data to Windows Server (DAS) first.
Phased process (high level)
Plan and map shares (Phase 1)
Determine how many Azure file shares you need (a single Windows Server instance/cluster can sync up to 30 Azure file shares).
Options if you need more than 30 shares, or to optimize the mapping:
Share grouping: combine multiple on-prem shares under one root folder and sync that to a single Azure file share.
Volume sync: sync a volume root (all subfolders go to same Azure file share).
Create a deployment map (map on-prem folders → Azure file shares) and balance the number of items per share (best practice: stay below roughly 20–30 million items per share; tested up to 100 million).
Consider storage account limits and performance: a storage account is a scale target for IOPS and throughput, so avoid placing your hottest shares in the same account; a limit of 250 storage accounts per subscription per region applies.
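As a planning aid, the share-count and item-count guidance above can be checked programmatically. This is an illustrative Python sketch, not part of the article; the `validate_deployment_map` helper and its inputs are hypothetical, and the limits are the ones quoted in this summary:

```python
# Illustrative check of a proposed deployment map against the Azure File Sync
# scale guidance summarized above (verify current limits in the Azure docs).
MAX_SHARES_PER_SERVER = 30                 # one Windows Server instance/cluster
RECOMMENDED_ITEMS_PER_SHARE = 30_000_000   # stay below ~20-30 million items

def validate_deployment_map(share_items: dict[str, int]) -> list[str]:
    """share_items maps Azure file share name -> estimated files + folders."""
    warnings = []
    if len(share_items) > MAX_SHARES_PER_SERVER:
        warnings.append(
            f"{len(share_items)} shares exceed the {MAX_SHARES_PER_SERVER}-share "
            "limit for a single server; add servers or group shares."
        )
    for share, items in share_items.items():
        if items > RECOMMENDED_ITEMS_PER_SHARE:
            warnings.append(
                f"Share '{share}' has ~{items:,} items; consider splitting it."
            )
    return warnings

# Example: two shares, one over the recommended item count.
print(validate_deployment_map({"hr-data": 5_000_000, "archive": 45_000_000}))
```

Running this on each candidate mapping helps keep the deployment map within the per-server and per-share guidance before any storage is provisioned.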
Deploy Azure storage resources (Phase 2)
Provision storage accounts and Azure file shares according to your mapping.
Best practice: dedicate storage accounts for high-activity file shares (one file share per storage account when possible).
Ensure region of storage accounts matches the Storage Sync Service region.
Note: 100-TiB large file shares restrict redundancy options to LRS and ZRS; the default file share quota is 5 TiB, so follow the linked guidance to create larger shares.
Choose Data Box options & quantity (Phase 3)
Map storage accounts and target shares to Data Box device limits before ordering. Key limits:
Each Data Box appliance can upload to up to 10 storage accounts.
Data Box Disk: 1–5 SSDs, 8 TiB each (max ~40 TiB raw; usable ~20% less).
Data Box (appliance): typical choice, ~80 TiB usable.
Two Data Boxes can upload to the same storage account, but do not split a single file share across multiple Data Boxes.
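The capacity constraint above can be sketched as a simple bin-packing estimate when deciding how many devices to order. This Python sketch is illustrative only: it models just the roughly 80 TiB of usable capacity per Data Box and the no-split rule, not the 10-storage-account-per-device limit, and the helper name is hypothetical:

```python
# Rough sizing sketch: pack Azure file shares onto Data Box appliances
# (~80 TiB usable each) without splitting any single share across devices.
USABLE_TIB_PER_DATA_BOX = 80

def estimate_data_boxes(share_sizes_tib: list[float]) -> int:
    """First-fit-decreasing bin packing; returns the device count needed."""
    boxes: list[float] = []  # remaining capacity per already-ordered device
    for size in sorted(share_sizes_tib, reverse=True):
        if size > USABLE_TIB_PER_DATA_BOX:
            raise ValueError(
                f"Share of {size} TiB exceeds one device; revise the share mapping."
            )
        for i, free in enumerate(boxes):
            if size <= free:
                boxes[i] -= size
                break
        else:
            boxes.append(USABLE_TIB_PER_DATA_BOX - size)
    return len(boxes)

print(estimate_data_boxes([60, 20, 45, 30]))  # -> 2
```

First-fit-decreasing is not optimal in general, but it gives a conservative upper bound that is usually good enough for an order quantity.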
Copy onto Data Box (Phase 5)
Set up Data Box per Azure docs:
Set up Data Box: https://docs.azure.cn/en-us/databox/data-box-quickstart-portal
Set up Data Box Disk: https://docs.azure.cn/en-us/databox/data-box-disk-quickstart-portal
Use Robocopy rather than default Data Box copy tools for full fidelity (or use Data Box data copy service if configured properly and targeting Azure Files).
Data Box exposes pre-provisioned SMB shares for the storage accounts you specified. For standard accounts there are three SMB shares — only those ending with _AzFiles are relevant for file migration.
Recommended Robocopy command (preserves fidelity): robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<FilePathAndName>
Explanation of major Robocopy switches is included in the article (multithreading, /MIR, /COPY flags, /LFSM for tiered targets, etc.).
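When scripting one copy job per share, the recommended command can be assembled programmatically. This Python sketch is illustrative; the `build_robocopy_command` helper is hypothetical, and the switches simply mirror the command quoted above:

```python
# Assemble the recommended Robocopy invocation (from the command above) as an
# argument list, e.g. for subprocess.run on Windows.
def build_robocopy_command(source: str, dest: str, log_file: str) -> list[str]:
    return [
        "robocopy", source, dest,
        "/MT:20",               # 20 copy threads
        "/R:2", "/W:1",         # 2 retries, 1-second wait between retries
        "/B",                   # backup mode (bypasses ACL restrictions)
        "/MIR",                 # mirror source to target (propagates deletes)
        "/IT",                  # include "tweaked" files
        "/COPY:DATSO",          # data, attributes, timestamps, security, owner
        "/DCOPY:DAT",           # directory data, attributes, timestamps
        "/NP", "/NFL", "/NDL",  # quieter output for long-running jobs
        "/XD", "System Volume Information",
        f"/UNILOG:{log_file}",
    ]

cmd = build_robocopy_command(r"\\nas\share1", r"D:\shares\share1",
                             r"D:\logs\share1.log")
print(" ".join(cmd))
```

Building the argument list once and reusing it for every share keeps the bulk copy and all later delta runs consistent.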
Deploy Storage Sync Service (Phase 6)
Create a Storage Sync Service in Azure in the same region as your storage accounts.
Best practice: use a single Storage Sync Service for servers that may sync the same set of files.
Place sync and storage resources in a dedicated resource group for management.
Follow Storage Sync Service deployment guidance in the linked article.
Install Azure File Sync agent & register server (Phase 7)
Disable Internet Explorer Enhanced Security Configuration per deployment guide.
Install PowerShell modules:
Install-Module -Name Az -AllowClobber
Install-Module -Name Az.StorageSync
Install the latest Azure File Sync agent and register the server to your Storage Sync Service.
If proxy/firewall restrictions apply, configure proxy or firewall rules; a network connectivity report (post-install) lists required endpoints.
Agent download: https://aka.ms/AFS/agent
Configure Azure File Sync on server (Phase 8)
In Azure portal, create a sync group per Azure file share (cloud endpoint) and add server endpoints (local folder paths).
Turn on cloud tiering and choose "Namespace only" initial download when local disk cannot hold all data.
Wait for namespace enumeration to complete before copying more data (see next step).
Wait for namespace to appear (Phase 9)
Confirm initial cloud namespace download is complete by checking Event Viewer on the server:
Applications and Services Logs → Microsoft → FileSync → Agent
Look for the most recent Event ID 9102 (a sync session completed) showing the download direction and HResult = 0.
Wait for two consecutive 9102 events indicating download completion before proceeding.
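The completion check above can be expressed as simple logic over parsed events. This Python sketch is illustrative only: on a real server you would read the log with Event Viewer or Get-WinEvent, and the dictionary field names here are hypothetical:

```python
# Illustrative logic for the "two consecutive 9102 events" check, applied to
# events already parsed from the FileSync/Agent log (field names assumed).
def namespace_download_complete(events: list[dict]) -> bool:
    """events are oldest-first; look for two consecutive Event ID 9102 entries
    reporting a download-direction sync session with HResult = 0."""
    streak = 0
    for ev in events:
        ok = (ev.get("id") == 9102
              and ev.get("direction") == "download"
              and ev.get("hresult") == 0)
        streak = streak + 1 if ok else 0
        if streak >= 2:
            return True
    return False

sample = [
    {"id": 9102, "direction": "download", "hresult": -2134375906},  # failed session
    {"id": 9102, "direction": "download", "hresult": 0},
    {"id": 9102, "direction": "download", "hresult": 0},
]
print(namespace_download_complete(sample))  # -> True
```

A single successful 9102 event is not enough; requiring two in a row guards against proceeding while sync sessions are still alternating between success and failure.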
Run Robocopy from the NAS to Windows Server and cut over (Phase 10)
After namespace download, run Robocopy (same recommended command) from NAS → Windows Server folder that is a server endpoint for Azure File Sync.
Recommended approach:
Initial Robocopy run: moves the bulk of the data.
Repeat Robocopy runs to catch up on deltas; with /MIR, subsequent runs copy only changes and finish faster.
When you can accept downtime, block user access to the NAS shares, run a final Robocopy to capture the last changes, then create the SMB share on Windows Server and point clients/DFS namespaces to it.
Important notes:
Windows Server 2019 has known Robocopy regressions with /MIR against tiered targets; if you hit issues, run the Robocopy phase from Windows Server 2016 (Windows Server 2022 is recommended overall).
If the local server volume is smaller than the migrated data set, use cloud tiering to free space; Robocopy can outrun upload and tiering and fail with "Volume full", so sequence or throttle jobs to avoid this.
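The sequencing guidance above amounts to a retry loop: rerun the copy until the delta is small, and back off when the volume fills so cloud tiering can free space. This Python sketch is illustrative only; `run_copy` is a placeholder for invoking Robocopy, and the thresholds and wait times are assumptions:

```python
# Sketch of the repeated-run cutover pattern with a volume-full backoff.
import time

def copy_until_converged(run_copy, max_runs=10, delta_threshold=1000, wait_s=3600):
    """run_copy() returns (exit_ok, files_copied, volume_full)."""
    for _ in range(max_runs):
        ok, files_copied, volume_full = run_copy()
        if volume_full:
            time.sleep(wait_s)   # let hourly cloud tiering free space, then retry
            continue
        if ok and files_copied < delta_threshold:
            return True          # deltas are small; ready for the final cutover run
    return False

# Fake copy job for illustration: deltas shrink across runs.
deltas = iter([500_000, 20_000, 300])
print(copy_until_converged(lambda: (True, next(deltas), False), wait_s=0))  # -> True
```

Once this loop reports convergence, the remaining delta is small enough that the final, access-blocked Robocopy pass completes within the planned downtime window.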
Key cautions, tips, and constraints
Azure File Sync sync groups: a cloud endpoint (Azure file share) can be configured in only one sync group; a sync group supports one server endpoint per registered server.
Server limit: one server can sync up to 30 Azure file shares; additional servers add more sync capacity.
Scale vector: number of items (files + folders) is the most important limit for Azure File Sync. Try to keep per share items below ~20–30M; tested up to 100M.
Robocopy specifics: use the provided Robocopy switches to preserve ACLs, timestamps, owner info, and to mirror source to target. Use /LFSM for tiered targets when applicable, but be aware of compatibility and OS-specific issues.
Cross-tenant limitation: Storage Sync Service, server resource, managed identity, and RBAC on storage account must be in same Microsoft Entra tenant — cross-tenant topologies are not supported.
Deprecated workflow: older “offline data transfer” method is deprecated since agent v13 — follow the updated steps in this article.
Troubleshooting
Most common issue: Robocopy fails with "Volume full" on Windows Server because cloud tiering runs hourly. Let tiering free up space and re-run Robocopy. /MIR ensures subsequent runs copy only deltas.
For Azure File Sync specific troubleshooting, follow the linked troubleshooting article.
Data Box resources and guidance
Data Box Disk: https://docs.azure.cn/en-us/databox/data-box-disk-overview
Data Box appliance: https://docs.azure.cn/en-us/databox/data-box-overview
Data Box setup and copy docs: links included above in Phase 5.
Next steps / further reading (links preserved)
Migration overview: https://docs.azure.cn/en-us/storage/files/storage-files-migration-overview
Azure File Sync planning: https://docs.azure.cn/en-us/storage/file-sync/file-sync-planning
Create a file share: https://docs.azure.cn/en-us/storage/files/storage-how-to-create-file-share
Troubleshoot Azure File Sync: https://learn.microsoft.com/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/storage/file-sync/toc.json
Last updated: 07/30/2025