On-premises NAS migration to Azure File Sync via Data Box
This article describes planning and implementing a migration from Network Attached Storage (NAS) to a hybrid cloud deployment using Azure Data Box to load data into Azure file shares, then using Azure File Sync to provide a cached on‑premises Windows Server view of that data. It focuses on preserving fidelity, minimizing downtime, and meeting Azure File Sync scale limits and best practices.
Key constraints and scope
Source: NAS (SMB/NFS). Azure File Sync works only with Windows Server (DAS), not directly with NAS—so files must be moved to Windows Server first.
Migration route: NAS ⇒ Azure Data Box ⇒ Azure file share ⇒ Windows Server with Azure File Sync.
Goal: hybrid deployment with on‑premises caching using Azure File Sync.
Supported cloud file share types: SMB (GPv2, FileStorage). NFS is not supported for Azure File Sync.
Migration steps (high level)
Determine mapping (how many Azure file shares)
One Windows Server (or cluster) can sync up to 30 Azure file shares.
Options when you have many on‑prem shares:
Share grouping: combine multiple on‑prem shares under a common root folder and sync that to one Azure file share.
Volume sync: sync a volume root to a single Azure file share (consider item limits).
Keep item counts per share practical: Azure File Sync is tested with up to 100 million items per share, but keeping each share below 20–30 million items is recommended.
Create a namespace-to-share mapping table (template link provided).
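These mapping constraints can be checked mechanically. The sketch below (plain Python; the share names and item counts are illustrative placeholders) validates a candidate mapping against the 30-shares-per-server limit and the recommended item count per share:

```python
# Sketch: validate a namespace-to-share mapping against the Azure File Sync
# limits quoted above. Share names and item counts are illustrative.

MAX_SHARES_PER_SERVER = 30            # one Windows Server (or cluster) syncs <= 30 shares
RECOMMENDED_ITEMS_PER_SHARE = 30_000_000

def validate_mapping(mapping):
    """mapping: {azure_file_share: [(onprem_share, item_count), ...]}"""
    problems = []
    if len(mapping) > MAX_SHARES_PER_SERVER:
        problems.append(
            f"{len(mapping)} Azure file shares exceeds the "
            f"{MAX_SHARES_PER_SERVER}-share-per-server limit"
        )
    for share, sources in mapping.items():
        items = sum(count for _, count in sources)
        if items > RECOMMENDED_ITEMS_PER_SHARE:
            problems.append(f"{share}: {items:,} items exceeds the recommended maximum")
    return problems

mapping = {
    "finance-share": [(r"\\nas\finance", 4_000_000)],
    # share grouping: two on-prem shares mapped to one Azure file share
    "hr-share": [(r"\\nas\hr", 1_500_000), (r"\\nas\recruiting", 800_000)],
}
print(validate_mapping(mapping))  # [] -> the plan fits the limits
```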
Deploy Azure storage resources
Provision storage accounts and Azure file shares according to your mapping.
Best practice: use one file share per storage account for shares that need high performance; archival or low-activity shares can be grouped into a shared storage account.
Ensure the storage account region matches the region of your Storage Sync Service.
Note: default file share size is 5 TiB; follow the Create an Azure file share link to make large (100 TiB) shares and consider redundancy options.
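As an illustration of the sizing decision above, a small helper can flag which shares need the large (100 TiB) file share option. The 30% growth factor is an assumption; the 5 TiB / 100 TiB figures mirror the text:

```python
# Sketch: choose a file share quota (GiB) from projected data size, flagging
# when the large-file-share (100 TiB) option is needed. The 30% growth
# factor is an assumption; adjust it for your data.

DEFAULT_LIMIT_GIB = 5 * 1024    # 5 TiB default share size
LARGE_LIMIT_GIB = 100 * 1024    # 100 TiB with large file shares enabled

def plan_share_quota(projected_gib, growth_factor=1.3):
    needed = int(projected_gib * growth_factor)
    if needed > LARGE_LIMIT_GIB:
        raise ValueError("exceeds 100 TiB per share; split the namespace")
    return {"quota_gib": needed, "needs_large_file_shares": needed > DEFAULT_LIMIT_GIB}

print(plan_share_quota(8 * 1024))  # 8 TiB of data needs the large-file-share option
```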
Plan Data Box usage
Any Data Box appliance can write to up to 10 storage accounts.
Choose Data Box Disk (up to ~40 TiB raw delivered across disks) or Data Box (rugged appliance, ~80 TiB usable).
Two Data Box devices may write to the same storage account, but do not split a single share’s contents across multiple Data Boxes.
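A rough packing of storage accounts onto Data Box orders, honoring the 10-accounts-per-device limit and the ~80 TiB usable capacity mentioned above, can be sketched as follows (account names and sizes are illustrative):

```python
# Sketch: assign whole storage accounts to Data Box orders, respecting the
# two constraints above: <= 10 storage accounts per device and the device's
# usable capacity (~80 TiB for Data Box). Because accounts are assigned
# whole, no single share's contents are split across devices.

MAX_ACCOUNTS_PER_BOX = 10
DATA_BOX_USABLE_TIB = 80

def plan_data_boxes(account_sizes_tib):
    """account_sizes_tib: {storage_account: total TiB to ship}."""
    boxes, current, used = [], [], 0.0
    for account, size in sorted(account_sizes_tib.items(), key=lambda kv: -kv[1]):
        if size > DATA_BOX_USABLE_TIB:
            # two devices may target the same account; order extra devices
            raise ValueError(f"{account} exceeds one device; order multiple devices")
        if len(current) == MAX_ACCOUNTS_PER_BOX or used + size > DATA_BOX_USABLE_TIB:
            boxes.append(current)
            current, used = [], 0.0
        current.append(account)
        used += size
    if current:
        boxes.append(current)
    return boxes

print(plan_data_boxes({"acct1": 50, "acct2": 40, "acct3": 20}))
# -> [['acct1'], ['acct2', 'acct3']]
```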
Provision Windows Server(s) on‑premises
Use Windows Server 2022 (minimum 2012 R2 supported); size CPU/RAM based on number of items being synced.
Use Direct Attached Storage (DAS)—NAS is not supported for Azure File Sync server endpoints.
Consider higher performance servers for large namespaces or initial sync performance.
Copy data to Data Box
Follow Data Box setup docs (links provided).
Use Robocopy for full fidelity (Data Box copy tools may not preserve everything).
Data Box exposes pre-provisioned SMB shares; for standard accounts look for shares ending with _AzFiles.
Use the recommended Robocopy command (shown in the article) rather than the Data Box sample command.
Deploy Storage Sync Service (cloud resource)
Deploy a Storage Sync Service in the same region as your storage accounts.
Best practice: one Storage Sync Service for servers that may sync the same files; create multiple only for fully isolated topologies.
Create a resource group and deploy in a nearby region.
Install Azure File Sync agent and register server
Install the Azure File Sync agent on the Windows Server(s); register them to your Storage Sync Service.
Install the required PowerShell modules:
Install-Module -Name Az -AllowClobber
Install-Module -Name Az.StorageSync
Ensure internet/proxy access and disable IE Enhanced Security Configuration during deployment as documented.
Configure Azure File Sync (create sync groups & server endpoints)
In the portal, create a sync group for each Azure file share (cloud endpoint).
Add server endpoint(s) that point to the local folder(s) on the Windows Server.
Enable cloud tiering and select "Namespace only" in initial download to allow local cache with full namespace while minimizing local storage consumption.
Wait for the cloud namespace to enumerate on the server before copying more data.
Run Robocopy from your NAS to the Windows Server (catch‑up and cutover)
After the namespace is present on the server, run Robocopy jobs from the NAS to the corresponding Windows Server folders to copy only deltas and catch up changes since the Data Box copy.
Recommended Robocopy command used in the article (source before destination, per Robocopy syntax): robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<LogFilePath>
Notes and cautions:
If you must avoid the regressed /MIR behavior on Windows Server 2019 when the target volume uses cloud tiering, run Robocopy from an intermediate Windows Server 2016 machine (or apply the latest patches, per the note below).
/MIR mirrors source to target—ensure source/target paths are matched exactly to avoid deletions.
Cloud tiering will free local space hourly; plan Robocopy runs so you don't outpace tiering and run out of local space.
For final cutover: block user access to NAS, run Robocopy once more, then switch shares/DFS targets to the Windows Server shares.
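The "don't outpace tiering" caution above reduces to simple arithmetic. This sketch estimates how long you can copy before the local volume fills; all rates are illustrative assumptions, so measure your own Robocopy throughput and tiering rate:

```python
# Sketch: estimate whether a Robocopy run will outpace hourly cloud tiering
# and exhaust local disk. All rates and sizes here are illustrative
# assumptions; substitute measured values from your environment.

def hours_until_disk_full(free_gib, copy_gib_per_hour, tiered_gib_per_hour):
    """Return hours until the volume fills, or None if tiering keeps up."""
    net = copy_gib_per_hour - tiered_gib_per_hour
    if net <= 0:
        return None  # tiering frees space at least as fast as Robocopy writes
    return free_gib / net

# 500 GiB free, copying 200 GiB/h while tiering frees ~120 GiB/h:
print(hours_until_disk_full(500, 200, 120))  # 6.25 hours of headroom
```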
Robocopy flags — highlights
/MT:n — multithreaded copies (values of 8–20 are a good balance for initial runs).
/B — backup mode (requires elevated console).
/MIR — mirror (use cautiously; must match source/target structure).
/IT, /COPY:DATSO, /DCOPY:DAT — preserve metadata, ACLs, timestamps, ownership.
/LFSM — low free space mode (use only with tiered targets; not supported for remote SMB shares).
/UNILOG — Unicode log output.
Important: prefer Windows Server 2022; if using Server 2019, apply KB5005103 or the latest patches.
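When running many catch-up jobs, it helps to template the recommended command so every job uses an identical flag set. A minimal sketch (paths and log file are placeholders you supply):

```python
# Sketch: assemble the article's recommended Robocopy invocation for a given
# source/destination pair. The flag set mirrors the highlights above; the
# log file path is a placeholder you supply.

def robocopy_command(source, dest, log_file, threads=20):
    flags = [
        f"/MT:{threads}", "/R:2", "/W:1", "/B", "/MIR", "/IT",
        "/COPY:DATSO", "/DCOPY:DAT", "/NP", "/NFL", "/NDL",
        "/XD", '"System Volume Information"', f"/UNILOG:{log_file}",
    ]
    return " ".join(["robocopy", source, dest] + flags)

print(robocopy_command(r"\\nas\finance", r"D:\shares\finance", r"C:\logs\finance.log"))
```

Generating the commands this way also makes the /MIR caution easier to honor: source and destination are paired once, in one place.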
Common scenarios & limitations (high level)
A cloud endpoint (Azure file share) can be part of only one sync group; a sync group supports only one server endpoint per registered server.
You cannot have multiple server endpoints on the same registered server syncing to the same cloud endpoint.
Cross‑tenant topologies are not supported; Storage Sync Service, servers, managed identity and RBAC must be in the same Microsoft Entra tenant.
If consolidating multiple on‑prem shares into a single Azure file share or storage account, watch performance limits and IOPS/throughput on the storage account.
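As a sanity check when consolidating, sum the expected peak load of co-located shares against the storage account's limits. The 20,000 IOPS budget below is an illustrative assumption; confirm the current limits for your account type and region:

```python
# Sketch: check consolidated shares against a storage-account IOPS budget.
# The 20,000 IOPS figure is an illustrative assumption for a standard
# account; look up the actual limit for your account type and region.

ACCOUNT_IOPS_BUDGET = 20_000

def account_overloaded(share_peak_iops):
    """share_peak_iops: {share_name: expected peak IOPS}."""
    total = sum(share_peak_iops.values())
    return total > ACCOUNT_IOPS_BUDGET, total

print(account_overloaded({"finance": 9_000, "hr": 4_000, "eng": 8_500}))  # (True, 21500)
```

If the total exceeds the budget, move the busiest share to its own storage account, per the one-share-per-account best practice above.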
Troubleshooting
Common failure: Robocopy "Volume full" on the Windows Server—wait for cloud tiering to free space and rerun.
Monitor Azure File Sync event logs and use the troubleshoot link provided.
Deprecated option
The older "offline data transfer" Data Box integration (pre-agent v13) is deprecated. The article's described flow replaces it.
Next steps / further reading (links preserved in original)
Migration overview
Planning for Azure File Sync deployments
Create an Azure file share
Troubleshoot Azure File Sync
Article last updated: 07/30/2025