On-premises NAS migration to Azure File Sync
Here's a concise summary of the migration article (NAS to Windows Server to Azure File Sync), organized for quick reading and action.
What this guide covers
Scenario: Migrate SMB file shares from a NAS appliance to a Windows Server (with Direct Attached Storage), then use Azure File Sync to keep data cached on-premises and stored in Azure file shares.
Important restriction: Azure File Sync cannot sync directly with NAS locations; files must be on DAS on a Windows Server.
Migration goal
Move SMB shares from NAS to Windows Server and enable a hybrid deployment with Azure File Sync while minimizing downtime and preserving production data integrity.
High-level phases (quick reference)
Phase 1 - Identify Azure file share mapping
A single Windows Server (or cluster) can sync up to 30 Azure file shares.
Options if you have many shares:
Share grouping: combine multiple on-prem shares as subfolders under one root, then sync that root to a single Azure file share.
Volume sync: sync a volume root to a single Azure file share.
Best practice: keep the number of items (files + folders) per Azure file share below 20-30 million; Azure File Sync has been tested with up to 100 million items per share.
Create a mapping table (namespace mapping) that maps on-prem folders to Azure file shares.
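For illustration only, a namespace mapping might look like this (server, folder, and share names are hypothetical):
\\nas01\HR -> D:\shares\HR -> Azure file share: corp-hr
\\nas01\Finance -> D:\shares\Finance -> Azure file share: corp-finance
\\nas01\Proj-A + \\nas01\Proj-B -> D:\shares\Projects (grouped as subfolders) -> Azure file share: corp-projects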
Phase 2 - Provision a Windows Server on-premises
Use Windows Server 2022 or 2019 (cluster supported). Add Direct Attached Storage (DAS).
You can provision less local storage if you use Azure File Sync cloud tiering; in that case, copy in batches or use RoboCopy's /LFSM switch where appropriate.
Size CPU/RAM according to number of items to sync (initial sync can be resource intensive).
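To get a rough item count for sizing, a minimal PowerShell sketch (the UNC path is a placeholder; counting a large tree can take a long time):
# Count files and folders under one NAS share to estimate how many items will sync
(Get-ChildItem -Path '\\nas01\HR' -Recurse -Force -ErrorAction SilentlyContinue | Measure-Object).Count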
Phase 3 - Deploy the Azure File Sync cloud resource (Storage Sync Service)
Create a Storage Sync Service in the subscription and region you plan to use; the registered server, sync groups, and cloud endpoints created in later phases all attach to it.
Phase 4 - Deploy Azure storage accounts & file shares
Provision storage accounts and Azure file shares according to your mapping.
Consider performance: a storage account is a performance scale target; for highly active shares, the best practice is one file share per storage account.
Default file share size is 5 TiB; create large (100 TiB) shares only with the redundancy options that support them. Keep the storage account region consistent with the Storage Sync Service region.
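As a hedged sketch of provisioning one mapping entry with Azure PowerShell (resource group, account and share names, region, and SKU are placeholders to adjust):
# Create a storage account and an Azure file share for one entry in the mapping table
New-AzStorageAccount -ResourceGroupName "rg-filesync" -Name "stcorphr001" -Location "eastus" -SkuName Standard_LRS -Kind StorageV2
New-AzRmStorageShare -ResourceGroupName "rg-filesync" -StorageAccountName "stcorphr001" -Name "corp-hr" -QuotaGiB 5120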
Phase 5 - Install Azure File Sync agent on Windows Server
Turn off Internet Explorer Enhanced Security Configuration during install.
Install the PowerShell Az modules:
Install-Module -Name Az -AllowClobber
Install-Module -Name Az.StorageSync
Download the latest agent from: https://aka.ms/AFS/agent
Register the server with your Storage Sync Service. Confirm registration under Storage Sync Service > Registered servers in the portal.
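Registration can also be scripted once the agent is installed; a minimal sketch assuming the Az and Az.StorageSync modules are present (resource names are placeholders):
# Sign in (for Azure China, add -Environment AzureChinaCloud) and register this server
Connect-AzAccount
$storageSyncService = Get-AzStorageSyncService -ResourceGroupName "rg-filesync" -Name "sss-corp"
Register-AzStorageSyncServer -ParentObject $storageSyncService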
Phase 6 - Configure sync groups and server endpoints
In the Azure portal: create a sync group for each Azure file share (cloud endpoint).
Add a server endpoint pointing to the local folder you provisioned.
Enable cloud tiering on endpoints if local storage is smaller than cloud content. For migration, set tiering to 99% volume free space, then adjust after migration to a production value (e.g., 20%).
Verify sync by creating a test file on the server and confirming it appears in the Azure file share.
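The same configuration can be scripted with the Az.StorageSync cmdlets; a minimal sketch with placeholder names and the 99% migration tiering policy (verify parameter sets against your module version):
# Create the sync group, attach the Azure file share as the cloud endpoint,
# then add the local server folder as a server endpoint with cloud tiering enabled
New-AzStorageSyncGroup -ResourceGroupName "rg-filesync" -StorageSyncServiceName "sss-corp" -Name "sg-corp-hr"
New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-filesync" -StorageSyncServiceName "sss-corp" -SyncGroupName "sg-corp-hr" -Name "corp-hr" -StorageAccountResourceId "<storage account resource ID>" -AzureFileShareName "corp-hr"
# Assumes a single registered server; pick the right one if several are registered
$server = Get-AzStorageSyncServer -ResourceGroupName "rg-filesync" -StorageSyncServiceName "sss-corp"
New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-filesync" -StorageSyncServiceName "sss-corp" -SyncGroupName "sg-corp-hr" -Name "srv-hr" -ServerResourceId $server.ResourceId -ServerLocalPath "D:\shares\HR" -CloudTiering -VolumeFreeSpacePercent 99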
Phase 7 - Use RoboCopy to migrate data from NAS to Windows Server
Recommended approach: RoboCopy from the NAS to the Windows Server (into the local path already synced by Azure File Sync).
Example RoboCopy command:
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<LogFilePathAndName>
Key switches and considerations:
/MT:n - multithreaded copy (balance thread count against available CPU and network capacity).
/MIR - mirror the source, including deletions (be careful: matched runs must point to the same source and target paths, or data can be removed).
/LFSM - low free space mode for tiered targets (not supported for remote SMB destinations; incompatible with some other switches).
Use /L for dry runs (variations are sketched after this list).
Use cloud tiering to free local space during copying; avoid running RoboCopy jobs in parallel if they exhaust disk.
Windows Server 2022 is recommended; on Windows Server 2019, make sure the required updates (e.g., KB5005103) are installed.
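Two hedged variations of the example command (same placeholders): a dry run with /L, and a low-free-space run that drops switches commonly reported as incompatible with /LFSM (check your robocopy version's documentation):
# Dry run: lists what would be copied or deleted without touching any files
robocopy <SourcePath> <Dest.Path> /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<LogFilePathAndName> /L
# Low-free-space variant for a tiered local volume (e.g., /MT and /B omitted)
robocopy <SourcePath> <Dest.Path> /R:2 /W:1 /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<LogFilePathAndName> /LFSM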
Phase 8 - User cut-over
Perform repeated RoboCopy runs until deltas are small enough for an acceptable downtime window.
When ready, prevent users from changing NAS shares (e.g., point DFS-N to a non-existent target or change ACLs).
Run a final RoboCopy to capture remaining changes.
Create SMB shares on the Windows Server and set share-level permissions to match the NAS (see the sketch after this list).
If local users/SIDs differ, re-create or map SIDs as needed after migration.
After migration completion, revert cloud tiering free-space policy from 99% to an appropriate production value (e.g., 20%) across all sync groups.
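A minimal sketch of these wrap-up steps in PowerShell (share names, groups, paths, and accounts are placeholders; match permissions to your NAS, and verify the Az.StorageSync parameter names against your module version):
# Recreate the SMB share on the Windows Server with share-level permissions
New-SmbShare -Name "HR" -Path "D:\shares\HR" -FullAccess "CORP\HR-Admins" -ChangeAccess "CORP\HR-Users"
# Revert the migration tiering policy (99% volume free space) to a production value
Set-AzStorageSyncServerEndpoint -ResourceGroupName "rg-filesync" -StorageSyncServiceName "sss-corp" -SyncGroupName "sg-corp-hr" -Name "srv-hr" -CloudTiering -VolumeFreeSpacePercent 20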
Key limitations and scenarios to watch
Azure File Sync cannot sync directly from NAS; files must be on DAS on a Windows Server.
One cloud endpoint (Azure file share) can be in only one sync group; a sync group supports one server endpoint per registered server.
A server can sync with up to 30 Azure file shares.
Cross-tenant topologies (resources in different Microsoft Entra tenants) are not supported; Storage Sync Service, server resource, managed identity, and RBAC must be in the same tenant.
Main scaling factor: number of items (files + folders) per share.
Common troubleshooting note
The most common issue is RoboCopy failing with "Volume full" on the Windows Server. Let Azure File Sync cloud tiering free up space (it runs hourly), then re-run RoboCopy. Nothing breaks; you can resume safely.
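Before re-running RoboCopy, you can check that tiering has freed space; a minimal sketch (drive letter and path are placeholders, and the proactive tiering cmdlet below is an assumption to verify against your agent version's documentation):
# Check free space on the volume hosting the server endpoint
Get-Volume -DriveLetter D | Select-Object DriveLetter, SizeRemaining, Size
# Optionally ask the agent to tier files now instead of waiting for the hourly pass
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Invoke-StorageSyncCloudTiering -Path "D:\shares\HR"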
Useful links (from the article)
Migration guides index: https://docs.azure.cn/en-us/storage/files/storage-files-migration-overview#migration-guides
Azure File Sync planning/overview: https://docs.azure.cn/en-us/storage/file-sync/file-sync-planning
Azure File Sync deployment guide: https://docs.azure.cn/en-us/storage/file-sync/file-sync-deployment-guide
Azure File Sync troubleshooting: https://learn.microsoft.com/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/storage/file-sync/toc.json
Azure File Sync agent download: https://aka.ms/AFS/agent
Last updated in the original doc: 02/23/2024
If you want, I can:
Produce a concise checklist for an operations runbook (pre-checks, commands, rollback steps).
Extract the recommended RoboCopy command and make variations (dry-run, low-free-space, retry-tuned) for your environment.