Linux migration to Azure File Sync
Summary: Migrate from Linux (Samba) to a hybrid cloud deployment with Azure File Sync
What this guide is for
Scenario: Migrate SMB shares from a Linux Samba server to a Windows Server (2012 R2+), then use Azure File Sync to create a hybrid deployment that caches files on-premises and stores them in Azure file shares.
Important limitation: Azure File Sync runs only on Windows Server with direct-attached storage (DAS). It does not sync to/from Linux clients, remote SMB shares, or NFS shares.
Goals
Move data from Linux/Samba to Windows Server, then sync to Azure file shares via Azure File Sync.
Preserve data integrity and minimize downtime (the final cut-over should fit within, or only slightly exceed, your maintenance window).
High-level approach
Use Robocopy from Windows Server to copy data from Samba SMB shares into the Windows Server folders provisioned for Azure File Sync.
Install and configure Azure File Sync agent on Windows Server and create sync groups that link local server endpoints to cloud endpoints (Azure file shares).
Optionally use cloud tiering to keep local storage smaller than the cloud namespace while caching frequently accessed data.
Key planning considerations
A single server can sync up to 30 Azure file shares. If you have more, consider:
Share grouping: combine multiple on-premises shares as top-level subfolders under one root folder that syncs to a single Azure file share (see the sketch after this list).
Volume sync: sync a volume root to a single Azure file share (but keep item counts per share reasonable).
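As a minimal illustration of share grouping, each former Samba share becomes a subfolder of one sync root on the Windows Server; the folder names below are hypothetical:

```powershell
# Sketch: share grouping. Each former Samba share becomes a subfolder of one
# sync root; D:\syncroot is then registered as a single server endpoint.
# All paths are hypothetical placeholders.
New-Item -ItemType Directory -Path 'D:\syncroot\hr', 'D:\syncroot\finance', 'D:\syncroot\legal'
```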
Scale limits and best practices:
Aim to keep items per Azure file share well under the tested 100 million limit; prefer fewer than 20 to 30 million items per share for performance and operational speed (initial scans, restores, disaster recovery).
A storage account is a performance boundary: its IOPS and throughput are shared by every share in it. Prefer one file share per storage account for very active shares; low-activity shares can be grouped into one account.
There is a limit of 250 storage accounts per subscription per region.
Create a mapping table
Map on-prem folders to Azure file shares and storage accounts before provisioning. Use a namespace-mapping template if helpful.
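For illustration, a minimal mapping can be captured as data that later drives the Robocopy runs; every share, path, and account name below is a hypothetical placeholder:

```powershell
# Sketch: namespace mapping from Samba shares to Azure file shares and storage
# accounts. All names are hypothetical; build this table before provisioning.
$mapping = @(
    @{ Source = '\\linux-smb\finance'; Dest = 'D:\shares\finance'; AzureFileShare = 'finance'; StorageAccount = 'stgsync01' }
    @{ Source = '\\linux-smb\hr';      Dest = 'D:\shares\hr';      AzureFileShare = 'hr';      StorageAccount = 'stgsync01' }
    @{ Source = '\\linux-smb\media';   Dest = 'D:\shares\media';   AzureFileShare = 'media';   StorageAccount = 'stgsync02' }  # high activity: dedicated account
)
$mapping | ForEach-Object { "{0} -> {1} ({2}/{3})" -f $_.Source, $_.Dest, $_.StorageAccount, $_.AzureFileShare }
```

Keeping this table in one place makes it easy to verify that every on-premises share has exactly one cloud destination before you provision anything.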
Phased migration steps (overview)
Phase 2: Provision a Windows Server instance on-premises
Deploy Windows Server 2022 (recommended) or at least 2012 R2.
Use direct-attached storage (DAS). You may provision less local storage if you use cloud tiering; in that case migrate in batches or use Robocopy /LFSM.
Size CPU/RAM based on number of items to sync.
Phase 6: Configure Azure File Sync
For each Azure file share, create a sync group and add the server endpoint (local folder) to it.
Enable cloud tiering if local storage is smaller than the cloud data. For migration, set the volume free space policy to 99% so files tier aggressively and disk space is freed as you copy (see the sketch after this list).
Verify sync by creating test files.
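As a sketch of what Phase 6 looks like with the Az.StorageSync PowerShell module (the portal works equally well); resource names, paths, and the subscription ID are hypothetical placeholders:

```powershell
# Sketch: one sync group per Azure file share, using the Az.StorageSync module.
# Resource names, paths, and the subscription ID are hypothetical placeholders.
Connect-AzAccount
$rg      = 'rg-filesync'        # resource group containing the Storage Sync Service
$service = 'mysyncservice'      # Storage Sync Service name
$group   = 'finance-sync'

New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $service -Name $group

# Cloud endpoint: the Azure file share backing this sync group.
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $service `
    -SyncGroupName $group -Name 'finance-cloud' `
    -StorageAccountResourceId '/subscriptions/<sub-id>/resourceGroups/rg-filesync/providers/Microsoft.Storage/storageAccounts/stgsync01' `
    -AzureFileShareName 'finance'

# Server endpoint: the local folder on the registered Windows Server.
# VolumeFreeSpacePercent 99 forces aggressive tiering during the migration.
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $service |
    Where-Object { $_.FriendlyName -eq 'WIN-FS01' }
New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $service `
    -SyncGroupName $group -Name 'finance-server' `
    -ServerResourceId $server.ResourceId -ServerLocalPath 'D:\shares\finance' `
    -CloudTiering -VolumeFreeSpacePercent 99
```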
Phase 7: Copy data with Robocopy
Use Robocopy from Windows Server to copy from Samba SMB shares into the Azure File Sync-enabled folders.
Recommended sample command for mirror runs (replace the <...> placeholders with your source path, destination path, and log file): robocopy /MT:20 /R:2 /W:1 /B /MIR /IT /COPY:DATSO /DCOPY:DAT /NP /NFL /NDL /XD "System Volume Information" /UNILOG:<LogFile.Path> <Source.Path> <Dest.Path> (an annotated, copy-ready version follows the notes below).
Important Robocopy notes:
Use /LFSM (low free space mode) on targets with tiered storage: it pauses copying when free space on the target falls below a floor and resumes as cloud tiering frees space. It is not supported when the destination is a remote SMB share and is incompatible with some other switches.
/MIR mirrors the source to the target, including deletions; make sure the source and target folder levels match, or a mismatched mirror can delete data.
/COPY and /DCOPY control copy fidelity (data, attributes, timestamps, security/ACLs, owner). Auditing information cannot be stored in Azure file shares.
Adjust /MT (threads), /R (retries), /W (wait) based on network, CPU, and expected file usage.
Windows Server 2022 is recommended; if you use Windows Server 2019, make sure the relevant updates (for example, KB5005103) are applied.
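For convenience, here is the same command as a copy-ready, annotated sketch; the source, destination, and log paths are hypothetical placeholders:

```powershell
# Sketch: mirror copy from the Samba share into the Azure File Sync folder.
# $source, $dest, and $log are hypothetical placeholders for your environment.
$source = '\\linux-smb\finance'           # Samba share being migrated
$dest   = 'D:\shares\finance'             # server endpoint folder on Windows Server
$log    = 'C:\logs\robocopy-finance.log'

$options = @(
    '/MT:20'              # 20 copy threads; tune to network, CPU, and file mix
    '/R:2', '/W:1'        # 2 retries with a 1-second wait for in-use files
    '/B'                  # backup mode: copy regardless of NTFS permissions
    '/MIR'                # mirror source to target, including deletions
    '/IT'                 # include "tweaked" files (attribute-only changes)
    '/COPY:DATSO'         # data, attributes, timestamps, security (ACLs), owner
    '/DCOPY:DAT'          # directory data, attributes, timestamps
    '/NP', '/NFL', '/NDL' # keep console output quiet; rely on the log
    '/XD', 'System Volume Information'    # skip the system folder
    "/UNILOG:$log"        # Unicode log file
)
robocopy $source $dest @options
```

Rerun the same command for each incremental pass; /MIR copies only what changed since the previous run.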
Phase 8: User cut-over
Run Robocopy repeatedly; each pass copies only the changes since the last run. Continue until the incremental change set is small enough to fit your acceptable downtime window.
To cut over: block changes on the Linux share (for example, change ACLs or take it offline), run a final Robocopy pass, create SMB shares on the Windows Server, and reapply share-level permissions.
If the Samba server used local (non-domain) users, recreate matching local users on the Windows Server and map their permissions appropriately; Active Directory accounts and their ACLs copy over intact.
After migration, reset the cloud tiering free space policy from the migration value of 99% back to a normal value (for example, 20%) on all server endpoints so caching isn't overly restrictive (see the sketch below).
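A minimal cut-over sketch, assuming hypothetical share, account, and sync group names:

```powershell
# Sketch: publish the migrated folder as an SMB share, then relax the tiering
# policy. Share, account, and resource names are hypothetical placeholders.
New-SmbShare -Name 'finance' -Path 'D:\shares\finance' -FullAccess 'CONTOSO\finance-admins'
# Reapply share-level permissions as needed, for example:
Grant-SmbShareAccess -Name 'finance' -AccountName 'CONTOSO\finance-users' -AccessRight Change -Force

# Reset the free space policy from the migration value (99%) to a normal value (20%).
$ep = Get-AzStorageSyncServerEndpoint -ResourceGroupName 'rg-filesync' `
    -StorageSyncServiceName 'mysyncservice' -SyncGroupName 'finance-sync'
Set-AzStorageSyncServerEndpoint -InputObject $ep -CloudTiering -VolumeFreeSpacePercent 20
```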
Common file-sync scenarios & constraints (high level)
The same Azure file share can't be configured as a cloud endpoint in more than one sync group.
A sync group supports only one server endpoint per registered server.
Consolidating multiple disks or multiple servers into one Azure file share has constraints; suggested workarounds include cloud tiering, targeting volumes sequentially, or deploying additional Azure File Sync servers.
Cross-tenant (different Microsoft Entra tenants) managed identity topologies are not supported: Storage Sync Service, server, managed identity, and RBAC must be in the same tenant.
Troubleshooting a common issue
The most common issue during migration is a "Volume full" error on the Windows Server when Robocopy fills the disk faster than cloud tiering can free space. Cloud tiering runs about once an hour; wait for it to free space, then rerun the Robocopy job (the mirror/purge options make reruns safe). A simple wait-and-retry sketch follows.
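A minimal wait-and-retry sketch, assuming a hypothetical free-space threshold and paths:

```powershell
# Sketch: wait for cloud tiering (runs roughly hourly) to free space, then
# rerun the Robocopy job. Threshold, drive letter, and paths are hypothetical.
$minFreeGB = 50
while (((Get-PSDrive -Name D).Free / 1GB) -lt $minFreeGB) {
    Start-Sleep -Seconds 900    # check again every 15 minutes
}
robocopy '\\linux-smb\finance' 'D:\shares\finance' /MIR /B /COPY:DATSO /DCOPY:DAT '/UNILOG:C:\logs\retry.log'
```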
Use the Azure File Sync troubleshooting docs for additional diagnostics.
References / Next steps
Azure File Sync planning and deployment guides and troubleshooting docs (links in original article).
Consider sizing guidance and further reading on Azure File Sync scale targets, cloud tiering, and Robocopy options.
Last updated: 05/31/2024