Migrate to NFS Azure File Shares from Linux
Summary: Migrate to NFS Azure file shares
Overview
This article explains how to migrate data from Linux file servers to NFS Azure file shares (available only as Premium file shares on SSD).
It compares single-threaded rsync with multithreaded fpsync (which uses rsync/cpio/tar under the hood) and gives guidance for commonly used migration phases (baseline, incremental, final pass).
Key note: Azure Files doesn't support NFS access control lists (ACLs).
Applies to
NFS access is supported only on Premium file shares (FileStorage) with LRS/ZRS. Standard GPv2 file shares do not support NFS.
Prerequisites
Mount at least one NFS Azure file share on a Linux VM. See: https://docs.azure.cn/en-us/storage/files/create-classic-file-share
Recommended: mount with nconnect (e.g., nconnect=8) to use multiple TCP connections for better performance. See: https://docs.azure.cn/en-us/storage/files/nfs-performance#nfs-nconnect
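A minimal mount sketch, assuming a Premium storage account named mystorageacct, a share named myshare, and a mount point /mnt/myshare (all placeholder names); the storage endpoint suffix depends on your cloud, so adjust it if you are not on global Azure:

sudo mkdir -p /mnt/myshare
# NFS 4.1 mount using eight TCP connections (nconnect=8); replace the placeholders with your account and share names.
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=8 \
    mystorageacct.file.core.windows.net:/mystorageacct/myshare /mnt/myshare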
Tool selection and behavior
rsync: single-threaded, flexible, widely used.
fpsync: multithreaded wrapper that partitions the source (via fpart) and runs parallel synchronization jobs; can use rsync (default), cpio, or tar as the copy tool.
Important: fpsync synchronizes the contents of the source directory (it enforces a trailing '/'); it does not create a parent folder named after the source in the destination. A short example follows below.
For large distributed filesystems, reducing per-file network round-trips and parallelizing transfers improves throughput.
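To make the trailing-slash behavior concrete, here is a small sketch (all paths are placeholders, not taken from the article):

# rsync: the trailing slash decides whether a 'src' folder is created at the destination.
rsync -av /data/src  /mnt/nfs/dst/    # copies into /mnt/nfs/dst/src/...
rsync -av /data/src/ /mnt/nfs/dst/    # copies into /mnt/nfs/dst/...

# fpsync: always behaves as if the trailing slash were present, so create
# the destination folder yourself if you want one named after the source.
mkdir -p /mnt/nfs/dst/src
fpsync -n 8 /data/src/ /mnt/nfs/dst/src/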
Install fpart (required for fpsync)
Ubuntu:
sudo apt-get install fpart

RHEL 7/8/9 (example for enabling EPEL and installing fpart on RHEL 7):
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install fpart -y
(Use the matching EPEL release URL for RHEL 8/9, as noted in the original guide.)
If no package is available, install fpart from source: https://www.fpart.org/#installing-from-source
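After installing, a quick check that both tools are on the PATH (fpsync is distributed with the fpart package on most distributions):

command -v fpart fpsync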
Migration workflow (three phases)
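The phases named in the overview map onto a typical run roughly as follows (a general pattern rather than article-specific commands):
1. Baseline: copy the bulk of the data while the source stays online.
2. Incremental: re-run the sync one or more times to pick up files that changed during the baseline copy.
3. Final pass: stop writes to the source, run one last sync, and cut over to the Azure file share.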
Typical fpsync invocation
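A minimal sketch, assuming the source data lives under /data/projects and the NFS share is mounted at /mnt/nfs (both placeholders); option names can differ between fpsync versions, so confirm them with fpsync(1) on your system:

# Run 64 concurrent rsync workers, each handling at most 2,000 files per job.
# Note the trailing slashes: fpsync copies the contents of the source directory.
fpsync -n 64 -f 2000 -v /data/projects/ /mnt/nfs/projects/

Re-running the same command later serves as the incremental and final passes, since rsync only transfers files that changed.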
Performance findings (high-level)
Multithreaded fpsync substantially improved throughput and IOPS versus single-threaded rsync in the authors' tests.
Distributing files across directories (so work can be parallelized) improves performance.
Datasets made up of larger files achieve higher throughput than datasets of many small files.
Selected numeric summary (from the article tests)
Test case          Files       Directories  File size   Total size  rsync          fpsync         Improvement
1.1 (baseline)     1,000,000   1            0–32 KiB    18 GiB      0.33 MiB/s     1.20 MiB/s     267%
1.2 (incremental)  1,000,000   1            0–32 KiB    18 GiB      3.25 MiB/s     36.41 MiB/s    1,020%
2 (baseline)       191,345     3,906        0–32 KiB    3 GiB       0.27 MiB/s     6.04 MiB/s     2,164%
3 (baseline)       5,000       1            10 MiB      50 GiB      105.04 MiB/s   308.90 MiB/s   194%
(Improvement is the fpsync gain relative to single-threaded rsync.)
Notes and caveats
Tests used Standard_D8s_v3 VMs (8 vCPUs, 32 GiB RAM) and NFS Azure file shares with more than 1 TiB provisioned.
Best observed fpsync settings in these tests: 64 threads when using rsync and 16 threads when using cpio, with nconnect=8 on the mount. Actual results depend on your dataset and environment; a sketch of these invocations follows after these notes.
Some experiments were intentionally small; Azure Files throughput can be higher in other configurations.
Third-party tools (fpsync, fpart, rsync, cpio) are not Microsoft-supported; review licenses and support yourself.
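As a rough illustration of those best-observed settings (paths are placeholders, and the -n/-m options should be checked against the fpsync(1) man page for your version):

# 64 concurrent rsync workers (the default copy tool):
fpsync -n 64 /data/src/ /mnt/nfs/dst/
# 16 concurrent workers using cpio as the copy tool:
fpsync -m cpio -n 16 /data/src/ /mnt/nfs/dst/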
Further reading
Improve NFS Azure file share performance: https://docs.azure.cn/en-us/storage/files/nfs-performance
fpsync/fpart documentation: http://www.fpart.org/fpsync/
Last updated in the source article: 11/11/2025