Hyper-V Block Size for NTFS: What’s Recommended

Usually, when we get this question, the IT admin is concerned about Hyper-V performance.

Hyper-V and NTFS Block Sizes Explained

Understanding the best NTFS cluster size for Hyper-V requires an understanding of how hard drives, file systems, and Hyper-V work together.

That said, the maximum performance difference we discovered between cluster sizes was a mere ~10%. IT admins are far more likely to kill performance with one of the common mistakes listed below.

But first, let’s talk about block sizes. For the older VHD format, Hyper-V uses 512-byte disk I/O operations internally, which matched the 512-byte sectors that most hard drives used until about a decade ago. That’s why, for the newer VHDX virtual disk format, Microsoft aligned the internal block size to 4,096 bytes, matching the 4K sectors of modern hard drives.
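To see why that alignment matters, here is a minimal back-of-envelope sketch (not Hyper-V code, just the arithmetic): on a drive with 4K physical sectors, a 4 KB I/O that starts on a 512-byte boundary can straddle two physical sectors, forcing the drive to touch both, while a 4K-aligned I/O maps 1:1.

```python
def physical_sectors_touched(offset, length, physical_sector=4096):
    """Count how many physical sectors a logical I/O touches on a 4K-sector drive."""
    first = offset // physical_sector
    last = (offset + length - 1) // physical_sector
    return last - first + 1

# A 4K-aligned 4 KB write (VHDX-style) touches exactly one physical sector.
assert physical_sectors_touched(0, 4096) == 1

# A 512-byte-aligned 4 KB write that straddles a 4K boundary (VHD-style)
# touches two physical sectors, each requiring a read-modify-write.
assert physical_sectors_touched(3584, 4096) == 2
```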

NTFS formats with a default cluster size of 4,096 bytes, which supports partitions of up to 16 TB. If you stick with that default, you should be fine. If you choose a larger cluster size, such as 64 KB, the issue is that the system needs to read an entire cluster before it can write to it: the 64 KB cluster is cached somewhere, usually inside the hard disk, the 4 KB are changed, and the whole cluster is written back. Even so, this process will cost you only about a 10% performance decrease on average.
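The read-modify-write overhead described above can be sketched with a simplified model (it ignores caching effects, so treat it as an illustration, not a benchmark):

```python
def rmw_bytes_transferred(write_bytes, cluster_bytes):
    """Bytes physically moved for a write under a simple read-modify-write model:
    a sub-cluster write must read the whole cluster and write it all back."""
    if write_bytes >= cluster_bytes:
        return write_bytes  # full-cluster writes need no preliminary read
    return 2 * cluster_bytes  # read 1 cluster + write 1 cluster back

# Updating 4 KB inside a 64 KB cluster moves 128 KB in total...
assert rmw_bytes_transferred(4096, 64 * 1024) == 128 * 1024

# ...while the same 4 KB update on a 4 KB cluster moves just 4 KB.
assert rmw_bytes_transferred(4096, 4096) == 4096
```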


Typical Hyper-V Performance Killers

Let’s talk about the “bad stuff” now, because that’s where performance is killed on a large scale:

  • Worst thing ever: snapshots, a.k.a. checkpoints in newer Hyper-V versions. Avoid them if you want the best performance. If you need a copy of a VM, take a backup or export the VM instead; checkpoints are intended for demo and development systems only. On mechanical drives, the disk’s heads need to jump back and forth because once-contiguous blocks are no longer contiguous. This defeats Windows disk caching, and disk fragmentation will likely skyrocket as the virtual disks grow in size.
  • Why does disk fragmentation become a problem with snapshots? Because of the underlying dynamically growing disks. Dynamically growing disks will bog down even the most capable server with just a dozen VMs, unless SSDs are used. Again, the issue is that the drive’s seek time adds up even for “virtually neighboring” blocks, which may now sit 5 ms apart from each other. A mere thousand of these read operations and you’re looking at a dramatic 5-second delay. Unfortunately, RAID arrays offer no relief here, since they usually operate at a larger block size than NTFS. SSDs do fix the seek-time problem, but there is still some CPU overhead involved in managing snapshots, and SSDs will likely wear out faster when checkpoints or dynamic disks are in use.
  • Another important factor: free disk space. You’ll need at least 15% free space everywhere, including inside VMs, and a minimum of 10 GB. This is a characteristic of NTFS: when it doesn’t have enough space to work with, it ends up fragmenting the drive really badly, and file access times suffer as a consequence.
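The seek-time arithmetic in the fragmentation bullet above is simple enough to spell out. This sketch assumes a uniform 5 ms seek per scattered read, which is a rough average for mechanical drives, not a measured figure:

```python
def fragmentation_delay_seconds(random_reads, seek_ms=5.0):
    """Total stall time when every read of a fragmented file requires a full head seek."""
    return random_reads * seek_ms / 1000.0

# 1,000 scattered reads at 5 ms each add up to a 5-second delay.
assert fragmentation_delay_seconds(1000) == 5.0
```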
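The free-space rule of thumb above (at least 15% free and at least 10 GB) is easy to check programmatically. Here is a minimal sketch using Python’s standard `shutil.disk_usage`; the function names and thresholds are ours, chosen to mirror the rule stated above:

```python
import shutil

GB = 1024 ** 3

def space_ok(total, free, min_fraction=0.15, min_bytes=10 * GB):
    """Apply the rule of thumb: >= 15% of the volume free AND >= 10 GB free."""
    return free / total >= min_fraction and free >= min_bytes

def volume_space_ok(path):
    """Check a real volume, e.g. volume_space_ok('C:\\\\')."""
    usage = shutil.disk_usage(path)
    return space_ok(usage.total, usage.free)

# 200 GB free on a 1 TB volume: 20% free and well over 10 GB -> fine.
assert space_ok(1000 * GB, 200 * GB) is True

# 100 GB free on a 1 TB volume: only 10% free -> fails the 15% rule.
assert space_ok(1000 * GB, 100 * GB) is False

# 9 GB free on a 50 GB volume: 18% free, but under the 10 GB floor.
assert space_ok(50 * GB, 9 * GB) is False
```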


Hyper-V Backups: Fast, Customizable, Reliable, Affordable Backups

For your Hyper-V backups, check out BackupChain. A 20-day trial is available here.
