Hyper-V Fixed Disks: Pros and Cons

This article outlines some considerations when using fixed disks for Hyper-V and when backing them up.

Performance of Fixed Disks at Run-time

Fixed-size VHDX files are great building blocks for VMs because they greatly reduce overhead in Hyper-V. Assuming you refrain from using Hyper-V checkpoints, access to a fixed disk is always linear. In addition, neighboring blocks of data are actually stored next to one another on the physical disk, which isn’t the case with dynamic disks. Since it’s very likely that a service inside the VM will access neighboring blocks together, each read and write is faster because Windows does not have to move the hard drive’s disk heads around. Seek time is a very large performance penalty, and it is the main reason why disk fragmentation has such a strong negative effect on server performance.
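The cost of head movement described above can be sketched with some back-of-the-envelope arithmetic. The figures below are hypothetical assumptions (a typical 7200 RPM drive), not measurements from the article:

```python
# Rough comparison of contiguous vs. fragmented reads on a spinning disk.
# Assumed figures (illustrative only):
SEEK_MS = 10.0        # average seek + rotational latency per head repositioning
SEQ_MB_PER_S = 150.0  # sustained sequential transfer rate

def read_time_seconds(total_mb: float, fragments: int) -> float:
    """Time to read total_mb of data split into `fragments` contiguous runs."""
    transfer_time = total_mb / SEQ_MB_PER_S
    seek_time = fragments * SEEK_MS / 1000.0
    return transfer_time + seek_time

contiguous = read_time_seconds(1024, 1)     # 1 GB in a single contiguous run
fragmented = read_time_seconds(1024, 5000)  # the same 1 GB in 5000 fragments

print(f"contiguous: {contiguous:.1f} s, fragmented: {fragmented:.1f} s")
# → contiguous: 6.8 s, fragmented: 56.8 s
```

Even with these modest assumptions, seeks dominate the fragmented read, which is why contiguous fixed disks keep throughput close to the drive's sequential maximum.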

In fact, fixed disk access is almost as fast as direct (pass-through) disk access, involving only a small overhead.

Flexibility of Fixed Disks

Now that Windows Server 2012 has become the standard for Hyper-V, fixed-size VHDs can also be resized on the fly, without shutting down the virtual machine. In a way, the fixed-size VHD isn’t as “fixed” as it was in previous versions of Windows Server. Being able to move fixed VHDs around and to resize them while the VM is running are crucial advantages over pass-through disks.


Inflexibility of Fixed Disks

The obvious reason why most users favor dynamically expanding disks is thin provisioning of disk space: by allowing IT admins to “over-allocate” existing disk resources, they can squeeze more VMs onto the same server. Newer Windows Server editions take the idea of thin provisioning to the next level by also allowing dynamic memory configurations. While fixed disks may be grown live at a later point, the IT admin still has more work to do than with dynamic disks: monitor the VM and log into it to resize the partition once the underlying VHDX has been resized. Dynamic disks are set to a maximum size, created with a minimal size to start off, and then grow automatically from there. On a well-tuned system this may actually work well, particularly when the IT admin knows the internals of each VM hosted on a particular server; however, such in-depth knowledge is rare, at hosting providers, for example.
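The over-allocation at the heart of thin provisioning is easy to quantify. This is a minimal sketch with made-up VM names and sizes; the point is the gap between what is promised to the guests and what the host can actually deliver:

```python
# Hypothetical thin-provisioning scenario (all numbers invented for illustration).
host_capacity_gb = 2000

# Each dynamic disk has a maximum size (what the guest sees) and a
# current size (what is actually allocated on the host right now).
vms = {
    "web01":  {"max_gb": 500, "used_gb": 120},
    "db01":   {"max_gb": 800, "used_gb": 430},
    "mail01": {"max_gb": 600, "used_gb": 210},
    "file01": {"max_gb": 700, "used_gb": 300},
}

promised = sum(v["max_gb"] for v in vms.values())   # space promised to guests
used = sum(v["used_gb"] for v in vms.values())      # space actually consumed
overcommit_ratio = promised / host_capacity_gb

print(f"promised {promised} GB on a {host_capacity_gb} GB host "
      f"(overcommit {overcommit_ratio:.2f}x), currently used {used} GB")

# The "debt" only comes due if the disks fill up together:
if promised > host_capacity_gb:
    print(f"shortfall if all VMs reach their maximum: "
          f"{promised - host_capacity_gb} GB")
```

Everything works while actual usage stays under the host's capacity; the risk is the 600 GB shortfall that materializes only when several VMs grow at once.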


Maintaining System Stability by Not Using Thin Provisioning

Not too long ago, economics students who criticized the idea of ‘too big to fail’ and the way banks rely on one another were ridiculed for “not understanding” statistics and the “superior” risk management used by the financial system. As we all know now, history has proven otherwise. Thin provisioning is quite similar: it creates a potentially huge “debt” with nothing to back it.

If system stability is your highest priority, you should not rely on thin provisioning to a large degree. If multiple VMs request additional disk space and/or memory during peaks, there have to be enough resources to cover all scenarios. By allocating resources ahead of time you lose flexibility but gain stability, because each VM is guaranteed its resources.

Just as banks work well until there is a bank run or some other crisis affecting multiple banks at once, a Hyper-V host and all its VMs may run quite stably with thin-provisioned RAM and disk space; the underlying assumption, however, is that the IT admin has detailed knowledge of the internals of each VM and how it is being used.

Note that outside events are likely to trigger a domino effect on the Hyper-V host. Hyper-V backups are a good example, especially when multiple VMs are backed up simultaneously: high disk activity can become a bottleneck for other VMs, and the live backup signal may cause many services inside the VMs, such as SQL Server or Exchange Server, to prepare for backup. Then, all of a sudden, “quiet” VMs become very active, request more RAM and CPU, and use more disk space, all simultaneously and all on the same disk.

Some smaller organizations use separate disks or disk arrays for each VM, whether LUNs, internal disks, or USB drives, as a low-cost way to improve performance, portability, and stability.


Hyper-V Backup Performance of Fixed Disks

Hyper-V backup will not necessarily be faster with fixed disks. Imagine, for example, a 1 TB fixed disk of which only 1% is actually in use. Backing up the entire disk will take hours, unnecessarily, because of the huge virtual disk file. On the other hand, deleted data still present in the VHD will be backed up as well, which may be useful in disaster recovery scenarios; for backup performance, however, it’s a far-from-optimal situation, and a dynamic disk would do a better job. Yet once the VHD is actively used again and its data contents grow, the dynamic disk may become fragmented.

Because fixed-size VHDs are allocated just once, a single host-level defrag is sufficient to ensure top backup performance. Hyper-V backups read the entire virtual disk file, and since it is contiguous, all read-ahead caches and algorithms can be exploited to deliver continuous peak transfer rates. On modern single drives this can easily exceed 150 MB/s per drive; a striped RAID array can achieve a multiple of that, roughly N × 150 MB/s, where N is the number of disks striped across.
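The N × 150 MB/s rule of thumb translates directly into backup-time estimates. The sketch below assumes the ideal case, a fully sequential read of a contiguous fixed VHD at the rates quoted above; real arrays will fall somewhat short of this:

```python
# Idealized backup-time estimate for a contiguous fixed-size VHD
# read sequentially from a striped array (assumed figures, not benchmarks).
PER_DRIVE_MB_S = 150.0  # sustained sequential rate of one modern drive

def striped_throughput_mb_s(n_disks: int) -> float:
    """Ideal sequential throughput of an N-disk stripe: N * 150 MB/s."""
    return n_disks * PER_DRIVE_MB_S

def backup_hours(vhd_gb: float, n_disks: int) -> float:
    """Hours to read a VHD of vhd_gb gigabytes at full sequential speed."""
    total_mb = vhd_gb * 1024
    return total_mb / striped_throughput_mb_s(n_disks) / 3600

# Reading a 1 TB fixed VHD at full sequential speed:
print(f"1 drive: {backup_hours(1024, 1):.2f} h, "
      f"4-disk stripe: {backup_hours(1024, 4):.2f} h")
# → 1 drive: 1.94 h, 4-disk stripe: 0.49 h
```

Note that this scaling only holds for contiguous reads; as the next paragraph explains, fragmentation turns the workload into random access, where the stripe's advantage largely disappears.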

With dynamic disks, Hyper-V backup performance may suffer when disk fragmentation is present. Each time the hard disk heads have to be repositioned, a seek of several milliseconds takes place, and those delays add up quickly over thousands of fragments. What makes it worse is that read-ahead logic works far less efficiently, or not at all, under random access. In addition, the whole point of a striped RAID is to provide faster access to contiguous files; random access is not necessarily faster on a striped array than on a single drive, and may even be worse in some configurations.

Hyper-V Backup: BackupChain

Try BackupChain for 20 days for free; it’s an all-in-one backup solution that includes Hyper-V VM backup and a lot more.

BackupChain Features

BackupChain allows you to set up a range of backup scenarios, from simple Windows 10 backup with Hyper-V and USB support to large file server backup and remote FTP cloud storage backup.
BackupChain supports several virtualization platforms: it functions as Hyper-V backup software on Windows 8 and 10 and also offers fully featured live Hyper-V backup on Windows Server and Server Core installations. VDI backup and live VMware backup aren’t missing either, rounding out the product offering.
Furthermore, aiming to be the best Hyper-V backup, our backup software for Windows servers supports FTP cloud backup, Hyper-V granular restore, MDF backup, and granular backup of virtual machine images.

Our file versioning backup method works locally as well as over the internet, for example to DIY cloud storage or to our own cloud backup offerings.
DriveMaker, our freeware FTP client and WebDrive alternative, runs on Windows Server and PCs and allows you to access your cloud storage as a mounted drive.
Another module worth mentioning is the ability to perform EDB backups to create Microsoft Exchange Server backups.