Improving Hyper-V Speed and Achieving High Performance and Throughput

Below you will find detailed recommendations to improve overall Hyper-V performance as well as instructions on how to tweak BackupChain's settings to achieve maximum backup speed.

We have put together a number of system tweaks below that may result in considerable performance gains.

General Guidelines to Improve Hyper-V Speed and Achieve High System Performance

  • Don't use dynamically expanding VHDs or VHDXs. These are only meant for test systems and are not recommended for production systems by Microsoft.
  • Don't use Hyper-V snapshots. These are also only for test and development purposes and not recommended by Microsoft for production use.
  • Use large NTFS cluster sizes, such as 64K.
  • Do not use drive compression or encryption of any kind.
  • Use a separate drive for the Windows paging file. It's important to use a fixed-size paging file: set minimum and maximum to the same value, roughly 2.5 to 3 times the installed RAM.
  • Defragment all drives regularly, including from within each virtual machine's operating system.
  • Use fixed-size VHDs/VHDXs with plenty of free space for the VM operating system.
  • Keep at least 10 to 20% free space on every disk on the host. NTFS and VSS quickly become inefficient below that limit.
  • Keep at least 1 GB of RAM free on the host.
  • Increase the VSS storage size allocation limits for each drive to at least 10% of each drive's size. Command: vssadmin resize shadowstorage
  • Ensure the Windows paging file is not fragmented.
  • Make sure your system isn't clogged with orphaned VSS snapshots. (Command: vssadmin list shadows). See this helpful article: How to Delete All VSS Shadows and Orphaned Shadows.
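
The VSS commands mentioned above can be run from an elevated command prompt. The drive letter D: below is an example; adjust it for each drive on the host, and the 10% limit follows the recommendation above:

```shell
:: List all existing shadow copies; orphaned entries show up here
vssadmin list shadows

:: Show the current shadow storage allocation and limit per volume
vssadmin list shadowstorage

:: Raise the shadow storage limit for drive D: to 10% of the volume
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=10%
```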


General Hardware Recommendations to Improve Hyper-V Speed

  • Use high RPM drives
  • Use striped RAID for virtual hard drive storage
  • Use USB 3 or eSATA for external backup drives
  • Use 10 Gbit Ethernet if possible for network traffic
  • Isolate backup network traffic from other traffic.
  • Use separate disks for VMs with high I/O requirements
  • Increase the VM's RAM
  • Increase the host's RAM. Always keep at least 1 GB available on the host


Cluster Shared Volume Recommendations to Improve Hyper-V Speed

  • Try all the steps shown above first
  • If using a cluster shared volume, traffic isolation is very important.
  • Use separate NICs for SAN, backup, and cluster management traffic.
  • Use 10 Gbit Ethernet if available
  • Try to slowly increase the I/O speed limits in BackupChain's Options tab
  • Separate busy VMs into separate volumes
  • Add additional nodes to spread the load
  • Pick a time for cluster shared volume backup when the network traffic is low.
  • Disable NetBIOS over TCP/IP
  • Enable jumbo packets
  • Use high quality network switches
  • Keep LAN segments short and connect only a few nodes to each CSV, i.e. split large setups into separate CSVs.
  • Don't chain several switches on an Ethernet path, because each switch adds latency.
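
On Windows Server, the NetBIOS and jumbo packet tweaks above can be scripted. The PowerShell sketch below is an illustration only: "Ethernet 2" is an example adapter name (check Get-NetAdapter first), and the supported jumbo packet value depends on your NIC driver:

```powershell
# Enable jumbo packets on the backup NIC ("Ethernet 2" is an example
# adapter name; the supported value depends on the driver)
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Disable NetBIOS over TCP/IP on all IP-enabled interfaces (2 = disabled)
Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "IPEnabled=true" |
    Invoke-CimMethod -MethodName SetTcpipNetbios -Arguments @{TcpipNetbiosOptions=2}
```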


BackupChain Settings to Increase Hyper-V Backup Speed

On most systems, administrators generally want to keep the Hyper-V backup process in the background so that it has little, if any, impact on the overall system. Since most Hyper-V hosts are active 24/7, there is hardly ever a time when virtual machines can be shut down for maintenance.

However, there are time windows, usually at night, where a backup process could be given additional system resources and finish faster, at the cost of a minor system slowdown. BackupChain's default settings are such that a regular system will not be impacted by a backup. This does imply, however, that the backup may run slower.

For those users who wish to run at maximum speed, these are the recommended settings:

1. In the Deduplication tab, set the block size to 16 MB. Keep 512 MB to 1 GB of RAM available at all times for this option. To see the effect this has on your backup duration, you may need to either wait for up to five backup cycles until a new full delta is created, or simply start a new backup folder. Larger block sizes reduce overhead drastically but also lead to larger delta files.

2. In the Options tab, scroll down to the Resource Allocation Limits section. Lift the CPU core limit and set the process priority to 'normal'. If your Hyper-V host is using a cluster shared volume, you may want to set higher disk I/O limits. On other hosts with local drives, you can lift all speed limits. Note that on cluster shared volume systems you should adjust the disk limits carefully to avoid traffic collisions with Microsoft's cluster management traffic. Ideally the host should have separate NICs and network switches to isolate backup traffic from cluster management traffic, as recommended by Microsoft.

3. Ensure data compression is turned on in the Compression tab. Why is a compressed backup faster? Typical data content, which is compressible to a degree (approx. 50%), can be compressed quite efficiently using only a little CPU time. This in turn drastically cuts down the volume of data that actually needs to be written. Note that disk write access is generally slower than read throughput, so if less data needs to be written, the overall backup runs considerably faster. The key assumption is that the content is compressible; this is not the case with music, video, and encrypted files, for example. If your VMs use an encrypted image format, such as BitLocker Drive Encryption, it's better to switch off compression entirely in the Compression tab.
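
The effect described in step 3 is easy to reproduce. Below is a minimal sketch using Python's zlib (not BackupChain's actual codec): text-like data shrinks drastically before it is written, while random bytes, standing in for encrypted content, do not:

```python
import os
import zlib

# Text-like content (compressible): far fewer bytes reach the backup target.
compressible = b"The quick brown fox jumps over the lazy dog. " * 2000
packed = zlib.compress(compressible, 6)
print(f"text-like: {len(compressible)} -> {len(packed)} bytes "
      f"({len(packed) / len(compressible):.1%} of original)")

# Random bytes stand in for BitLocker-encrypted content: compression
# saves nothing and only burns CPU time, so it should be switched off.
incompressible = os.urandom(90_000)
packed2 = zlib.compress(incompressible, 6)
print(f"random:    {len(incompressible)} -> {len(packed2)} bytes")
```

Compressed random data actually comes out slightly larger than the input because of the format's header and block overhead, which is why compression should be disabled for encrypted VMs.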

4. On systems with serious RAM limits, close the BackupChain Monitor application using File->Exit whenever you don't need the application.

5. Why is deduplication the fastest option? Because BackupChain's deduplication engine is optimized to use several CPU cores in parallel. This greatly improves performance, since hard drive and network speed are the common bottlenecks in the system. Plain file backup without compression tends to be the slowest option because there is no gain from reducing the data volume (see #3 above). Regular ZIP compression does not support parallelism; hence, ZIP is relatively slow because it uses just one CPU core.
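
To illustrate the idea (a simplified sketch, not BackupChain's actual engine): a block-level deduplicator cuts the data into fixed-size blocks, hashes them on several cores in parallel, and writes each unique block only once. The block size, helper names, and sample data below are made up for the demo:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 64 * 1024  # the real setting would be e.g. 16 MB; smaller for the demo

def split_blocks(data: bytes) -> list[bytes]:
    """Cut the input into fixed-size blocks, as a block-level dedup engine would."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def dedup(data: bytes):
    """Hash all blocks in parallel, then keep only one copy of each unique block."""
    blocks = split_blocks(data)
    with ThreadPoolExecutor() as pool:  # hashing is spread across CPU cores
        digests = list(pool.map(lambda b: hashlib.sha256(b).hexdigest(), blocks))
    store = {}  # digest -> block, written to the backup target only once
    for digest, block in zip(digests, blocks):
        store.setdefault(digest, block)
    return digests, store

# A toy "virtual disk" where most blocks repeat between backup runs:
image = (b"A" * BLOCK_SIZE) * 8 + (b"B" * BLOCK_SIZE) * 2
digests, store = dedup(image)
print(f"{len(digests)} blocks scanned, {len(store)} unique blocks written")
# prints: 10 blocks scanned, 2 unique blocks written
```

The ordered digest list is all that is needed to reconstruct the original image from the block store, while only the unique blocks ever hit the disk.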

More Resources for Hyper-V and I.T. Professionals

Check out:
Our main blog
Our Hyper-V blog
Tech Support articles
Backup related articles


Contact BackupChain Tech Support (or call your Priority Support number) if you need assistance.