Setting up a Windows 11 Gaming VM

I recently built a new server with Unraid. The main purpose of this new server is to act as a storage device for all of my media. I currently have over 100 GB of videos for my YouTube channel taking up space on my main computer, and that is without keeping the source videos.

I really didn't want another computer sitting in my office. I already have a mini gaming PC that I put together, but it has been sitting under my desk gathering dust for the last 18 months since I got my Steam Deck.

So the plan was to Frankenstein parts from my old gaming PC, along with some more parts my Dad gave me from his old machine. The result is what would have been a reasonable gaming rig 10 years ago.

It was from reading through the Unraid forums that I realised I could set up a Windows virtual machine, pass through the graphics card, and have an on-demand gaming machine that I could access remotely.

But Why? #

As a busy family man my evenings are spent with my wife on the couch and quite frankly after 8 hours of sitting at my desk I don't fancy doing the same for a gaming session. The Steam Deck has been a godsend for this. I can turn on my Steam Deck and play a few games sitting on the couch while she watches TV or plays her own games on the Switch.

However, even though the Steam Deck is great for many games, there are quite a few that it struggles with. The Steam Deck can play AAA titles, but it isn't really designed for them. Even if the game plays, the fans go like crazy, it gets hot, and the battery will be empty in under 2 hours.

In some cases, I can get more than 2 hours of game time, but I don't play the game often enough to justify the 100 GB of space it is using up.

These are the games currently on my VM-only list:

For these games, I now have a Windows VM with near bare-metal performance that I can install them on. I have Sunshine installed on the VM and Moonlight installed on my Steam Deck, so I can stream them remotely. The performance is fantastic with no noticeable input lag, and because the Steam Deck isn't doing any heavy lifting, the battery lasts 5 - 6 hours.

My current criteria for installing on the VM versus on the Steam Deck are:

If a game meets at least one of these criteria then it gets installed on the VM.

As an added bonus, I can now play some Windows-only games that aren't supported on the Steam Deck, like WRC 10 and Riders Republic. I also bought Forza Horizon 4 Ultimate when it was on offer on the Microsoft Store, which I haven't played since getting my Steam Deck. There are also some games that just can't be played without a keyboard and mouse, which I can now stream to my laptop.

I also have a bunch of games on the Epic Store that I picked up in their free deals, which I can now install without having to put another game launcher on my Steam Deck.

How? #

This took a bit of trial and error to get the system working properly, so I thought I would document it for myself and anyone else who might benefit.

Before I give you all the details, here is the hardware I am working with in relation to the VM:

I originally had the 500 GB SSD in a cache pool with another 250 GB SSD. However, given I wanted to dedicate 500 GB to the VM for games, it made more sense to just pass through the SSD directly to the VM.

I also did this to try and overcome some performance issues which I will cover in a bit. As it turns out, the performance problem wasn't related to the drive, so I could have kept it as a vdisk.

For the 500 GB NVMe drive, I have a 300 GB vdisk assigned as additional game storage. The vdisk on the NVMe is significantly faster than the SSD, so I use this for any games that are slow loading (I am looking at you WRC 10).

In addition to the above storage, I also have one of the shares on my array mapped as a network drive in the VM. This is where I move games that I rarely play, or games that have no noticeable performance issues loading over a Samba share.

Most of these settings changes need to be made via the XML settings file. This means you can no longer use the UI to make changes without it wiping out the manual XML edits. I have moved to storing the XML file in git, so I can redo the manual changes after using the UI.

Graphics Card #

When you first set up the VM you need to use the virtual graphics card. This allows you to connect to the VM using VNC in the browser. Once Windows was installed and set up, I also installed Parsec and Moonlight so I could connect to the VM using those instead.

Once the graphics card is added, video output will go through that, and you will no longer be able to connect using VNC.
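
For reference, the virtual graphics part of the domain XML that gets swapped out looks something like this (the exact model and values Unraid generates may differ):

<graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
	<!-- the browser VNC console connects through this -->
	<listen type='address' address='0.0.0.0' />
</graphics>
<video>
	<!-- emulated graphics card used until the real GPU is passed through -->
	<model type='qxl' ram='65536' vram='16384' vgamem='16384' heads='1' primary='yes' />
</video>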

I found a SpaceInvaders video on YouTube that goes through how to add the graphics card.

The main takeaway is that you need to manually add multifunction='on' to the graphics card's address and make sure the graphics card's internal audio device is set to the same slot. The function of the second address also changes from 0x0 to 0x1.

<hostdev mode='subsystem' type='pci' managed='yes'>
	<driver name='vfio' />
	<source>
		<address domain='0x0000' bus='0x01' slot='0x00' function='0x0' />
	</source>
	<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'
		multifunction='on' />
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
	<driver name='vfio' />
	<source>
		<address domain='0x0000' bus='0x01' slot='0x00' function='0x1' />
	</source>
	<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1' />
</hostdev>

This is to prevent driver problems in Windows, as it more closely resembles what you would have with a physical card. If the two functions were left on different addresses, you would effectively be splitting the graphics and audio of your graphics card into separate devices, which you wouldn't be able to do in real life.

CPU #

Given I am using a 10-year-old i5-4690K, I am a bit limited in the number of cores I can assign to the VM.

I don't have many Docker containers running, so I chose to assign cores 1, 2, and 3 to the VM, leaving core 0 for Unraid and the containers.

One of my Docker containers is Jellyfin, but I won't be watching anything and playing games at the same time anyway.

This is what my CPU settings look like for my VM:

<vcpu placement='static'>3</vcpu>
<cputune>
	<vcpupin vcpu='0' cpuset='1' />
	<vcpupin vcpu='1' cpuset='2' />
	<vcpupin vcpu='2' cpuset='3' />
</cputune>
<features>
	<acpi />
	<apic />
	<hyperv mode='custom'>
		<relaxed state='on' />
		<vapic state='on' />
		<spinlocks state='on' retries='8191' />
		<vendor_id state='on' value='none' />
		<vpindex state='on' />
		<synic state='on' />
		<stimer state='on' />
	</hyperv>
</features>
<cpu mode='host-passthrough' check='none' migratable='off'>
	<topology sockets='1' dies='1' cores='3' threads='1' />
	<cache mode='passthrough' />
</cpu>
<clock offset='localtime'>
	<timer name='hypervclock' present='yes' />
	<timer name='hpet' present='no' />
</clock>

The last three settings in the hyperv block came from a forum post. I am not sure how much difference they were supposed to make, but they haven't made a noticeable one for me.

I did try not pinning the CPUs (deleting the cputune block) but just limiting it to 3 cores, and it didn't make a difference in performance.
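
The unpinned variant is just the vCPU count on its own, with the whole cputune block removed:

<!-- no <cputune> block: the 3 vCPUs can be scheduled on any host core -->
<vcpu placement='static'>3</vcpu>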

Hard Drives #

There seemed to be mixed views on whether to use the virtio driver or scsi for the vdisk drives.

This is how I have my drives set up:

<disk type='file' device='disk'>
	<driver name='qemu' type='raw' cache='writeback' discard='unmap' />
	<source file='/mnt/user/domains/Windows 11/vdisk1.img' />
	<target dev='hdc' bus='scsi' />
	<serial>vdisk1</serial>
	<boot order='1' />
	<address type='drive' controller='0' bus='0' target='0' unit='2' />
</disk>
<disk type='file' device='disk'>
	<driver name='qemu' type='raw' cache='writeback' discard='unmap' />
	<source file='/mnt/user/domains/Windows 11/vdisk2.img' />
	<target dev='hdd' bus='virtio' />
	<serial>vdisk2</serial>
	<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' />
</disk>
<disk type='block' device='disk'>
	<driver name='qemu' type='raw' cache='writeback' discard='unmap' />
	<source dev='/dev/disk/by-id/ata-CT500MX500SSD1_2025E2AE08D4' />
	<target dev='hde' bus='sata' />
	<serial>vdisk3</serial>
	<address type='drive' controller='0' bus='0' target='0' unit='4' />
</disk>

Disk 1 is a 64 GB vdisk that just contains Windows. It is located on the NVMe drive and uses the scsi bus. Disk 2 is my 300 GB vdisk, also located on the NVMe drive, using virtio. Disk 3 is my physical SSD, which is passed through as a SATA drive.

The only thing I have added to these is discard='unmap' on each of the driver lines. This is apparently needed to allow Windows to perform TRIM on the drives.

Network Drive #

The last and most important thing worth sharing in this setup is my network drive. I have a games share that is shared over Samba and mapped as a network drive in the VM.

I have to say that it is not fast at all, but it just serves as some extra storage. I did read something that said passing the share through as a VirtioFS drive increases the performance. This involves making sure the virtio drivers are installed in Windows, installing WinFsp, and making sure the VirtioFS service is running.
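
For reference, the VirtioFS entries in the domain XML look roughly like this (the share path below is just a placeholder, and the memoryBacking block sits at the top level of the XML rather than under devices):

<!-- shared memory backing is required for virtiofs; this goes at the domain level -->
<memoryBacking>
	<source type='memfd' />
	<access mode='shared' />
</memoryBacking>

<!-- this goes under <devices>; the target dir is the mount tag Windows sees -->
<filesystem type='mount' accessmode='passthrough'>
	<driver type='virtiofs' />
	<source dir='/mnt/user/games' />
	<target dir='games' />
</filesystem>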

It took me far too long to realise that this was what caused my CPU to remain at 100% on all cores while doing nothing at all in Windows. Everything became really unresponsive, and naturally loading any game took a lifetime. If you are thinking of installing VirtioFS, DON'T DO IT.

Resources #

These links were helpful when coming up with the above. If my setup doesn't work for you, they might help too.
