There are several reasons why you might want to use GVT-g to pass a share of your iGPU through to a guest operating system. If you do not have a dedicated GPU for every VM/LXC, or no discrete GPU attached to your host at all, GVT-g can still provide graphics acceleration. A few requirements need to be met in advance, but once the setup is complete it is a very cost-effective solution. In my particular case I wanted to transcode my Plex media on the fly with hardware acceleration. Since I moved to a datacenter, it is not possible to extend the hardware at will with a dedicated graphics card.
Prerequisites
Before we can start with the setup – here are a few requirements that need to be met beforehand:
- root (or sudo) shell access
- supported CPU
Intel published a list containing all supported CPUs here. You should check that your CPU is listed with GVT-g support. Newer CPU generations that only offer SR-IOV-based graphics virtualization require a different, more involved setup, which is why those CPUs are not covered by this tutorial.
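Besides Intel's list, you can check whether the i915 driver shipped with your kernel was built with GVT-g support by looking for the enable_gvt module parameter (a quick sanity check; modinfo is available on a standard Proxmox install):

```shell
# prints the enable_gvt parameter line if the driver was built with GVT-g
modinfo -p i915 | grep enable_gvt
```

If this prints nothing, your kernel's i915 driver does not support GVT-g and the rest of the guide will not work.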
Setup
Step 1: Log into your Proxmox node via SSH or open a shell via the web gui.
Step 2: Use a text editor to open your GRUB config file (for systemd-boot please refer to the Proxmox PCI passthrough guide).
nano /etc/default/grub
Step 3: Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT.
Step 4: Add intel_iommu=on and i915.enable_gvt=1 to the list of parameters, for example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
Step 5: Save the file and then update GRUB.
update-grub
Step 6: Reboot the Proxmox node.
Step 7: Validate your changes. The following command should print IOMMU-related kernel messages; if it does not produce any output, something went wrong.
dmesg | grep -e DMAR -e IOMMU
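You can additionally confirm that IOMMU groups were created. This is a minimal sketch; if the loop prints device paths, the IOMMU is active:

```shell
# list every PCI device that was placed into an IOMMU group
for g in /sys/kernel/iommu_groups/*; do
  echo "group ${g##*/}:"
  ls "$g/devices"
done
```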
Step 8: Use a text editor like nano to add the following modules to /etc/modules.
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
# Modules required for Intel GVT
kvmgt
vfio_mdev
Step 9: Save your changes and reboot again.
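After the reboot you can verify that the modules were loaded and that the iGPU now exposes mediated device types. The PCI address 0000:00:02.0 below is the usual address of an Intel iGPU; substitute your own if it differs:

```shell
# kvmgt and the vfio modules should appear in the module list
lsmod | grep -e kvmgt -e vfio

# if GVT-g is active, this directory exists and lists the available vGPU types
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
```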
Usage
After the setup you should be able to add a virtual (mediated) GPU to your VM/LXC. The following steps must be executed as the root user; otherwise you won't be able to add the PCI device.
Step 1: Add a new PCI Device
Step 2: Select GPU as Device
It might be required to check the PCI ID beforehand, because the device name shown in the list is not always helpful. You can check your ID by executing the following command:
lspci -nnv | grep VGA
This should result in output similar to this:
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04) (prog-if 00 [VGA controller])
Here, the PCI address of the iGPU is 00:02.0. Make note of that address.
Step 3: Select MDev Type
Depending on how much performance is needed you can choose between multiple configurations. The selection is specific to your CPU.
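To see which mediated device types your iGPU offers before picking one in the GUI, you can inspect them via sysfs. The type name i915-GVTg_V5_4 below is an example from my system; yours may differ:

```shell
# list all vGPU types the iGPU exposes
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# inspect one type: resolution, memory sizes and remaining instances
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/description
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/available_instances
```

Smaller types allow more simultaneous vGPU instances; larger types give a single guest more of the iGPU.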
Step 4: Select the correct settings (Advanced enabled)
If you're using a Linux-based guest you should only check ROM-Bar. However, if your machine type is q35 (mostly used for Windows guests) you should also enable PCI-Express.
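If you prefer the command line over the web GUI, the same device can be attached with qm. The VM ID 100 and the mdev type i915-GVTg_V5_4 are placeholders for your own values:

```shell
# attach the iGPU as a mediated device to VM 100;
# replace the mdev type with one offered by your CPU
qm set 100 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
```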
Conclusion
After all those steps you should see a GPU inside your guest OS. If you configure, for example, Plex (see here) to use hardware acceleration and start a transcoded stream, you should see some utilization in intel_gpu_top. Plex also confirms that the Intel GPU is being used by displaying "(hw)" for the video stream.
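intel_gpu_top is not installed by default. On Debian-based systems, including the Proxmox host, it ships with the intel-gpu-tools package (the package name may differ in other distributions):

```shell
apt install intel-gpu-tools   # provides intel_gpu_top
intel_gpu_top                 # live utilization of the Intel GPU engines
```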
Sources
https://blog.ktz.me/passthrough-intel-igpu-with-gvt-g-to-a-vm-and-use-it-with-plex/
https://cetteup.com/216/how-to-use-an-intel-vgpu-for-plexs-hardware-accelerated-streaming-in-a-proxmox-vm/
Comments
Hello! Thank you for the guide. I followed all the steps but my MDev is greyed out.
The GUI seems to be updated. Try selecting raw device and then your GPU. That still seems to work for me.