A common drawback of virtual machines is poor video performance. This is because a hypervisor like KVM (which Proxmox uses) emulates the graphics card in software. That emulation runs on the CPU, which is not well suited for rendering images, so graphics output is often sluggish. Other specialized hardware connected to the PCIe bus may also be necessary to provide additional functionality or connectivity you may want.
This guide dives deep into Linux internals. Be aware that misconfiguration can render your installation unusable. Try it on a non-production system first!
Also check whether your hardware is capable of IOMMU (I/O Memory Management Unit) remapping. Both your motherboard and your CPU need to support this. Generally, Intel systems with VT-d and AMD systems with AMD-Vi do. It is still not guaranteed that everything will work out of the box, due to poor hardware implementations and missing or low-quality drivers. Furthermore, server-grade hardware often has better support than consumer-grade hardware. Please refer to your hardware vendor to check whether they support this feature under Linux for your specific setup.
This step is necessary to activate the IOMMU on your system. On a GRUB-based installation, this is done by adding one of the following parameters to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub:
- intel_iommu=on for Intel CPUs
- amd_iommu=on for AMD CPUs
After updating this file you need to execute one more command to make your changes active:
update-grub
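As an illustration, on an Intel system the edited line in /etc/default/grub could look like the following (quiet stands in for whatever options were already present):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
```

Systems booting via systemd-boot (for example, Proxmox installed on ZFS) keep the kernel command line in /etc/kernel/cmdline instead.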
Update Kernel Modules
You also have to make sure the following modules are loaded. This can be achieved by adding them to /etc/modules, one per line:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
After changing anything module-related, you need to refresh your initramfs. On Proxmox this can be done by executing:
update-initramfs -u -k all
Check if it worked
Finally, reboot for the changes to take effect, and check that the IOMMU is indeed enabled.
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
should display that IOMMU, Directed I/O, or Interrupt Remapping is enabled; the exact message varies depending on hardware and kernel.
It is also important that the device(s) you want to pass through are in their own, separate IOMMU group; all devices that share a group must be passed through together. This can be checked with:
find /sys/kernel/iommu_groups/ -type l
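To make sense of that output, you can count the devices per IOMMU group. Below is a minimal, self-contained sketch that works on a sample (hypothetical) output instead of the live /sys tree; on a real host, pipe the find command in instead:

```shell
# Sample output of: find /sys/kernel/iommu_groups/ -type l
# (hypothetical PCI addresses, for illustration only)
sample='/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1'

# The group number is the 5th '/'-separated field of each link path.
# A GPU sharing a group only with its own audio function (as in group 1)
# is fine; sharing a group with unrelated devices is not.
printf '%s\n' "$sample" \
  | awk -F/ '{count[$5]++} END {for (g in count) print "group " g ": " count[g] " device(s)"}' \
  | sort
```
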
It is not possible to display the frame buffer of the GPU via noVNC or SPICE on the Proxmox VE web interface. When passing through a whole GPU and graphics output is wanted, one has to either physically connect a monitor to the card, or configure remote desktop software (for example, VNC or RDP) inside the guest. If you only want to use the GPU as a hardware accelerator, this is not required.
When passing through a GPU, the best compatibility is reached when using q35 as the machine type, OVMF (EFI for VMs) instead of SeaBIOS, and PCIe instead of PCI. Note that if you want to use OVMF for GPU passthrough, the GPU needs to have an EFI-capable ROM; otherwise, use SeaBIOS instead.
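For orientation, these two choices end up as the following lines in the VM's configuration file (a sketch; `<vmid>` stands for your VM's numeric ID):

```
# /etc/pve/qemu-server/<vmid>.conf -- relevant lines only
bios: ovmf
machine: q35
```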
The host must not use the card. There are two methods to achieve this:
- pass the device IDs to the options of the vfio-pci module by adding
options vfio-pci ids=1234:5678,4321:8765
to a .conf file in /etc/modprobe.d/, where 1234:5678 and 4321:8765 are the vendor and device IDs obtained by:
lspci -nn
- blacklist the driver completely on the host, ensuring that it is free to bind for passthrough, by adding
blacklist DRIVERNAME
to a .conf file in /etc/modprobe.d/
For both methods you need to update the initramfs again and reboot after that.
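The vendor and device IDs are the last bracketed xxxx:xxxx pair on the matching lspci -nn line. A self-contained sketch that pulls the pair out of a sample line (the GPU shown is hypothetical):

```shell
# Hypothetical lspci -nn line for a GPU; on a real host use:  lspci -nn | grep VGA
sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)'

# The vendor:device pair is the last [xxxx:xxxx] bracket on the line
# ([0300] is the device class and contains no colon, so it does not match).
id=$(printf '%s\n' "$sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n1 | tr -d '[]')
echo "$id"   # -> 10de:1b81
```

The resulting pair is what goes into the ids= list of the vfio-pci option described above.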
To check whether your changes were successful, you can run
lspci -nnk and inspect your device's entry. If it says
Kernel driver in use: vfio-pci
or the in use line is missing entirely, the device is ready to be used for passthrough.
Adding the device to a VM can be done via the console or the GUI. This guide only shows the GUI way.
First select the virtual machine that should use the PCIe card, then change to its Hardware tab.
Under Add, select PCI Device, then pick the desired device in the drop-down menu.
Some devices may require you to check the PCI-Express option. Just try it out.
If a device exposes multiple functions (for example, a GPU and its HDMI audio device) and you want all of them, check the All Functions box.
After that, the PCIe device can be used from inside the VM.
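For reference only (the steps above use the GUI), the result of this is a hostpci entry in the VM's configuration file; a sketch assuming a hypothetical device address of 01:00:

```
# /etc/pve/qemu-server/<vmid>.conf -- 01:00 is a placeholder PCI address
# Omitting the function suffix (.0) corresponds to the "All Functions" checkbox;
# pcie=1 corresponds to the PCI-Express option.
hostpci0: 01:00,pcie=1
```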