How to Pass Through a Single GPU to a Virtual Machine: QEMU/KVM


Virtualization no longer has to mean compromising on performance. By bypassing the software emulation layer and handing direct hardware control to your virtual machine, single-GPU passthrough unlocks near-native gaming and rendering speeds on Linux, all without secondary hardware or complex dual-booting.

Hi, my name is Abdullah Musa and welcome to my blog, MusaBase! In this comprehensive guide, I'll show you how to pass through a single GPU to a QEMU/KVM VM on Linux, with no SSH connection, second GPU, or remote display streaming required. You'll learn:

  • Preparing your Bootloader for IOMMU Configuration
  • Creating a QEMU VM with Configurations to Passthrough a Single GPU
  • VFIO Configuration
  • Single-GPU Passthrough Script Integration
  • Attaching the GPU to the VM

I'm using Arch Linux, btw! But you can follow this tutorial on any Linux distribution, provided your CPU and GPU support hardware-assisted virtualization.







Why Does Single-GPU Passthrough Usually Fail?

In practice, single-GPU passthrough often fails because the lone GPU can't be reliably isolated or reset for the guest VM. By default, the motherboard/chipset IOMMU groups your GPU with other peripherals, so VFIO can't detach it unless you use an ACS override patch to break those groups.
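To see how your devices are grouped, you can enumerate the IOMMU groups once the IOMMU kernel parameter (covered in Step 1) is enabled. This is a minimal sketch of the listing loop that circulates on the Arch Wiki; empty output means the IOMMU is off or unsupported:

```shell
#!/bin/bash
# List every IOMMU group and the devices it contains, so you can check
# whether your GPU shares a group with other peripherals.
list_iommu_groups() {
  shopt -s nullglob
  for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}:"
    for dev in "$group"/devices/*; do
      echo -e "\t$(lspci -nns "${dev##*/}")"
    done
  done
}
list_iommu_groups
```

If your GPU and its audio function sit in a group of their own, you won't need the ACS override patch.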

The Core Issue

Without clean isolation and reset capabilities, the hypervisor either rejects the GPU device or the VM boots to a black screen or hangs during device reset.




The Fix

Fortunately, there's a GitLab repository by Risingprism called Single GPU Passthrough, which includes detailed instructions and handy tweaks. In the next sections, I'll guide you through installing this script and applying the additional configurations needed for smooth single-GPU passthrough.




Prerequisites

Before we begin the GPU Passthrough process, ensure your hardware and software meet the following requirements:

  • System Architecture: You must be running a 64-bit Linux distribution.
  • Virtualization Stack: Ensure QEMU/KVM is properly configured. If you haven't set this up, follow my QEMU installation and configuration guide.
  • BIOS/UEFI Settings: Virtualization must be enabled in your firmware:
    • Intel CPUs: Enable VT-d and VT-x.
    • AMD CPUs: Enable IOMMU and AMD-V.
  • Host OS: For a high-performance foundation, I recommend an optimized host. See my Arch Linux installation walkthrough for more details.
  • Hardware Compatibility: Confirm that both your GPU and motherboard firmware support UEFI.
  • Installation Mode: Your Linux distribution must be installed in UEFI mode to ensure compatibility with modern VFIO scripts.
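You can confirm the UEFI installation-mode requirement from a terminal; this quick check relies on the fact that the kernel only exposes /sys/firmware/efi when it was booted via UEFI:

```shell
#!/bin/bash
# /sys/firmware/efi only exists when the kernel was booted via UEFI,
# so its presence confirms the installation-mode requirement.
boot_mode() {
  if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
  else
    echo "Legacy BIOS"
  fi
}
boot_mode
```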

System Requirements:

  • Processor: Intel or AMD x86-64 compatible with IOMMU support.
  • RAM: 8GB minimum, though 16GB or more is recommended for a smooth gaming experience.
  • Storage: Space requirements vary depending on the Guest OS and the applications you plan to install.
  • Operating System: Any modern Linux distribution (Arch, Fedora, or Debian-based).
  • Virtualization Tool: QEMU/KVM with Libvirt.
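To verify the processor requirement, count the virtualization flags in /proc/cpuinfo; vmx is Intel VT-x and svm is AMD-V:

```shell
#!/bin/bash
# Count CPU entries advertising vmx (Intel VT-x) or svm (AMD-V).
# A result of 0 means hardware virtualization is unavailable or
# disabled in your firmware settings.
virt_flag_count() {
  # grep -c prints the count even when it is 0 (its exit status is 1 then),
  # so swallow that exit status to keep the output clean.
  grep -cE 'vmx|svm' /proc/cpuinfo || true
}
virt_flag_count
```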



Step 1: Editing the Bootloader

A bootloader is a program that runs at system startup and loads the Linux kernel, allowing the operating system to boot. This example shows how to configure the GRUB bootloader, but you can adapt these steps for systemd-boot as well.

1.1: GRUB Bootloader

GRUB is a powerful and flexible bootloader that supports multiple operating systems and advanced boot options, making it ideal for dual-boot setups.

1.1.1: Edit GRUB Configuration

To configure GRUB, follow these steps:

  • Open a terminal and run:
sudo nano /etc/default/grub
  • In the GRUB file, look for the line starting with "GRUB_CMDLINE_LINUX_DEFAULT".
  • Add the following parameter inside the quotes (on AMD CPUs, use amd_iommu=on instead):
intel_iommu=on
  • Press Ctrl + O, Enter to save, then Ctrl + X to exit the editor.
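After saving, the relevant line in /etc/default/grub should look something like this (your other defaults, such as quiet or loglevel, will vary):

```
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet intel_iommu=on"
```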

1.1.2: Update GRUB Configuration

Arch:

  • In the terminal, run:
sudo grub-mkconfig -o /boot/grub/grub.cfg

Fedora and openSUSE (these ship the grub2-prefixed tools):

  • In the terminal, run:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
  • On Fedora 33 or earlier, the config lives on the EFI partition:
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Ubuntu, Debian, Linux Mint, Manjaro, and derivatives:

sudo update-grub

Reboot the system to apply the changes.


1.2: Systemd-boot

systemd-boot is a lightweight, UEFI-only boot manager that offers faster boot times and simpler configuration but does not support legacy BIOS systems.

1.2.1: Edit Systemd Bootloader

To edit systemd-boot, look for the options root= line in your loader entry.

POP!_OS:

  • Open a terminal and run:
sudo nano /boot/efi/loader/entries/Pop_OS-current.conf
  • Look for the line that starts with options root=.
  • Add the following after the existing options (use amd_iommu=on on AMD CPUs):
intel_iommu=on
  • Next, press Ctrl + O to save, Enter to confirm, then Ctrl + X to exit the file.

1.2.2: Update Systemd Bootloader

  • systemd-boot reads your loader entry directly at boot, so the edit needs no regeneration step. Optionally, you can update the boot manager binary itself with:
sudo bootctl update

Note: Reboot your system to apply the changes.




Step 2: Installation & Configuration of QEMU

In this step, we will install and configure QEMU on Linux. If you already have QEMU installed, you can skip the installation section and move directly to the configuration part.

2.1: Installation

If you haven't installed QEMU yet, please follow my QEMU/KVM installation and setup guide. Once the virtualization stack is ready, we'll move on to configuring its services and libraries.

2.2: Configuration

Note: The following steps are crucial; please follow them carefully.

2.2.1: Editing libvirtd.conf for Logs & Troubleshooting

libvirtd.conf is the main configuration file for libvirtd, the background service that manages virtual machines on Linux. It controls things like user permissions, network access, and how virtualization tools such as QEMU and KVM are allowed to run and communicate with the system.

  • Open a terminal and run:
sudo nano /etc/libvirt/libvirtd.conf
  • At the end of the file, add:
log_filters="3:qemu 1:libvirt"
log_outputs="2:file:/var/log/libvirt/libvirtd.log"
  • Press Ctrl + O, then press Enter to save the changes, and press Ctrl + X to exit the file.

2.2.2: Editing qemu.conf to Avoid Permission Issues

qemu.conf is the main configuration file for QEMU itself. It controls how virtual machines are executed, including security settings, user and group permissions, device access, and integration with libvirt. In simple terms, it defines how QEMU is allowed to run and interact with your system.

  • Run the following command to open qemu.conf for editing:
sudo nano /etc/libvirt/qemu.conf
  • Press Ctrl + W (nano's search shortcut) and search for user =.
  • Locate the lines:
#user = "root"
#group = "root"
  • Remove the #, then replace "root" with your actual username and group name.
  • For me, my username is retro1o1 and my main user group is wheel, but make sure you enter your username and group name correctly.
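If you're unsure of your username or primary group, the id command prints the exact values to drop into qemu.conf:

```shell
#!/bin/bash
# Print the exact values to use in qemu.conf's user = and group = lines.
id -un   # username      -> user = "<this>"
id -gn   # primary group -> group = "<this>"
```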

2.3: Adding Your User to the Libvirt Group

To allow proper libvirt file access, add your user to both the kvm and libvirt groups:

  • Run the following command to add your user to the libvirt group:
sudo usermod -a -G kvm,libvirt $(whoami)
  • Now, enable and start the libvirtd service.
  • Run the following commands consecutively:
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
  • Log out and back in (or reboot) so the new group membership takes effect, then verify it by running:
groups $(whoami)
  • In the output, you should see kvm and libvirt listed.
  • Finally, restart libvirt to load the changes:
sudo systemctl restart libvirtd

2.4: Enabling Internet for VMs via Default Network

To ensure VM internet access, activate the default virtual network in one of two ways:

2.4.1: Automatic Start

  • Run the following command:
sudo virsh net-autostart default

2.4.2: Manual Network Start

  • To start the network manually:
sudo virsh net-start default

Note: You must run this command before each VM launch if the network is not auto-started.




Step 3: Creating the VM & Installing Windows 11

Now that we've configured libvirt permissions, it's time to install an operating system on your VM. We'll use Windows 11 in this guide, but you can follow the same steps for Windows 10.

3.1: Virtual Machine Setup

  • Launch Virt-Manager by either running virt-manager in a terminal or selecting it from your desktop's application menu.
  • Click the New VM 🖥️ icon in the toolbar.
  • Choose Local install media (ISO image or CDROM) and click Forward.
  • Browse to your Windows 11 (or 10) .iso file. If virt-manager doesn't auto-detect the OS in the bottom dropdown, select Windows 11 manually.
  • Allocate RAM and CPU cores based on your system resources and workload.
  • Create or choose a virtual disk for storage, using either the default path or a custom virtual storage drive.
  • On the final screen, give your VM a name (I'm using win11).
  • Check Customize configuration before install, then click Finish.

3.2: Configuring Virtual Hardware in Virt-Manager

In the VM settings window:

3.2.1: Chipset & Firmware

  • Change Chipset to Q35.
  • Change Firmware to UEFI.

3.2.2: CPU Configuration

  • Enable Copy host CPU configuration (host-passthrough).
  • Click Topology, then enable Manually set CPU topology.
  • Adjust Sockets, Cores, and Threads to match your physical CPU's specifications.

I have an Intel i5 4th gen, 4-core CPU with no Hyper-Threading, so my CPU configuration looks like the image below. Please set yours according to your CPU specifications.
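If you're unsure of your host's layout, lscpu reports the exact numbers to mirror in the Sockets, Cores, and Threads fields; this one-liner just filters the relevant rows:

```shell
#!/bin/bash
# Show the host values to mirror in the VM's Sockets/Cores/Threads fields.
cpu_topology() {
  lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core)'
}
cpu_topology
```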


3.3: Install Windows

Launch the VM. If configured correctly, the QEMU boot screen will load your Windows installer.




Step 4: (Optional) Setting Up VFIO on Arch Linux

Note: This optional step is for Arch Linux users. If the passthrough script fails, follow these instructions, adjusting file paths and commands for other distros as needed.

4.1: Load VFIO Modules

Virtual Function I/O (VFIO) is a Linux kernel framework enabling QEMU to directly access PCI devices (such as GPUs) for secure passthrough. Before running the setup script, verify that VFIO modules are loaded.

  • In a terminal, check for loaded modules:
lsmod | grep vfio
  • Make sure these three modules are loaded:
    • vfio
    • vfio_iommu_type1
    • vfio_pci

If any are missing, edit your initramfs config (/etc/mkinitcpio.conf) to include them.

  • Run:
sudo nano /etc/mkinitcpio.conf
  • In the MODULES=(...) line, add:
vfio vfio_iommu_type1 vfio_pci
  • The final edited line will look like this (keep any modules that were already listed):
MODULES=(vfio vfio_iommu_type1 vfio_pci)
  • Next, press Ctrl + O to save, press Enter to confirm, then press Ctrl + X to exit the file.
  • Now, regenerate initramfs with:
sudo mkinitcpio -p linux

Reboot your system after regenerating the initramfs.




Step 5: Installing the Single-GPU Passthrough Script

Once Windows finishes installing, shut down the VM and open a terminal.

  • In the terminal, run the following command:
git clone https://gitlab.com/risingprismtv/single-gpu-passthrough.git
  • Change into the repository folder:
cd single-gpu-passthrough
  • Now, in the single-gpu-passthrough folder, run these two commands consecutively to make the installer executable and run it:
sudo chmod +x install_hooks.sh
sudo ./install_hooks.sh

What Does the install_hooks.sh Script Do?

The script verifies and installs these hook files:

  • /etc/libvirt/hooks/qemu
  • /usr/local/bin/vfio-startup and /usr/local/bin/vfio-teardown

Hook Script Actions:

  • On VM Start (broadly, the reverse of shutdown):
    • Stops the display manager, ending your current desktop session.
    • Unbinds the GPU from its host driver and binds it to vfio-pci.
    • Starts the sleep-inhibit service so the host won't suspend while the VM runs.
  • On VM Shutdown:
    • Stops the sleep-inhibit service.
    • Unbinds the GPU from vfio-pci.
    • Rebinds the GPU to the host PCI driver.
    • Restarts the display manager.

Note: When the VM shuts down, the display manager will restart to rebind the GPU, which will log you out of the current session and require you to log back into your desktop. Therefore, save any changes or files before booting your VM for single-GPU passthrough.
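For the curious, the installed /etc/libvirt/hooks/qemu hook is, at its core, a dispatcher on the phase arguments libvirt passes in. This is a simplified sketch of that idea, not the actual script (which also handles logging and errors):

```shell
#!/bin/bash
# Simplified sketch of the qemu hook's dispatch logic. libvirt invokes
# the hook as: qemu <guest-name> <phase> <sub-phase> ...
qemu_hook() {
  local guest="$1" phase="$2" sub="$3"
  if [ "$phase" = "prepare" ] && [ "$sub" = "begin" ]; then
    echo "run /usr/local/bin/vfio-startup"    # detach the GPU from the host
  elif [ "$phase" = "release" ] && [ "$sub" = "end" ]; then
    echo "run /usr/local/bin/vfio-teardown"   # hand the GPU back to the host
  fi
}
qemu_hook "win11" "prepare" "begin"
```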




Step 6: Attaching the GPU to Your VM

Before you add your GPU, remove the virtual display devices that QEMU attaches by default.

6.1: Removing the Default Virtual Display

QEMU adds Spice and QXL as virtual display devices. Remove both, since the passed-through GPU will handle video output.

6.1.1: Remove Spice & QXL

  • In Virt-Manager, double-click your Windows VM. (This opens the VM console window.)
  • Click the info (ℹ️) button to view VM settings.
  • Right-click Display Spice and select Remove Hardware.
  • Repeat the same for Video QXL.

If you can't delete the components or the remove button is greyed out, do the following:
  • Go back to Virt-Manager and click on Edit in the top menu bar.
  • Click on Preferences.
  • Check the box for Enable XML Editing.
  • Now, go back to your Windows Virtual Machine settings, click on the Overview section, then click on XML.
  • In the XML file, look for the graphics tag for Display Spice:

<graphics type="spice" autoport="yes">
  <listen type="address"/>
  <image compression="off"/>
  <gl enable="no"/>
</graphics>
  • For Video QXL, look for:

<video>
  <model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1" primary="yes"/>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
  • Also remove the audio line:
<audio id="1" type="none"/>
  • After deleting these blocks from the XML, click Apply to save the changes.


6.2: Fetching the GPU ROM

Note: Some older AMD or NVIDIA cards (and even some newer models) may require a dumped vBIOS ROM for successful passthrough.
Proceed without a ROM first; only if you hit a persistent black screen or a reset failure should you fetch and attach your GPU's vBIOS. You can find pre-dumped files for many cards on TechPowerUp if you know your card's exact model and vendor.

For detailed instructions on dumping your own GPU ROM or attaching the GPU with a ROM, refer to the script's official wiki.



6.3: Attaching the GPU

Finally, attach your physical GPU to the VM.

  • Open Virt-manager and double-click your Windows VM.
  • Click the info (ℹ️) button to open the VM's virtual hardware details.
  • Click Add Hardware at the bottom left of the window.
  • Choose PCI Host Device.
  • Select your GPU device from the list, then click Finish.
  • Repeat the process to add the GPU's audio controller (often listed as "HD Audio Controller" for the same device).
  • For my system, the devices are named as follows (yours will be different):


6.3.1: Attaching the ROM to Your GPU

Note: Refer to Step 6.2 for instructions on dumping your GPU's ROM. If you encounter a black screen or "monitor going to sleep" issues, attach the vBIOS as follows:

  • Ensure your ROM file is in the correct directory (e.g., /usr/share/vgabios/).
  • In the Hardware Details window, select the GPU entry, then click the XML tab.
  • Inside the GPU's <hostdev> block, insert a <rom> element just above the <address type='pci'> line, for example:
<rom file='/usr/share/vgabios/rx-590.rom'/>

After adding your GPU and its Audio Controller, your virtual machine's hardware list should look similar to this:

Important Note: Your PCI devices are divided into groups called IOMMU Groups. Your GPU belongs to one or more of these groups, and you must pass the entirety of the group that contains your GPU to the VM for passthrough to work correctly. For more details, please refer to the guide on IOMMU Groups.




Step 7: Additional Tweaks & Optimizations

7.1(a): Enabling Simultaneous Multithreading (SMT) on AMD CPUs

  • In Virt-Manager, open your Windows VM and click the ℹ️ button to view its Virtual Hardware Details.
  • Click the XML tab and locate the line starting with <cpu mode="host-passthrough" check="none">.
  • Replace the entire <cpu> block with the following example (adjust the topology values to your own CPU), which defines the topology and enables the required topoext feature:
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' dies='1' cores='12' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>

7.1(b): Disabling SMEP on Intel CPUs

  • Go to the XML tab in your VM's Virtual Hardware Details window and find the <cpu> tag.
  • Edit the <cpu> section to match the following example (again, adjust the topology to your CPU), which explicitly disables the SMEP feature:
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' dies='1' cores='12' threads='2'/>
  <feature policy='disable' name='smep'/>
</cpu>

7.2: Adding a Physical Mouse & Keyboard

When you run your VM with GPU passthrough enabled, the emulated mouse and keyboard attached by default will not work on the passed-through GPU's display. To control the VM, you must pass through your physical mouse and keyboard as USB devices.

  • Open the VM's Virtual Hardware Details and click Add Hardware.
  • In the new window, select USB Host Device, choose your mouse from the list, and click Finish.
  • Repeat the process to add your physical keyboard.

7.3: Add VirtIO (virtio-win) Drivers

VirtIO is a virtualization standard for network and disk device drivers that uses a lightweight, shared-memory interface to pass data between the guest and host with minimal CPU overhead.
Without VirtIO drivers, Windows VMs fall back to slow, fully-emulated hardware that caps disk and network performance while increasing CPU overhead. Installing the virtio-win driver ISO during or after VM setup unlocks paravirtual block, network, balloon, and other device drivers, drastically improving I/O throughput, reducing latency, and enabling advanced features in the Windows guest.
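For reference, a disk on the paravirtual path looks like this in the VM's XML (the file path here is just an example); the key part is bus='virtio' on the target:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win11.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Note that Windows cannot see a virtio disk until the storage driver from the ISO is installed, which is why attaching the driver CDROM first matters.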

  • Download the latest virtio-win driver ISO from the Fedora People archive. (As of this writing, the latest stable version is 0.1.266-1).
  • Open Virt-Manager, double-click your Windows VM, and go to Virtual Hardware Details.
  • Click Add Hardware and select Storage.
  • In the storage details section, select the option for "Select or create custom storage", then click the Manage... button.
  • Click Browse Local and select your downloaded virtio-win.iso file.
  • In the Device type dropdown, select CDROM device and click Finish.

Now, when you boot your Windows VM, you will see this ISO mounted as a CDROM device. Installing these drivers from the ISO is also the solution for common issues like no sound inside the Windows VM and internet connection problems.




Step 8: Launching Your VM with GPU Passthrough

Everything is now configured. Let's start your Windows VM with GPU passthrough. When it boots, your host display will go dark as the GPU unbinds from the host and binds to the guest VM. Since I'll be inside Windows, I cannot show you host-side screenshots or screen recordings, so I will use my phone's camera to record this part of the process.

  • Open a terminal.
  • Run the following command to launch Virt-Manager with the necessary permissions:
sudo virt-manager
  • In the Virt-Manager window, double-click your Windows VM.
  • Click the ▶️ (Play) button to start the virtual machine.




Benchmarks Inside the VM



🎉 Congratulations, your Single-GPU Passthrough VM is now up and running!

You have achieved near-native performance for your virtual machine. To maintain peak efficiency, ensure you have the VirtIO drivers installed for audio, network, and storage. You can also pass through physical disks for even faster I/O, just remember to unmount them from the host first to avoid data corruption.

What's Next?

🎮 Pure Gaming Environment: Now that your GPU is ready, why not test it with a dedicated gaming OS? Follow my guide to virtualize SteamOS in QEMU for a console-like experience.

💻 Native Dual-Boot: If you ever need to run your host OS and Windows with 100% hardware access simultaneously, my Arch Linux Dual-Boot guide is the perfect next step.

🌐 Explore Cloud OS: For a lightweight, cloud-focused alternative on your VM, check out how to set up Chrome OS Flex on QEMU.

🛠️ Support & Troubleshooting: If you encounter any issues with hooks, black screens, or IOMMU grouping, feel free to drop a comment below. I am here to help you fine-tune your setup.

If this guide helped you, subscribe for more advanced Linux and virtualization tutorials.
101 out, I’ll see you in the next one! 🚀
