How to Pass Through a Single GPU to a Virtual Machine: QEMU/KVM
GPU passthrough with QEMU/KVM is now simpler than ever.
Hi, my name is Abdullah Musa, and welcome to my blog, MusaBase! In this comprehensive guide, I'll show you how to pass through a single GPU in QEMU/KVM on Linux, with no SSH connection or second GPU required. I'm using Arch Linux, but you can follow this tutorial on any Linux distribution, provided your CPU and GPU support hardware-assisted virtualization.
Jump to:
- Why Single-GPU Passthrough Used to Fail
- The Fix
- Prerequisites
- Step 1: Editing the Bootloader
- Step 2. Installing & Configuring QEMU
- Step 3. Creating the VM & Installing Windows 11
- Step 4. (Optional) Setting Up VFIO on Arch Linux
- Step 5. Installing the Single-GPU Passthrough Script
- Step 6. Attaching the GPU to Your VM
- Step 7. Additional Tweaks & Optimizations
- Step 8. Launching Your VM with GPU Passthrough
- Benchmark inside VM
- Afterwards
Why Single-GPU Passthrough Used to Fail
In practice, single-GPU passthrough often fails because the lone GPU can't be reliably isolated or reset for the guest VM. By default, the motherboard/chipset places your GPU in an IOMMU group with other peripherals, so VFIO can't detach it cleanly unless you use an ACS override patch to break up those groups.
The Core Issue
Without clean isolation and reset capabilities, the hypervisor either rejects the GPU device, or the VM boots to a black screen or hangs during device reset.
The Fix
Fortunately, there's a GitLab repository by Risingprism called Single GPU Passthrough, which includes detailed instructions and handy tweaks. In the next sections, I'll guide you through installing this script and applying the additional configurations needed for smooth single-GPU passthrough.
Prerequisites
Before we begin, ensure you're running a 64-bit Linux distribution and have QEMU installed, and enable virtualization support in your system's BIOS: VT-x on Intel CPUs or AMD-V on AMD CPUs. If you haven't installed QEMU yet, check out this QEMU install & config guide for a quick walkthrough. Finally, confirm that both your GPU and motherboard firmware support UEFI, and that your Linux distro was installed in UEFI mode.
- Processor: Intel or AMD x86-64-bit compatible with virtualization support.
- RAM: 8GB or more, basically the higher the better.
- Hard Drive: Varies depending on your OS and the files you plan to store or create inside the VM.
- Operating System: Any recent Linux distribution.
- Virtual Tool: QEMU
Step 1: Editing the Bootloader
This example shows how to configure the GRUB bootloader, but you can adapt these steps for systemd-boot as well.
1.1: GRUB Bootloader
1.1.1: Edit GRUB Configuration
To configure GRUB, follow these steps:
- Open terminal and run:
sudo nano /etc/default/grub
- In the GRUB file look for the line beginning with "GRUB_CMDLINE_LINUX_DEFAULT".
- Append the following parameter inside the quotes (on AMD CPUs, use amd_iommu=on instead):
intel_iommu=on
- Press Ctrl + O, Enter to save, then Ctrl + X to exit the editor.
1.1.2: Update GRUB Configuration
Arch:
- In the terminal, run:
sudo grub-mkconfig -o /boot/grub/grub.cfg
Fedora / openSUSE (these distros name the binary grub2-mkconfig):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- On Fedora 33 or earlier:
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
Ubuntu, Debian, Linux Mint, Manjaro, and derivatives:
sudo update-grub
Reboot the system to apply the changes.
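After rebooting, it's worth confirming that the kernel actually picked up the IOMMU parameter. Here's a quick sanity check you can paste into a terminal; the check_cmdline helper name is my own:

```shell
# Succeeds if the given kernel command line contains an IOMMU flag.
check_cmdline() {
  echo "$1" | grep -q -e 'intel_iommu=on' -e 'amd_iommu=on'
}

if check_cmdline "$(cat /proc/cmdline 2>/dev/null)"; then
  echo "IOMMU kernel parameter is set"
else
  echo "IOMMU kernel parameter missing, re-check your bootloader entry"
fi

# Kernel log messages are another good signal (DMAR on Intel, AMD-Vi on AMD):
# sudo dmesg | grep -i -e DMAR -e AMD-Vi
```

If the parameter is missing, go back and double-check which bootloader your distro actually uses before editing again.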
1.2: Systemd-boot
1.2.1: Edit Systemd Bootloader
To edit systemd-boot, look for the options root= line in your loader entry.
POP!_OS
- Open terminal and run:
sudo nano /boot/efi/loader/entries/Pop_OS-current.conf
- Look for the line that starts with options root= .
- Append intel_iommu=on after the existing options (on AMD CPUs, use amd_iommu=on instead).
intel_iommu=on
- Next, press Ctrl + O to save, Enter to confirm, then Ctrl + X to exit the file.
1.2.2: Update Systemd Bootloader
- Now to update the systemd-bootloader, run the following command:
sudo bootctl update
Note: Reboot your system to apply the new changes.
Step 2. Installing & Configuring QEMU
2.1. Installation
If you haven't installed QEMU yet, follow this installation guide to set it up. Once QEMU is ready, we'll move on to configuring its services and libraries.
2.2. Configuration
Note: The following steps are crucial; please follow them carefully.
2.2.1: Editing libvirtd.conf for Logs & Troubleshooting
- Open a terminal and run:
sudo nano /etc/libvirt/libvirtd.conf
- At the end of the file, add:
log_filters="3:qemu 1:libvirt"
log_outputs="2:file:/var/log/libvirt/libvirtd.log"
- Press Ctrl + O then press Enter to save the changes and press Ctrl + X to exit the file.
- log_filters: Sets the minimum log level per component: only QEMU messages at WARNING (3) and above, and libvirt messages at DEBUG (1) and above, are recorded.
- log_outputs: Sends all messages at INFO (2) and above to /var/log/libvirt/libvirtd.log, giving you both QEMU warnings/errors and libvirt informational logs.
2.2.2: Editing qemu.conf
- Run the following command to open qemu.conf for editing:
sudo nano /etc/libvirt/qemu.conf
- Press Ctrl + W and search for user =.
- Locate the lines:
#user = "root"
#group = "root"
- Remove the #, then replace "root" with your actual username and group name.
- For example, my username is retro1o1 and my primary group is wheel, but make sure you enter your own username and group name correctly.
2.3: Adding Your User to the Libvirt Group
To allow proper libvirt file access, add your user to both kvm and libvirt groups:
- Run the following command to add your user to both groups:
sudo usermod -a -G kvm,libvirt $(whoami)
- Now, enable and start the libvirtd service.
- Run the following commands consecutively:
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
- We can also verify that our user belongs to these groups by running:
sudo groups $(whoami)
- In the output, kvm and libvirt should be listed alongside your other groups.
- Finally, restart libvirt to load changes:
sudo systemctl restart libvirtd
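You can also script the group check so it's unambiguous. A small sketch (the in_groups helper is my own invention); keep in mind that new group membership only takes effect after you log out and back in:

```shell
# Succeeds if every named group appears in the `groups` output on stdin.
in_groups() {
  local line
  read -r line
  for g in "$@"; do
    case " $line " in
      *" $g "*) ;;     # group found, keep checking the rest
      *) return 1 ;;   # group missing
    esac
  done
}

groups "$(whoami)" | in_groups kvm libvirt \
  && echo "kvm and libvirt membership OK" \
  || echo "not yet in kvm/libvirt, log out and back in first"
```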
2.4: Enabling Internet for VMs via Default Network
To ensure VM internet access, activate the default virtual network in one of two ways:
2.4.1: Automatic Start
- Run the following command:
sudo virsh net-autostart default
- This tells libvirt to mark the "default" virtual network for autostart, so the libvirt daemon brings it up every time the host boots.
2.4.2. Manual Start
- To start manually:
sudo virsh net-start default
Note: If the network is not set to autostart, you must run this command before each VM launch.
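If you want to check the network state from a script, you can parse virsh net-list output. A sketch, assuming the helper name net_is_active (my own):

```shell
# Succeeds if `virsh net-list` output on stdin shows the named network active.
net_is_active() {
  awk -v n="$1" '$1 == n && $2 == "active" { found = 1 } END { exit !found }'
}

# Query libvirt only if virsh is actually installed:
if command -v virsh >/dev/null 2>&1; then
  sudo virsh net-list --all | net_is_active default \
    && echo "default network is up" \
    || echo "default network is down, start or autostart it"
fi
```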
Step 3: Creating the VM & Installing Windows 11
Now that we've configured libvirt permissions, it's time to install an operating system on your VM. We'll use Windows 11 in this guide, but you can follow the same steps for Windows 10.
3.1: Virtual Machine Set-up
- Launch Virt-Manager: either run virt-manager in a terminal or select it from your desktop's application menu.
- Click the New VM 🖥️ icon in the toolbar.
- Choose Local install media (ISO image or CDROM) and click Forward.
- Browse to your Windows 11 (or 10) .iso file. If Virt-Manager doesn't auto-detect the OS in the dropdown at the bottom, select Windows 11 manually.
- Allocate RAM and CPU cores based on your system resources and workload.
- Create or choose a virtual disk for storage; use the default path or create a custom virtual storage drive.
- On the final screen, name your VM win10.
- Check Customize configuration before install, then click Finish.
Note: Name the VM win10, the passthrough script looks for this exact name. Changing it may cause boot-time black screens or shutdown issues, forcing you to power off the host manually.
3.2: Configuring Virtual Hardware in Virt-Manager
In the VM settings window:
3.2.1: Chipset & Firmware
- Change Chipset to Q35.
- Change Firmware to UEFI.
3.2.2: CPU Configuration
- Enable Copy host CPU configuration (host-passthrough).
- Click Topology, then enable Manually set CPU topology.
- Adjust Sockets, Cores, and Threads to match your physical CPU's specifications.
I have a 4th-gen Intel i5, a 4-core CPU with no Hyper-Threading, so my CPU configuration looks like the image below. Please set yours according to your CPU's specs.
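If you're not sure of your topology, you can read it straight off lscpu. A minimal sketch (the helper name is mine) that prints sockets, cores per socket, and threads per core:

```shell
# Print "sockets cores-per-socket threads-per-core" from `lscpu` output on stdin.
topology_from_lscpu() {
  awk -F: '
    /^Socket\(s\)/          { s = $2 }
    /^Core\(s\) per socket/ { c = $2 }
    /^Thread\(s\) per core/ { t = $2 }
    END { gsub(/ /, "", s); gsub(/ /, "", c); gsub(/ /, "", t); print s, c, t }'
}

command -v lscpu >/dev/null 2>&1 && lscpu | topology_from_lscpu
```

Plug those three numbers into the Sockets, Cores, and Threads fields in Virt-Manager.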
3.3: Install Windows
Launch the VM. If everything is configured correctly, the QEMU boot screen will load your Windows installer.
Step 4. (Optional) Setting Up VFIO on Arch Linux
Note: This optional step is for Arch Linux users. If the passthrough script fails, follow these instructions, adjusting file paths and commands for other distros as needed.
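If you do end up configuring vfio-pci by hand (for example via an ids= option in /etc/modprobe.d), you'll need the vendor:device IDs of your GPU and its audio function. A sketch of one way to pull them out of lspci -nn output; the gpu_vfio_ids helper name is my own, and note it lists every VGA/audio function on the system, so pick out your GPU's pair:

```shell
# From `lspci -nn` output on stdin, print the [vendor:device] IDs of VGA and
# audio functions as the comma-separated list a vfio-pci ids= option expects.
gpu_vfio_ids() {
  grep -i -e 'vga' -e 'audio' \
    | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' \
    | tr -d '[]' \
    | paste -sd, -
}

command -v lspci >/dev/null 2>&1 && lspci -nn | gpu_vfio_ids
```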
Step 5. Installing the Single-GPU Passthrough Script
Once Windows finishes installing, shut down the VM and open a terminal.
- In the terminal run the following command:
git clone https://gitlab.com/risingprismtv/single-gpu-passthrough.git
This clones a repo containing scripts that install three hook files to unbind the GPU from the host and bind it to the VM, and back again.
- Change into the repo folder:
cd single-gpu-passthrough
- Now, inside the single-gpu-passthrough folder, run these two commands consecutively to make the installer executable and run it:
sudo chmod +x install_hooks.sh
sudo ./install_hooks.sh
The install_hooks.sh file
The script checks for and sets up these hook files:
- Verifies /etc/libvirt/hooks/qemu exists.
- Verifies /usr/local/bin/vfio-startup and /usr/local/bin/vfio-teardown exist.
Hook script actions:
- On VM start:
- Stops host display manager so X/Wayland frees the GPU.
- Unbinds GPU (and HDMI audio) from host drivers.
- Binds GPU to vfio-pci.
- Starts libvirt-nosleep@.service to prevent host suspend.
- On VM shutdown:
- Stops the sleep-inhibit service.
- Unbinds GPU from vfio-pci.
- Rebinds GPU to host PCI driver.
- Restarts the display manager.
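After running the installer, you can confirm the hook files are in place and executable before booting the VM. A small sketch (check_hooks is my own helper; the paths are the ones listed above):

```shell
# Report whether each given hook file exists and is executable.
check_hooks() {
  local missing=0
  for f in "$@"; do
    if [ -x "$f" ]; then
      echo "ok:      $f"
    else
      echo "MISSING: $f"
      missing=1
    fi
  done
  return "$missing"
}

check_hooks /etc/libvirt/hooks/qemu \
            /usr/local/bin/vfio-startup \
            /usr/local/bin/vfio-teardown \
  || echo "one or more hooks missing, re-run install_hooks.sh"
```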
Note: When the VM shuts down, the display manager will restart to rebind the GPU, which will log out the current session and require you to log back into your desktop. So save any changes or files before booting your VM for single-gpu-passthrough.
Step 6. Attaching the GPU to Your VM
Before you add your GPU, remove the default virtual display devices that QEMU attaches by default.
6.1. Removing the Default Virtual Display
QEMU adds Spice and QXL as virtual display devices. Remove both since you'll use the passthrough GPU for video.
Steps to remove Spice & QXL:
- In Virt-Manager, double-click your Windows VM to open its console window.
- Click the info (ℹ️) button to view VM settings.
- Right-click Display Spice and click on Remove Hardware.
- Repeat the same for Video QXL.
If you can't delete these components, or the Remove button is greyed out, do the following:
6.2. Fetching the GPU ROM
Note: Older AMD models and some NVIDIA cards (even newer ones) may require a dumped ROM (vBIOS) for passthrough. You can download your card's vBIOS from TechPowerUp if you know its exact model and vendor.
Let's proceed without a ROM first; only if you encounter a persistent black screen or reset failure should you dump and attach your GPU's vBIOS. See the script's wiki for detailed instructions on Dumping GPU ROM and Attaching GPU with ROM.
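For reference, dumping a vBIOS through sysfs looks roughly like this. This is a sketch only: the PCI address 0000:01:00.0 and the output filename are placeholders you must adjust, it must run as root, and it works best while the card is not driving a display (the script's wiki has the authoritative steps):

```shell
# Dump a GPU vBIOS via the PCI device's sysfs rom file.
#   $1 = PCI address (e.g. 0000:01:00.0 - a placeholder, use your card's)
#   $2 = output file
#   $3 = sysfs base directory (defaults to the real PCI tree)
dump_gpu_rom() {
  local dev="${3:-/sys/bus/pci/devices}/$1"
  echo 1 > "$dev/rom"     # enable ROM reads
  cat "$dev/rom" > "$2"   # copy the ROM out
  echo 0 > "$dev/rom"     # disable ROM reads again
}

# Example invocation (as root):
# dump_gpu_rom 0000:01:00.0 /usr/share/vgabios/my-gpu.rom
```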
6.3. Attaching the GPU
Finally, attach your GPU to the VM.
- Open Virt-manager and double-click on your Windows VM.
- Click the info (ℹ️) button to open the VM's virtual hardware details.
- Click Add Hardware at the bottom left of the box.
- Choose PCI Host Device.
- Select your GPU device and then repeat to add the GPU's audio.
- For me, this is:
- Add your GPU as a PCI Host Device.
- Now add your GPU's Audio Controller: just as with the GPU, click Add Hardware, select PCI Host Device, and choose your GPU's Audio Controller.
6.3.1. Attaching the ROM to Your GPU
Note: Refer to Step 6.2 for dumping your GPU's ROM. If you encounter black-screen or "monitor going to sleep" issues, attach the vBIOS:
- Confirm your ROM file resides in /usr/share/vgabios/.
- In Hardware Details, select the GPU entry. Click XML.
- Insert a <rom> element just above the <address type='pci'> tag, for example:
<rom file='/usr/share/vgabios/rx-590.rom'/>
After adding your GPU and its Audio Controller, your virtual machine's hardware settings should look similar to this:
Important Note: Your PCI devices are divided into groups, called IOMMU groups. Your GPU is in one or more of these groups, and you need to pass the entire group that contains your GPU to the VM. Please follow this link, IOMMU Groups, for more details.
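To see how your devices are grouped, you can walk the sysfs tree. A minimal sketch (the function name is mine); every device it prints in the same group as your GPU must be passed through to the VM or rebound:

```shell
# Print "group N: device" for every device under the given IOMMU sysfs root.
list_iommu_groups() {
  local base="${1:-/sys/kernel/iommu_groups}"
  local d group
  for d in "$base"/*/devices/*; do
    [ -e "$d" ] || continue
    group="${d%/devices/*}"   # strip the /devices/... suffix
    group="${group##*/}"      # keep only the group number
    echo "group $group: $(basename "$d")"
  done
}

list_iommu_groups
```

If this prints nothing, the IOMMU is not enabled; revisit Step 1.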
Step 7. Additional Tweaks & Optimizations.
7.1.(a): Enabling Hyper-Threading on AMD CPUs
- In Virt-Manager, open your Windows VM and click ℹ️ to view Virtual Hardware Details.
- Click XML, then locate <cpu mode="host-passthrough" check="none">.
- Replace it with your CPU's topology, for example:
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='12' threads='2'/>
<feature policy='require' name='topoext'/>
</cpu>
7.1.(b): Disabling SMEP on Intel CPUs
- Go to the XML section of your Virtual Hardware Details box and look for the <cpu> tag.
- Edit the CPU section code like this:
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='12' threads='2'/>
<feature policy='disable' name='smep'/>
</cpu>
7.2. Adding Physical Mouse & Keyboard
When you run your virtual machine with passthrough enabled, the virtual mouse and keyboard attached to the VM won't work, so add your physical mouse and keyboard to the virtual machine as USB Host Devices as well.
- Open Virtual Hardware Details and click on Add Hardware.
- Go to USB Host Device and select your Mouse and click Finish.
- Repeat the same steps for adding your Keyboard.
7.3. Add VirtIO (virtio-win) Drivers
VirtIO is a virtualization standard for network and disk device drivers that uses a lightweight, shared-memory interface to pass data between guest and host with minimal CPU overhead.
Without VirtIO drivers, Windows VMs fall back to slow, fully emulated hardware that caps disk and network performance and increases CPU overhead. Installing the virtio-win driver ISO during or after VM setup unlocks paravirtual block, network, balloon, and other device drivers, drastically improving I/O throughput, reducing latency, and enabling advanced features on Windows guests.
- Download the drivers ISO from Fedorapeople website (Make sure to get the latest version, currently 0.1.266-1).
- Next, open virt-manager, double-click on your Virtual Machine and go to Virtual Hardware Details.
- Click on Add Hardware and select Storage.
- Next, in the storage details section, select "Select or create custom storage", then press the Manage button.
- Next, click on Browse Local and select "virtio-win.iso".
- Next, in the device type option select CDROM device and click Finish.
Now, when you boot Windows in the VM, you will see this ISO as a CDROM device. These drivers also fix missing audio and internet connection issues inside the Windows guest.
Step 8. Launching Your VM with GPU Passthrough
Everything's configured; let's start your Windows VM with GPU passthrough. When it boots, your host display will go dark as the GPU unbinds from the host and binds to the guest VM. Since I'll be inside Windows, I can't take host-side screenshots or screen recordings, so I'll be using my phone's camera.
- Open a terminal.
- Run:
sudo virt-manager
- Double click on your Virtual machine.
- Click on ▶️.
Benchmark inside VM
Afterwards
You're all set! You can even pass through physical disks; just unmount them from the host first to avoid data loss or corruption. Feel free to allocate extra CPU cores and RAM, or use a raw disk format for peak performance. If you encounter any issues with audio, network, display, or storage, mount the virtio-win driver ISO in your Windows VM and install the matching VirtIO drivers for optimal efficiency.
🎉 Congratulations, your single-GPU passthrough VM is now up and running!
If you hit any snags, drop a comment below, I'm here to help.
If this guide helped you, subscribe for more step-by-step tutorials.
1O1 out, I'll see you in the next one!