How to Pass Through a Single GPU to a Virtual Machine: QEMU/KVM
Virtualization no longer means compromising on performance. By bypassing the software layer and handing direct hardware control to your virtual machine, single-GPU passthrough unlocks near-native gaming and rendering speeds on Linux, all without the need for secondary hardware or complex dual-booting.
Hi, my name is Abdullah Musa and welcome to my blog, MusaBase! In this comprehensive guide, I'll show you how to pass through a single GPU to a QEMU/KVM VM on Linux, no SSH connection, second GPU, or remote display streaming required. You'll learn:
- Preparing your Bootloader for IOMMU Configuration
- Creating a QEMU VM with Configurations to Passthrough a Single GPU
- VFIO Configuration
- Single-GPU Passthrough Script Integration
- Attaching the GPU to the VM
I'm using Arch Linux, btw! But you can follow this tutorial on any Linux distribution, provided your CPU and motherboard support hardware-assisted virtualization.
Why Does Single-GPU Passthrough Often Fail?
In practice, single-GPU passthrough often fails because the lone GPU can't be reliably isolated or reset for the guest VM. By default, the motherboard/chipset IOMMU groups your GPU with other peripherals, so VFIO can't detach it unless you use an ACS override patch to break those groups.
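You can inspect your own IOMMU groups from the host to see exactly what shares a group with your GPU. Here is a small sketch that walks the standard sysfs layout; the `list_iommu_groups` helper name and its optional base-path argument (there only to make the function testable offline) are my own additions:

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
# Accepts an alternative base directory for testing; defaults
# to the real sysfs tree.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for group in "$base"/*/; do
        [ -d "$group" ] || continue
        printf 'IOMMU group %s:\n' "$(basename "$group")"
        for dev in "$group"devices/*; do
            [ -e "$dev" ] || continue
            addr=$(basename "$dev")
            # lspci -nns prints the vendor:device IDs you may need later;
            # fall back to the bare PCI address if lspci is unavailable
            printf '  %s\n' "$(lspci -nns "$addr" 2>/dev/null || echo "$addr")"
        done
    done
}

list_iommu_groups
```

If your GPU's group contains only the GPU itself and its audio function, you are in good shape; if unrelated devices appear alongside it, you may need the ACS override patch mentioned above.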
The Core Issue
Without clean isolation and reset capabilities, the hypervisor either rejects the GPU device or the VM boots to a black screen or hangs during device reset.
The Fix
Fortunately, there's a GitLab repository by RisingPrismTV called single-gpu-passthrough, which includes detailed instructions and handy tweaks. In the next sections, I'll guide you through installing this script and applying the additional configurations needed for smooth single-GPU passthrough.
Prerequisites
Before we begin the GPU Passthrough process, ensure your hardware and software meet the following requirements:
- System Architecture: You must be running a 64-bit Linux distribution.
- Virtualization Stack: Ensure QEMU/KVM is properly configured. If you haven't set this up, follow my QEMU installation and configuration guide.
- BIOS/UEFI Settings: Virtualization must be enabled in your firmware:
- Intel CPUs: Enable VT-d and VT-x.
- AMD CPUs: Enable IOMMU and AMD-V.
- Host OS: For a high-performance foundation, I recommend an optimized host.
- Hardware Compatibility: Confirm that both your GPU and motherboard firmware support UEFI.
- Installation Mode: Your Linux distribution must be installed in UEFI mode to ensure compatibility with modern VFIO scripts.
System Requirements:
- Processor: Intel or AMD x86-64 compatible with IOMMU support.
- RAM: 8GB minimum, though 16GB or more is recommended for a smooth gaming experience.
- Storage: Space requirements vary depending on the Guest OS and the applications you plan to install.
- Operating System: Any modern Linux distribution (Arch, Fedora, or Debian-based).
- Virtualization Tool: QEMU/KVM with Libvirt.
Step 1: Editing the Bootloader
A bootloader is a program that runs at system startup and loads the Linux kernel, allowing the operating system to boot. This example shows how to configure the GRUB bootloader, but you can adapt these steps for systemd-boot as well.
1.1: GRUB Bootloader
GRUB is a powerful and flexible bootloader that supports multiple operating systems and advanced boot options, making it ideal for dual-boot setups.
1.1.1: Edit GRUB Configuration
To configure GRUB, follow these steps:
- Open a terminal and run:
sudo nano /etc/default/grub
- In the GRUB file, look for the line starting with "GRUB_CMDLINE_LINUX_DEFAULT".
- Add the following parameter inside the quotes (use intel_iommu=on for Intel CPUs or amd_iommu=on for AMD CPUs):
intel_iommu=on
- Press Ctrl + O, Enter to save, then Ctrl + X to exit the editor.
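After the edit, the line should look similar to the example below. The loglevel=3 and quiet flags are placeholders for whatever your distribution already puts there; keep your existing flags in place:

```
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet intel_iommu=on"
```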
1.1.2: Update GRUB Configuration
Arch / Fedora / openSUSE distros:
- On Arch, run:
sudo grub-mkconfig -o /boot/grub/grub.cfg
- On Fedora and openSUSE (which ship GRUB as grub2):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- On Fedora 33 or earlier:
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
Ubuntu, Debian, Linux Mint, Manjaro, and derivatives:
sudo update-grub
Reboot the system to apply the changes.
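Once you're back up, it's worth verifying that the kernel actually enabled the IOMMU before going any further. A small sketch; the `iommu_active` helper and its log-file argument (there only so the check can be tested offline) are my own, and the exact dmesg wording can vary between kernel versions:

```shell
#!/bin/sh
# Scan kernel messages for IOMMU initialization lines:
# "DMAR: IOMMU enabled" on Intel, "AMD-Vi" lines on AMD.
iommu_active() {
    if [ -n "$1" ]; then
        grep -qiE 'DMAR: IOMMU enabled|AMD-Vi' "$1"
    else
        dmesg 2>/dev/null | grep -qiE 'DMAR: IOMMU enabled|AMD-Vi'
    fi
}

if iommu_active; then
    echo "IOMMU active"
else
    echo "IOMMU not detected (check firmware settings and kernel parameters)"
fi
```

Run it with sudo if plain dmesg is restricted on your system.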
1.2: Systemd-boot
systemd-boot is a lightweight, UEFI-only boot manager that offers faster boot times and simpler configuration but does not support legacy BIOS systems.
1.2.1: Edit Systemd Bootloader
To edit systemd-boot, look for the options root= line in your loader entry.
POP!_OS:
- Open a terminal and run:
sudo nano /boot/efi/loader/entries/Pop_OS-current.conf
- Look for the line that starts with options root=.
- Add intel_iommu=on (or amd_iommu=on for AMD CPUs) after the existing options:
intel_iommu=on
- Next, press Ctrl + O to save, Enter to confirm, then Ctrl + X to exit the file.
1.2.2: Update Systemd Bootloader
- systemd-boot reads its loader entries at boot time, so no configuration regeneration is needed. Optionally, refresh the boot manager binary itself with:
sudo bootctl update
Note: Reboot your system to apply the changes.
Step 2: Installation and Configuration of QEMU
In this step, we will install and configure QEMU on Linux. If you already have QEMU installed, you can skip the installation section and move directly to the configuration part.
2.1: QEMU Installation
If you haven't installed QEMU yet, please follow my QEMU/KVM installation and setup guide. Once the virtualization stack is ready, we'll move on to configuring its services and libraries.
2.2: QEMU Configuration
Note: The following steps are crucial; please follow them carefully.
2.2.1: Editing libvirtd.conf for Logs and Troubleshooting
libvirtd.conf is the main configuration file for libvirtd, the background service that manages virtual machines on Linux. It controls things like user permissions, network access, and how virtualization tools such as QEMU and KVM are allowed to run and communicate with the system.
- Open a terminal and run:
sudo nano /etc/libvirt/libvirtd.conf
- At the end of the file, add:
log_filters="3:qemu 1:libvirt"
log_outputs="2:file:/var/log/libvirt/libvirtd.log"
- Press Ctrl + O, then press Enter to save the changes, and press Ctrl + X to exit the file.
2.2.2: Editing qemu.conf to Avoid Permission Issues
qemu.conf is the main configuration file for QEMU itself. It controls how virtual machines are executed, including security settings, user and group permissions, device access, and integration with libvirt. In simple terms, it defines how QEMU is allowed to run and interact with your system.
- Run the following command to open qemu.conf for editing:
sudo nano /etc/libvirt/qemu.conf
- Press Ctrl + W and search for user =.
- Locate the lines:
#user = "root"
#group = "root"
- Remove the #, then replace "root" with your actual username and group name.
- For me, my username is retro1o1 and my main user group is wheel, but make sure you enter your username and group name correctly.
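After the edit, the two lines should read like this (these are the values from my machine; substitute your own username and primary group):

```
user = "retro1o1"
group = "wheel"
```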
2.3: Adding Your User to the Libvirt Group
To allow proper libvirt file access, add your user to both the kvm and libvirt groups:
- Run the following command to add your user to the kvm and libvirt groups:
sudo usermod -a -G kvm,libvirt $(whoami)
Note: Group membership changes take effect at your next login, so log out and back in (or reboot) before relying on them.
- Now, enable and start the libvirtd service.
- Run the following commands consecutively:
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
- You can verify that libvirt has been added to your user's groups by running this command:
sudo groups $(whoami)
- In the output, you should see kvm and libvirt listed.
- Finally, restart libvirt to load the changes:
sudo systemctl restart libvirtd
2.4: Enabling Internet for VMs via Default Network
To ensure VM internet access, activate the default virtual network in one of two ways:
2.4.1: Automatic Start
- Run the following command:
sudo virsh net-autostart default
2.4.2: Manual Network Start
- To start the network manually:
sudo virsh net-start default
Note: You must run this command before each VM launch if the network is not auto-started.
Step 3: Creating the VM and Installing Windows 11
Now that we've configured libvirt permissions, it's time to install an operating system on your VM. We'll use Windows 11 in this guide, but you can follow the same steps for Windows 10.
3.1: Virtual Machine Setup
- Launch Virt-Manager by either running virt-manager in a terminal or selecting it from your desktop's application menu.
- Click the New VM 🖥️ icon in the toolbar.
- Choose Local install media (ISO image or CDROM) and click Forward.
- Browse to your Windows 11 (or 10) .iso file. If virt-manager doesn't auto-detect the OS in the bottom dropdown, select Windows 11 manually.
- Allocate RAM and CPU cores based on your system resources and workload.
- Create or choose a virtual disk for storage, using either the default path or a custom virtual storage drive.
- On the final screen, name your VM (e.g., win11).
- Check Customize configuration before install, then click Finish.
3.2: Configuring Virtual Hardware in Virt-Manager
In the VM settings window:
3.2.1: Chipset and Firmware
- Change Chipset to Q35.
- Change Firmware to UEFI.
3.2.2: CPU Configuration
- Enable Copy host CPU configuration (host-passthrough).
- Click Topology, then enable Manually set CPU topology.
- Adjust Sockets, Cores, and Threads to match your physical CPU's specifications.
I have an Intel i5 4th gen, 4-core CPU with no Hyper-Threading, so my CPU configuration looks like the image below. Please set yours according to your CPU specifications.
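If you're unsure what to enter, the host's physical topology is reported by lscpu. A small filter sketch; the `topology_summary` helper name and its file argument (there only so the filter can be tested against canned output) are my own:

```shell
#!/bin/sh
# Pull out the three lscpu lines that map directly onto
# virt-manager's Sockets / Cores / Threads fields.
topology_summary() {
    grep -E '^(Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core)' "${1:-/dev/stdin}"
}

lscpu 2>/dev/null | topology_summary || true
```

Thread(s) per core of 1 means no Hyper-Threading/SMT, as on my i5.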
3.3: Install Windows 11 on VM
Launch the VM. If configured correctly, the QEMU boot screen will load your Windows installer.
Step 4: (Optional) Setting Up VFIO on Arch Linux
Note: This optional step is for Arch Linux users. If the passthrough script fails, follow these instructions, adjusting file paths and commands for other distros as needed.
4.1: Load VFIO Modules
Virtual Function I/O (VFIO) is a Linux kernel framework enabling QEMU to directly access PCI devices (such as GPUs) for secure passthrough. Before running the setup script, verify that VFIO modules are loaded.
- In a terminal, check for loaded modules:
lsmod | grep vfio
- Make sure these three modules are loaded:
- vfio
- vfio_iommu_type1
- vfio_pci
If any are missing, edit your initramfs config (/etc/mkinitcpio.conf) to include them.
- Run:
sudo nano /etc/mkinitcpio.conf
- In the MODULES=(...) line, add:
vfio vfio_iommu_type1 vfio_pci
- The edited line should look similar to MODULES=(vfio vfio_iommu_type1 vfio_pci), keeping any modules already listed there.
- Next, press Ctrl + O to save, press Enter to confirm, then press Ctrl + X to exit the file.
- Now, regenerate initramfs with:
sudo mkinitcpio -p linux
Reboot your system after regenerating the initramfs.
Step 5: Installing the Single-GPU Passthrough Script
Once Windows finishes installing, shut down the VM and open a terminal.
- In the terminal, run the following command:
git clone https://gitlab.com/risingprismtv/single-gpu-passthrough.git
- Change into the repository folder:
cd single-gpu-passthrough
- Now, in the single-gpu-passthrough folder, run these two commands consecutively to make the installer executable and run it:
sudo chmod +x install_hooks.sh
sudo ./install_hooks.sh
What Does the install_hooks.sh Script Do?
The script checks for and sets up these hook files:
- Verifies that /etc/libvirt/hooks/qemu exists.
- Verifies that /usr/local/bin/vfio-startup and /usr/local/bin/vfio-teardown exist.
Hook Script Actions:
- On VM Start:
- Stops the host display manager so X/Wayland frees the GPU.
- Unbinds the GPU (and HDMI audio) from host drivers.
- Binds the GPU to the vfio-pci driver so it can be handed to the VM.
- Starts a libvirt-nosleep systemd service to prevent the host from suspending while the VM runs.
- On VM Shutdown:
- Stops the sleep-inhibit service.
- Unbinds the GPU from vfio-pci.
- Rebinds the GPU to the host PCI driver.
- Restarts the display manager.
Note: When the VM shuts down, the display manager will restart to rebind the GPU, which will log you out of the current session and require you to log back into your desktop. Therefore, save any changes or files before booting your VM for single-GPU passthrough.
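To make the flow concrete, here is a heavily simplified sketch of how such a libvirt hook dispatches on the VM lifecycle phase. The real /etc/libvirt/hooks/qemu installed by the script does considerably more (logging, error handling, additional phases); the `hook_for` helper and the bare prepare/release mapping are my own simplification of libvirt's hook calling convention:

```shell
#!/bin/sh
# Simplified sketch of a libvirt qemu hook. libvirt invokes it as:
#   /etc/libvirt/hooks/qemu <vm-name> <operation> <sub-operation> ...
# This version only maps an operation to the script that should run.
hook_for() {
    case "$1" in
        prepare) echo /usr/local/bin/vfio-startup ;;   # before the VM starts
        release) echo /usr/local/bin/vfio-teardown ;;  # after the VM is released
        *)       echo none ;;                          # all other phases: no-op
    esac
}

# A real hook would then execute the result, e.g.:
#   script=$(hook_for "$2"); [ "$script" != none ] && exec "$script"
```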
Step 6: Attaching the GPU to Your VM
Before you add your GPU, remove the virtual display devices that QEMU attaches by default.
6.1: Removing the Default Virtual Display
QEMU adds Spice and QXL as virtual display devices. Remove both since you'll use the passthrough GPU for video output.
6.1.1: Remove Spice and QXL
- In Virt-Manager, double-click your Windows VM. (This opens the VM console window.)
- Click the info (ℹ️) button to view VM settings.
- Right-click Display Spice and select Remove Hardware.
- Repeat the same for Video QXL.
If you can't delete the components or the remove button is greyed out, do the following:
- Go back to Virt-Manager and click on Edit in the top menu bar.
- Click on Preferences.
- Check the box for Enable XML Editing.
- Now, go back to your Windows Virtual Machine settings, click on the Overview section, then click on XML.
- In the XML file, look for the graphics tag for Display Spice:
<graphics type="spice" autoport="yes">
<listen type="address"/>
<image compression="off"/>
<gl enable="no"/>
</graphics>
- For Video QXL, look for:
<video>
<model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1" primary="yes"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
- Also remove the audio line:
<audio id="1" type="none"/>
- Remove these blocks of code from the XML file.
- After removing these components through XML, click Apply to save the changes.
6.2: Fetching the GPU ROM
Note: Some older AMD or NVIDIA cards (and even some newer models) may require a dumped ROM vBIOS for successful passthrough.
Let's proceed without a ROM first. Only if you encounter a persistent black screen or reset failure should you fetch and attach your GPU's vBIOS. You can find pre-dumped files for many cards on TechPowerUp if you know its exact model and vendor.
For detailed instructions on dumping your own GPU ROM or attaching the GPU with a ROM, refer to the script's official wiki.
6.3: Attaching the Physical GPU to VM
Finally, attach your physical GPU to the VM.
- Open Virt-manager and double-click your Windows VM.
- Click the info (ℹ️) button to open the VM's virtual hardware details.
- Click Add Hardware at the bottom left of the window.
- Choose PCI Host Device.
- Select your GPU device from the list, then click Finish.
- Repeat the process to add the GPU's audio controller (often listed as "HD Audio Controller" for the same device).
- For my system, the devices are named as follows (yours will be different):
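If you're unsure which entries to pick, remember that the GPU and its audio function share a PCI slot and differ only in the function digit (e.g., 01:00.0 and 01:00.1). A small filter sketch; the `gpu_and_audio` helper name and its file argument (there only for offline testing) are my own:

```shell
#!/bin/sh
# Show VGA controllers and audio devices from lspci output, which is
# usually enough to spot the GPU/audio pair to pass through together.
gpu_and_audio() {
    grep -iE 'vga compatible|3d controller|audio' "${1:-/dev/stdin}"
}

lspci -nn 2>/dev/null | gpu_and_audio || true
```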
6.4: Attaching the ROM to Your GPU
Note: Refer to Step 6.2 for instructions on dumping your GPU's ROM. If you encounter a black screen or "monitor going to sleep" issues, attach the vBIOS as follows:
- Ensure your ROM file is in the correct directory (e.g., /usr/share/vgabios/).
- In the Hardware Details window, select the GPU entry, then click the XML tab.
- Inside the GPU's <hostdev> block, insert a <rom> element just above the <address type='pci' .../> line, for example:
<rom file='/usr/share/vgabios/rx-590.rom'/>
After adding your GPU and its Audio Controller, your virtual machine's hardware list should look similar to this:
Important Note: Your PCI devices are divided into groups called IOMMU Groups. Your GPU belongs to one or more of these groups, and you must pass the entirety of the group that contains your GPU to the VM for passthrough to work correctly. For more details, please refer to the guide on IOMMU Groups.
Step 7: Additional Tweaks & Optimizations
7.1(a): Enabling Simultaneous Multithreading (SMT) on AMD CPUs
- In Virt-Manager, open your Windows VM and click the ℹ️ button to view its Virtual Hardware Details.
- Click the XML tab and locate the line starting with <cpu mode="host-passthrough" check="none">.
- Replace the entire <cpu> block with the following example (adjust the topology values to match your CPU), which defines the topology and enables the required topoext feature:
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='12' threads='2'/>
<feature policy='require' name='topoext'/>
</cpu>
7.1(b): Disabling SMEP on Intel CPUs
- Go to the XML tab in your VM's Virtual Hardware Details window and find the <cpu> tag.
- Edit the <cpu> section to match the following example, which explicitly disables the SMEP feature (adjust the topology values to match your CPU):
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='12' threads='2'/>
<feature policy='disable' name='smep'/>
</cpu>
7.2: Adding a Physical Mouse and Keyboard
When you run your VM with GPU passthrough enabled, the emulated mouse and keyboard attached by default will not work on the passed-through GPU's display. To control the VM, you must pass through your physical mouse and keyboard as USB devices.
- Open the VM's Virtual Hardware Details and click Add Hardware.
- In the new window, select USB Host Device, choose your mouse from the list, and click Finish.
- Repeat the process to add your physical keyboard.
7.3: Add VirtIO (virtio-win) Drivers
VirtIO is a virtualization standard for network and disk device drivers that uses a lightweight, shared-memory interface to pass data between the guest and host with minimal CPU overhead.
Without VirtIO drivers, Windows VMs fall back to slow, fully-emulated hardware that caps disk and network performance while increasing CPU overhead. Installing the virtio-win driver ISO during or after VM setup unlocks paravirtual block, network, balloon, and other device drivers, drastically improving I/O throughput, reducing latency, and enabling advanced features in the Windows guest.
- Download the latest virtio-win driver ISO from the Fedora People archive. (As of this writing, the latest stable version is 0.1.266-1).
- Open Virt-Manager, double-click your Windows VM, and go to Virtual Hardware Details.
- Click Add Hardware and select Storage.
- In the storage details section, select the option for "Select or create custom storage", then click the Manage... button.
- Click Browse Local and select your downloaded virtio-win.iso file.
- In the Device type dropdown, select CDROM device and click Finish.
Now, when you boot your Windows VM, you will see this ISO mounted as a CDROM device. Installing these drivers from the ISO is also the solution for common issues like no sound inside the Windows VM and internet connection problems.
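For reference, once the drivers are installed inside Windows, you can switch the system disk's bus to virtio in the domain XML for the best I/O performance. A sketch of what the disk block then looks like; the file path, image name, and qcow2 format here are assumptions from a typical libvirt setup, not values from my machine:

```xml
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" discard="unmap"/>
  <source file="/var/lib/libvirt/images/win11.qcow2"/>
  <target dev="vda" bus="virtio"/>
</disk>
```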
Step 8: Launching Your VM with GPU Passthrough
Everything is now configured. Let's start your Windows VM with GPU passthrough. When it boots, your host display will go dark as the GPU unbinds from the host and binds to the guest VM. Since I'll be inside Windows, I cannot show you host-side screenshots or screen recordings, so I will use my phone's camera to record this part of the process.
- Open a terminal.
- Run the following command to launch Virt-Manager with the necessary permissions:
sudo virt-manager
- In the Virt-Manager window, double-click your Windows VM.
- Click the ▶️ (Play) button to start the virtual machine.
Benchmarks Inside the VM
Frequently Asked Questions: Single‑GPU Passthrough with QEMU/KVM
Why does single‑GPU passthrough often fail?
Single‑GPU passthrough fails because the lone GPU cannot be reliably isolated or reset for the guest VM. By default, the motherboard’s IOMMU groups your GPU with other peripherals, preventing VFIO from detaching it unless an ACS override patch is used. Without clean isolation and proper reset capabilities, the hypervisor may reject the device or the VM boots to a black screen.
What are the prerequisites for single‑GPU passthrough?
- 64‑bit Linux distribution with QEMU/KVM and libvirt installed.
- Intel CPU with VT‑d and VT‑x enabled, or AMD CPU with IOMMU and AMD‑V enabled in BIOS/UEFI.
- Host OS installed in UEFI mode.
- GPU and motherboard that support UEFI.
- At least 8 GB RAM (16 GB recommended).
- Properly configured bootloader with IOMMU enabled (e.g., intel_iommu=on or amd_iommu=on).
How do I enable IOMMU in the GRUB bootloader?
Edit /etc/default/grub and add intel_iommu=on (for Intel) or amd_iommu=on (for AMD) to the GRUB_CMDLINE_LINUX_DEFAULT line. Save the file, then update GRUB with sudo grub-mkconfig -o /boot/grub/grub.cfg (Arch, Fedora) or sudo update-grub (Debian/Ubuntu). Reboot to apply.
How do I enable IOMMU with systemd‑boot?
Edit your boot entry file (e.g., /boot/efi/loader/entries/Pop_OS-current.conf). Add intel_iommu=on or amd_iommu=on to the options line. Save the file and run sudo bootctl update. Reboot to apply.
How do I configure libvirtd to enable logging and troubleshooting?
Edit /etc/libvirt/libvirtd.conf and add the following lines at the end:
log_filters="3:qemu 1:libvirt"
log_outputs="2:file:/var/log/libvirt/libvirtd.log"
This enables detailed logging for QEMU and libvirt, which helps diagnose passthrough issues.
How do I avoid permission issues with QEMU?
Edit /etc/libvirt/qemu.conf and uncomment the lines user = "root" and group = "root". Replace "root" with your actual username and group (e.g., user = "musa", group = "wheel"). This ensures QEMU runs with your user privileges and can access necessary devices.
How do I add my user to the libvirt and KVM groups?
Run:
sudo usermod -a -G kvm,libvirt $(whoami)
Then restart the libvirtd service:
sudo systemctl restart libvirtd
Verify with groups $(whoami); you should see kvm and libvirt listed.
How do I enable the default virtual network for VM internet access?
To start it automatically at boot:
sudo virsh net-autostart default
To start it immediately:
sudo virsh net-start default
What is VFIO and why is it needed?
VFIO (Virtual Function I/O) is a Linux kernel framework that allows safe, IOMMU‑protected direct access to PCI devices from userspace applications like QEMU. It is essential for GPU passthrough because it unbinds the GPU from its native host driver and rebinds it to the vfio‑pci driver, making it available to the guest VM.
How do I ensure VFIO modules are loaded?
Check with lsmod | grep vfio. You should see vfio, vfio_iommu_type1, and vfio_pci. If missing, add them to the MODULES array in /etc/mkinitcpio.conf (Arch) or equivalent initramfs configuration for your distribution, then regenerate initramfs and reboot.
What does the RisingprismTV single‑GPU passthrough script do?
The script installs libvirt hooks that automatically:
- Stop the host display manager before VM start, freeing the GPU.
- Unbind the GPU from host drivers and bind it to vfio‑pci.
- Inhibit system sleep while the VM runs.
- On VM shutdown, unbind the GPU from VFIO, rebind it to the host driver, and restart the display manager.
It is available at https://gitlab.com/risingprismtv/single-gpu-passthrough.
How do I install the single‑GPU passthrough script?
Clone the repository and run the installer:
git clone https://gitlab.com/risingprismtv/single-gpu-passthrough.git
cd single-gpu-passthrough
sudo chmod +x install_hooks.sh
sudo ./install_hooks.sh
Why must I remove the default virtual display devices (Spice, QXL) before attaching the GPU?
QEMU adds virtual display devices by default. If they remain, the VM may still try to use them, causing conflicts or black screens. Removing them ensures the VM uses the passed‑through physical GPU as its primary display.
How do I remove Spice and QXL devices in Virt‑Manager?
Open your VM’s hardware details, right‑click Display Spice and Video QXL, and select Remove Hardware. If the option is greyed out, enable XML editing in Edit → Preferences, then delete the corresponding XML sections from the Overview → XML tab.
How do I attach my physical GPU to the VM?
In Virt‑Manager, open your VM’s hardware details, click Add Hardware, choose PCI Host Device, select your GPU from the list, and click Finish. Repeat for the GPU’s HDMI audio controller. Ensure you pass the entire IOMMU group that contains your GPU.
What is a GPU ROM file and when do I need it?
A GPU ROM (vBIOS) file contains the firmware that initialises the card. Some older GPUs (and some newer ones) require a dumped ROM for successful passthrough; otherwise, you may encounter a black screen or reset failure. You can dump your own ROM and verify it with tools like rom-parser, or download a matching one from sites like TechPowerUp.
How do I attach a GPU ROM file in Virt‑Manager?
In the VM’s hardware details, select the GPU device, switch to the XML tab, and add a line like <rom file='/usr/share/vgabios/your-gpu.rom'/> just before the closing <address type='pci'/> tag. Adjust the path and filename accordingly.
How do I pass through my physical mouse and keyboard to the VM?
In Virt‑Manager, add hardware of type USB Host Device. Select your mouse and keyboard from the list and add them. This allows direct control of the VM without using emulated input devices.
Why should I install VirtIO drivers in Windows?
VirtIO drivers provide paravirtualised disk and network interfaces, dramatically improving I/O performance and reducing CPU overhead compared to emulated hardware. They also include drivers for ballooning, memory, and other devices. Without them, Windows VMs will be significantly slower.
How do I install VirtIO drivers in a Windows VM?
Download the virtio-win.iso from the Fedora People archive. In Virt‑Manager, add a new storage device, select the ISO as a CDROM, and boot the VM. Inside Windows, open the CDROM and run the appropriate installer for your version. After installation, you can switch disk and network interfaces to VirtIO for better performance.
What CPU topology tweaks are needed for AMD CPUs?
For AMD CPUs, you may need to enable the topoext feature and define the correct topology. In the VM’s XML, modify the <cpu> block to include:
<topology sockets='1' dies='1' cores='12' threads='2'/>
<feature policy='require' name='topoext'/>
Adjust cores/threads to match your CPU.
What CPU tweaks are needed for Intel CPUs?
For some Intel CPUs, you may need to disable SMEP (Supervisor Mode Execution Prevention). In the VM’s XML, add:
<feature policy='disable' name='smep'/>
within the <cpu> block. Adjust the topology to match your CPU.
Why must I run virt‑manager with sudo before launching the VM?
Running sudo virt-manager ensures that the libvirt hooks (installed by the script) have the necessary permissions to manage the GPU during VM startup and shutdown. Starting as a regular user may cause the hooks to fail, leaving the host with a black screen after the VM exits.
What are the most common mistakes when setting up single‑GPU passthrough?
- Not enabling IOMMU in BIOS or bootloader.
- Forgetting to add VFIO modules to initramfs.
- Not removing virtual display devices before attaching the GPU.
- Passing only part of the GPU’s IOMMU group (must pass the whole group).
- Failing to install the libvirt hooks or not running virt‑manager with sudo.
- Using an incorrect CPU topology that doesn’t match the physical CPU.
- Not attaching the GPU’s audio controller along with the GPU.
🎉 Congratulations, your Single-GPU Passthrough VM is now up and running!
You have achieved near-native performance for your virtual machine. To maintain peak efficiency, ensure you have the VirtIO drivers installed for audio, network, and storage. You can also pass through physical disks for even faster I/O, just remember to unmount them from the host first to avoid data corruption.
If this guide helped you, subscribe for more advanced Linux and virtualization tutorials.
101 out, I’ll see you in the next one! 🚀