Lab Entry — Day 0:
“Windows update ran. Fedora didn’t boot. Data recovered. Sanity: debatable.”
The Break That Started Everything
I didn’t plan to go deep into virtualisation. It happened because something broke.
What Actually Happened (The Timeline)
Here’s the sequence, because the specifics matter:
| Time | Event |
|---|---|
| T+0 | Windows runs an update in the background on the shared drive |
| T+0 | Windows update rewrites the bootloader, blows past GRUB |
| T+1 min | Reboot. GRUB gone. Fedora doesn’t boot. |
| T+2 hrs | Live USB recovery attempt. Chroot. grub2-install. Partial recovery. |
| T+4 hrs | Most data recovered. Configs partially intact. Some work lost. |
| T+1 day | Full reinstall. Reconfiguration. Environment rebuilt from scratch. |
The data loss was manageable. The downtime was the actual problem. A full day of rebuilding an environment that should have never broken in the first place.
That was the moment I stopped trusting bare metal for my daily workflow.
Not dual boot. Not “I’ll be careful next time.” I wanted isolation. Proper virtualisation. A setup where Windows can do whatever it wants inside a box, and my host stays clean no matter what.
This is the story of building that setup — mistakes, surprises, dead ends, and all.
The Machine (Context Matters Here)
Before anything, here’s what I’m working with:
| Component | Details |
|---|---|
| Machine | Acer Predator PHN16-71 |
| CPU | Intel Core i7-13th Gen (HX series) |
| GPU | NVIDIA RTX 4070 Max-Q (Optimus — this matters later) |
| RAM | 32 GB |
| Display | 2560×1600 internal + 2560×1440 external (G27q-20) |
| Host OS | Fedora Silverblue (migrated from standard Fedora) |
The GPU situation becomes relevant when I get to the GPU passthrough section. Spoiler: Optimus is the villain.
Why Fedora Silverblue?
After the dual-boot incident, I made a deliberate switch from standard Fedora to Fedora Silverblue.
The reasoning was simple:
- Immutable base OS — I can’t accidentally break the system layer
- Layered packages and containers for dev work
- VMs for everything OS-level — Windows, distro testing, lab stuff
- Atomic updates — rollback if something goes wrong
The tradeoff? Documentation is thinner. Some guides assume a mutable system. Paths and tooling work slightly differently, and you end up spending time figuring out where things live rather than how they work.
Getting the Virtualisation Stack onto Silverblue
On standard Fedora you’d dnf install everything and move on. On Silverblue, /usr is read-only. You can’t install packages the normal way. Everything that touches the base system goes through rpm-ostree.
Here’s exactly what I layered:
```bash
# Layer the full virtualisation stack onto the immutable base
rpm-ostree install \
  qemu-kvm \
  libvirt \
  libvirt-daemon-config-network \
  libvirt-daemon-kvm \
  virt-manager \
  virt-install \
  virt-viewer \
  edk2-ovmf \
  swtpm \
  swtpm-tools

# Reboot to apply the layered packages (required on Silverblue)
systemctl reboot
```

After the reboot, enable and start the daemon:

```bash
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER
# Log out and back in for the group change to take effect
```

rpm-ostree vs Toolbox — The Split
This is the decision you make on every tool on Silverblue:
| Tool | Where it goes | Why |
|---|---|---|
| qemu-kvm, libvirt, virt-manager | rpm-ostree install | Needs to be on the host, interacts with the kernel |
| swtpm, edk2-ovmf | rpm-ostree install | VM firmware — host-level |
| virsh, virt-install | rpm-ostree install | Talks directly to libvirt on host |
| Dev tools, compilers, CLIs | toolbox container | Don’t pollute the base |
| GUI apps | Flatpak | Sandboxed, clean |
The rule of thumb: if it needs to talk to the kernel or a system daemon, it goes in rpm-ostree. Everything else stays out of the base layer.
The SELinux Wall on /var/mnt/vms/
This one took time to figure out and most guides don’t mention it.
I store VM images on a separate partition mounted at /var/mnt/vms/. When libvirt tries to access files there, SELinux blocks it — because the directory doesn’t have the right context label that libvirt expects.
The fix:
```bash
# Check what context libvirt expects
ls -laZ /var/lib/libvirt/images/

# Apply the correct SELinux context to your custom storage path
sudo semanage fcontext -a -t virt_image_t "/var/mnt/vms(/.*)?"
sudo restorecon -Rv /var/mnt/vms/
```

Without this, you get a cryptic permission-denied error when trying to start the VM — even as root. SELinux is doing its job, but the error message doesn’t tell you that.
Alternatively, you can tell libvirt about the storage path properly through a storage pool, which also gets the context right:
```bash
virsh pool-define-as --name vms --type dir --target /var/mnt/vms
virsh pool-autostart vms
virsh pool-start vms
```

Using a defined storage pool is the cleaner approach — libvirt manages the path, SELinux labels are handled, and you get pool-level management on top.
The Read-Only /usr Reality
On Silverblue, /usr is a read-only bind mount from the ostree image. This trips people up in a few specific ways with virtualisation:
- QEMU binary path is /usr/bin/qemu-system-x86_64 — fine, it’s there after rpm-ostree install, but you can’t manually drop binaries here
- OVMF firmware lives in /usr/share/edk2/ovmf/ — accessible, but if you need a custom firmware build, you’re placing it elsewhere and pointing libvirt at it manually
- libvirt configs live in /etc/libvirt/ — /etc stays writable on Silverblue, so VM XML configs and network definitions work fine
In practice, once you understand the split — read-only base, writable /etc and /var, overlays for layered packages — it stops being confusing and starts being predictable.
Why Virtualisation Over Dual Boot?
Let me be real about this.
I wasn’t moving to Windows. I was putting Windows inside a cage on Linux. My actual needs were:
- Microsoft Office (occasional, can’t avoid it)
- AutoCAD (rare, one-time project use)
- Isolation — keep Windows from touching my host
- Homelab experimentation — load balancers, multi-VM setups, distro testing
- Performance curiosity — I wanted to see how good KVM actually is
Dual boot fails at one fundamental thing: context switching has a real cost. Reboot cycles. Different states. No snapshots. No “undo.”
Virtualisation gives you:
- ✅ Isolation by default
- ✅ Snapshots before risky operations
- ✅ Reproducible environments
- ✅ Fast recovery when things break
- ✅ The host stays untouched
The Stack I Landed On
I kept it close to native Linux tooling. No exotic orchestration, no Proxmox-style hypervisor layer.
Host: Fedora Silverblue
Hypervisor: QEMU + KVM
Management: libvirt
GUI: virt-manager
Display: SPICE (local) / VNC (remote via Tailscale)
VM Storage: Separate partition (/var/mnt/vms/)

The architecture looks like this:

```mermaid
graph TD
  A["Fedora Silverblue<br/>(Immutable Base)"] --> B["libvirt / QEMU-KVM"]
  B --> C["Windows 11 VM"]
  B --> D["Linux Test VMs"]
  C --> E["SPICE Display<br/>(Local)"]
  C --> F["VNC<br/>(Remote via Tailscale)"]
  C --> G["Virtio Network<br/>(NAT - default)"]
  A --> H["Toolbox Containers<br/>(Dev Work)"]
  A --> I["Flatpaks<br/>(Apps)"]
```
No external management layer. Just the standard QEMU/KVM/libvirt stack that ships with Fedora.
Building the Windows 11 VM
The Config Decisions
Windows 11 has specific requirements — UEFI, Secure Boot, TPM 2.0. These aren’t optional. The key decisions I landed on:
- 24 GB RAM allocated — generous, but Windows 11 with Office open eats memory fast
- 8 vCPUs — half the physical core count, leaves headroom for the host
- CPU mode: host-passthrough — the VM sees your actual CPU features, not an emulated subset. This is the single biggest performance win you can make without touching anything else. The VM scheduler, branch prediction, and instruction extensions all work as if Windows were running on bare metal.
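In libvirt domain XML, those allocation decisions come out to roughly this — a sketch, with the exact values being my choices rather than requirements:

```xml
<!-- Sketch of the allocation above: 24 GiB RAM, 8 vCPUs, host-passthrough CPU -->
<memory unit='GiB'>24</memory>
<vcpu placement='static'>8</vcpu>
<cpu mode='host-passthrough' check='none'/>
```

virt-manager writes the same elements for you when you pick these options in the GUI; the XML is just what lands in /etc/libvirt/qemu/.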
Machine Type: pc-q35 vs i440fx
The machine type setting is easy to overlook in virt-manager — it’s tucked in the overview tab. You have two main choices:
| Machine Type | Based on | Use case |
|---|---|---|
| pc-i440fx | 1990s Intel 440FX chipset | Legacy, broad compatibility |
| pc-q35 | Intel ICH9 chipset (2007+) | Modern, PCIe, required for Secure Boot |
Always use pc-q35 for modern guests. Here’s why it matters:
- i440fx uses an ISA bridge and legacy PCI — no proper PCIe topology
- pc-q35 gives you a real PCIe root complex, PCIe root ports, and proper PCIe device hierarchy
- Secure Boot + OVMF requires Q35
- Windows 11 TPM passthrough behaves better on Q35
- Any PCIe device passthrough (including eventual GPU passthrough attempts) needs Q35
The version suffix (the 10.1 in pc-q35-10.1) is the QEMU machine version — it pins the emulated hardware to a specific QEMU release’s feature set so the VM stays consistent across QEMU updates.
Firmware: UEFI + Secure Boot + TPM
In virt-manager, this means selecting UEFI firmware (the OVMF_CODE_4M.secboot.qcow2 variant from the edk2 package) and enabling Secure Boot with enrolled keys. The firmware loader path points to /usr/share/edk2/ovmf/ — that’s where the edk2-ovmf package installs it after your rpm-ostree install.
For TPM, the emulated tpm-crb model backed by swtpm handles it in software — no physical TPM chip needed. This satisfies Windows 11’s TPM 2.0 requirement cleanly.
Windows 11 needs all three — UEFI, Secure Boot, TPM. Miss one and the installer refuses to proceed. Set these up through virt-manager before you even boot the ISO.
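In domain XML terms, the UEFI + Secure Boot + TPM trio looks roughly like this — a sketch assuming libvirt’s firmware auto-selection (firmware='efi'); the machine version and available features depend on your libvirt/edk2 build:

```xml
<os firmware='efi'>
  <type arch='x86_64' machine='pc-q35-10.1'>hvm</type>
  <firmware>
    <feature enabled='yes' name='secure-boot'/>
    <feature enabled='yes' name='enrolled-keys'/>
  </firmware>
</os>
<!-- inside <devices>: software TPM 2.0 backed by swtpm -->
<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>
```

With firmware='efi', libvirt picks the matching OVMF secboot image itself — you don’t hardcode the /usr/share/edk2/ovmf/ path.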
Hypervisor Enlightenments (The Hidden Performance Layer)
This section is underrated in most guides. In virt-manager, it’s under the CPU section — enable “Copy host CPU configuration” and then check the HyperV features list. I turned on the full set: relaxed timing, virtual APIC, spinlock optimisation, virtual processor index, synthetic timer, frequency reporting, TLB flush, IPI optimisations, EVMCS, and AVIC.
What this does: it tells Windows it’s running on Hyper-V compatible hardware. Windows then optimises its own scheduler, memory management, and timer handling specifically for the virtualised environment. You’re not tricking Windows — you’re giving it accurate information so it can make better decisions. The result is smoother performance without touching CPU pinning or hugepages.
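In the domain XML, that checklist maps onto libvirt’s <hyperv> feature block — roughly this shape (a sketch; the element names are libvirt’s, but availability of the newer ones like evmcs and avic depends on your QEMU/libvirt versions):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>      <!-- relaxed timing -->
    <vapic state='on'/>        <!-- virtual APIC -->
    <spinlocks state='on' retries='8191'/>
    <vpindex state='on'/>      <!-- virtual processor index -->
    <synic state='on'/>        <!-- synthetic interrupt controller -->
    <stimer state='on'/>       <!-- synthetic timers -->
    <frequencies state='on'/>  <!-- frequency reporting -->
    <tlbflush state='on'/>
    <ipi state='on'/>
    <evmcs state='on'/>
    <avic state='on'/>
  </hyperv>
</features>
```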
Storage: qcow2 on a Dedicated Partition
The disk is a qcow2 file at /var/mnt/vms/windows.qcow2, attached to the VM via a virtio bus. That last part matters — see the virtio section below.
| Feature | qcow2 | raw |
|---|---|---|
| Snapshots | ✅ Yes | ❌ No |
| Thin provisioning | ✅ Yes | ❌ No |
| Portable | ✅ Easy | ⚠️ Larger |
| Performance ceiling | Slightly lower | Higher |
Raw is faster. qcow2 is practical. For a daily-driver VM with snapshots as your safety net, qcow2 wins.
The image lives on /var/mnt/vms/ — a separate partition mounted outside the root filesystem. This keeps the host clean and makes backups straightforward.
Networking (The One Thing That Just Worked)
Default NAT via libvirt. That’s it. The network is set to default (libvirt’s built-in NAT network), with a virtio model NIC.
The VM gets an IP in the 192.168.122.0/24 range, NATs through the host, and has full internet access. No bridging, no manual config, no fiddling.
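That “default” network is itself just libvirt XML under the hood — virsh net-dumpxml default shows something close to this (these are the stock addresses libvirt ships with):

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```

The host gets 192.168.122.1 on virbr0, the VM gets a DHCP lease from the range, and iptables/nftables rules handle the NAT.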
This was the most pleasant surprise of the whole setup.
virtio — Why Every Device Uses It
Every device in this VM — disk, network, display — uses virtio. This is deliberate.
QEMU can emulate real hardware — an Intel e1000 NIC, an IDE controller, a Realtek audio chip. That sounds useful, but it means the VM runs the actual device driver against an emulated hardware stack. Every I/O operation has to go: guest driver → emulated hardware → QEMU userspace → kernel KVM → host hardware.
virtio is a paravirtualised interface. The guest driver knows it’s in a VM and communicates directly with the hypervisor through a shared memory ring buffer. The stack becomes: guest driver → KVM → host hardware. One layer removed.
The practical difference:
| Bus | Mechanism | Performance | Driver needed |
|---|---|---|---|
| IDE / SATA | Full emulation | Slowest | Built-in |
| virtio | Paravirtualised | Fastest | virtio-win ISO |
| e1000 NIC | Full emulation | Slow | Built-in |
| virtio-net | Paravirtualised | ~line speed | virtio-win ISO |
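In the domain XML, “everything on virtio” reduces to device stanzas along these lines — a sketch; the driver attributes (like discard) are my choices, not requirements:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/mnt/vms/windows.qcow2'/>
  <target dev='vda' bus='virtio'/>  <!-- virtio block, not IDE/SATA -->
</disk>
<interface type='network'>
  <source network='default'/>       <!-- libvirt's NAT network -->
  <model type='virtio'/>            <!-- virtio-net, not e1000 -->
</interface>
```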
For Windows, the virtio drivers ship in a separate ISO — virtio-win.iso from Fedora’s repos. Mount it alongside the Windows installer and install the drivers during setup, or grab them afterward from Device Manager.
```bash
# Get the virtio-win ISO on Silverblue
rpm-ostree install virtio-win

# ISO lands at: /usr/share/virtio-win/virtio-win.iso
```

SPICE — What the Config Actually Does
There are a few specific decisions in the SPICE setup worth calling out:
Image compression is off — SPICE supports on-the-fly image compression (like JPEG) to reduce bandwidth. On a local connection through virt-manager on the same machine, compression adds CPU overhead and degrades image quality for zero benefit. Turn it off. If you were doing remote SPICE over a slow link, you’d enable it.
The spicevmc channel — This is a virtio serial channel that SPICE uses for guest agent communication. It’s what enables clipboard sharing between host and guest, automatic display resolution adjustment when you resize the virt-manager window, and mouse cursor integration (no capture/release required). Without the QEMU Guest Agent installed in Windows and this channel present, you lose all of that.
virtio video with no 3D acceleration — The display adapter is the paravirtualised virtio-vga, not a real GPU. It handles 2D display output through SPICE well enough for desktop use. 3D acceleration through virtio-vga without GPU passthrough is limited and often causes more issues than it solves, so it’s off.
USB redirection slots — The config includes two SPICE USB redirection devices. These let you forward a USB peripheral from the host into the VM through the SPICE connection, without full USB passthrough. Useful for one-off dongles or devices you don’t want permanently assigned to the VM.
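Pulled together, those SPICE decisions look roughly like this in the domain XML (a sketch — the element names are libvirt’s, the exact combination is mine):

```xml
<graphics type='spice' autoport='yes'>
  <image compression='off'/>           <!-- local connection: skip compression -->
</graphics>
<channel type='spicevmc'>              <!-- agent channel: clipboard, resize, mouse -->
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>
<video>
  <model type='virtio'/>               <!-- 2D virtio display, no 3D acceleration -->
</video>
<redirdev bus='usb' type='spicevmc'/>  <!-- two USB redirection slots -->
<redirdev bus='usb' type='spicevmc'/>
```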
VM Management: Snapshots, Autostart, Backup
This is the part that makes the whole setup worth it.
Snapshots — your safety net:
```bash
# Take a snapshot before anything risky
virsh snapshot-create-as win11 \
  --name "before-office-install" \
  --description "Clean Windows 11, drivers installed" \
  --atomic

# List snapshots
virsh snapshot-list win11

# Restore if things go wrong
virsh snapshot-revert win11 before-office-install

# Delete when you no longer need it
virsh snapshot-delete win11 before-office-install
```

The --atomic flag makes the operation all-or-nothing — the snapshot either completes fully or fails with no changes, never leaving a half-taken snapshot behind. (For filesystem-consistent snapshots of a running guest, pair it with --quiesce and the QEMU Guest Agent.) This is the feature that makes qcow2 worth using over raw.
Autostart (optional — I leave it off):
```bash
# Start the VM automatically when libvirtd starts
virsh autostart win11

# Disable autostart
virsh autostart win11 --disable
```

I keep autostart off. I start the VM when I need it. Keeps resources free when I’m not using Windows.
Backup — the dead simple approach:
```bash
# Shut down the VM first (or use a snapshot for live backup)
virsh shutdown win11

# Copy the qcow2 — that's it
cp /var/mnt/vms/windows.qcow2 /path/to/backup/windows-$(date +%Y%m%d).qcow2

# The XML config lives here — back this up too
virsh dumpxml win11 > ~/backups/win11.xml
```

The entire VM is one file. Backup is cp. Restore is copying the file back and running virsh define win11.xml. This simplicity is why I chose virtualisation over any other approach.
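If you do this often, the cp-and-date dance is worth wrapping. A minimal sketch — backup_vm and the demo paths are hypothetical helpers of mine, not libvirt tooling:

```shell
# Hypothetical helper wrapping the cp-based backup above.
# Copies <image> to <dir>/<name>-YYYYMMDD.qcow2 and prints the backup path.
backup_vm() {
  local src="$1" dest_dir="$2"
  mkdir -p "$dest_dir"
  local base out
  base="$(basename "${src%.qcow2}")"
  out="${dest_dir}/${base}-$(date +%Y%m%d).qcow2"
  # --reflink=auto: instant copy-on-write clone on Btrfs (Silverblue's default
  # filesystem); silently falls back to a normal copy elsewhere
  cp --reflink=auto "$src" "$out"
  printf '%s\n' "$out"
}

# Demo on a throwaway file; a real run would target /var/mnt/vms/windows.qcow2
# after `virsh shutdown win11`:
demo_dir="$(mktemp -d)"
printf 'not-a-real-image' > "${demo_dir}/windows.qcow2"
backup_vm "${demo_dir}/windows.qcow2" "${demo_dir}/backups"
```
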
What Didn’t Go Smoothly
1. Fedora Silverblue Documentation Gap
Silverblue is genuinely great, but the virtualisation documentation lags behind. Most guides assume you’re on a mutable system. The specific friction points I hit:
- Most libvirt + virt-manager guides say sudo dnf install — that doesn’t work on Silverblue, and the error message doesn’t tell you why
- SELinux blocking access to the custom VM storage path (covered above) — not documented in the Silverblue virtualisation guides
- rpm-ostree install requires a reboot to activate layered packages, so you install everything, try to start libvirtd, and it’s not there yet
- The Cockpit virtualisation plugin (cockpit-machines) behaves differently on Silverblue because Cockpit itself is a layered package
You piece it together from multiple sources. It’s manageable, but factor in extra time for “why doesn’t this path exist” debugging.
2. GPU Passthrough — The Wall I Ran Into
This is the section I was most excited to write about. Not because it worked, but because of how it failed.
The goal was to pass the RTX 4070 Max-Q directly into the Windows VM. Full GPU access. Native performance.
Here’s what the hardware architecture looks like on an Optimus laptop:
```mermaid
graph LR
  CPU["CPU"] --> IGPU["Intel iGPU<br/>(Integrated GPU)"]
  IGPU -->|"Display Output"| Screen["Internal Display"]
  CPU <-->|"PCIe"| NVGPU["RTX 4070 Max-Q<br/>(Discrete GPU)"]
  NVGPU -->|"Optimus - frames routed via iGPU"| Screen
```
The problem is Optimus.
On a desktop, you’d isolate the discrete GPU using VFIO, bind it to the vfio-pci driver, and pass it cleanly to a VM. The iGPU handles the host display. The dGPU goes entirely to the guest.
On an Optimus laptop, there’s no clean isolation. The discrete GPU’s output is routed through the integrated GPU before reaching the display. There’s no native connector on the dGPU to the screen — it depends on the iGPU for output.
How Far I Actually Got
I didn’t just read about this problem. I went through the steps:
Step 1 — Find the GPU’s PCI address:

```bash
lspci -nn | grep -i nvidia
# Output: 01:00.0 VGA compatible controller [0300]: NVIDIA ... RTX 4070 ... [10de:2786]
# PCI ID: 10de:2786
```

Step 2 — Enable IOMMU in the kernel:

```bash
# On standard Fedora you'd add this to /etc/default/grub.
# On Silverblue, kernel args go through rpm-ostree:
sudo rpm-ostree kargs --append=intel_iommu=on --append=iommu=pt
```

After reboot, verify:
```bash
dmesg | grep -i iommu
# Should show: DMAR: IOMMU enabled
```

Step 3 — Check the IOMMU group:

```bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done | grep -i nvidia
```

This is where the problem became concrete. The RTX 4070 Max-Q was sharing an IOMMU group with other devices — specifically the PCIe root port it connects through, and the Intel iGPU. On a desktop with proper PCIe slot isolation, each GPU typically lands in its own IOMMU group. On this laptop, they’re grouped together.
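The parameter expansions in that loop are dense. Pulled out as a standalone function (my own naming, not a standard tool), the group-number extraction looks like this and can be sanity-checked without real hardware:

```shell
# iommu_group_of: extract the group number from a sysfs device path, e.g.
#   /sys/kernel/iommu_groups/14/devices/0000:01:00.0  ->  14
iommu_group_of() {
  local path="$1"
  path="${path#*/iommu_groups/}"   # strip everything up to the group directory
  printf '%s\n' "${path%%/*}"      # keep only the group number
}

iommu_group_of /sys/kernel/iommu_groups/14/devices/0000:01:00.0
# → 14
```
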
Step 4 — Attempt VFIO binding anyway:
```bash
# Identify the GPU's vendor:device IDs
# RTX 4070 Max-Q: 10de:2786
# NVIDIA Audio:   10de:22be

# Add VFIO kernel module config
echo "options vfio-pci ids=10de:2786,10de:22be" | sudo tee /etc/modprobe.d/vfio.conf
```

The issue: binding vfio-pci to the dGPU while the iGPU (which the display depends on) is still in the same IOMMU group causes the display to go dark. The host loses output because you’ve effectively yanked a device the display pipeline depends on.
Step 5 — The MUX switch dead end:
Some Predator models have a MUX switch that lets you switch between iGPU-only and dGPU-direct mode at the BIOS level. This bypasses Optimus entirely — the dGPU connects directly to the display, the iGPU is off, and you can isolate the dGPU in its own IOMMU group cleanly.
My PHN16-71 doesn’t expose this in firmware. The BIOS has no MUX/Optimus toggle.
Where it ended:
No MUX switch + shared IOMMU group + Optimus = no clean path to GPU passthrough without risking display stability on the host. I could have pursued ACS override patches (which force IOMMU group splits at kernel level), but that’s a security regression and known to cause instability.
The honest summary: I got IOMMU enabled, identified the group, attempted the bind, lost display output, rebooted. The hardware doesn’t support clean isolation on this config.
What you need for clean passthrough on a laptop:
- MUX switch — a hardware switch that can route the dGPU directly to the display, bypassing Optimus
- Or an external monitor connected to a port that’s wired directly off the dGPU (some Thunderbolt ports qualify)
Lesson: GPU passthrough on Optimus laptops without a MUX switch is not worth the pain. Save it for desktops or laptops with explicit MUX/hybrid mode support.
I dropped it. The VM runs on the virtio virtual GPU with SPICE. Not ideal for gaming. Perfectly fine for everything else.
3. SPICE Over Tailscale — Didn’t Happen
The plan was clean: use SPICE for remote access to the VM over Tailscale.
The reality: I couldn’t find a reliable SPICE client for Windows that handled this cleanly. SPICE works great when you’re on the same machine running virt-manager. Remote access over the network is a different story — client support is patchy, and the setup friction wasn’t worth it for my use case.
What I use instead: VNC over Tailscale.
Less elegant. Works every time.
What Actually Worked Well
Performance (The Real Surprise)
I expected the VM to feel like a VM — sluggish, slightly off, that classic “something’s wrong” feeling.
What I got instead: a Windows 11 environment that felt faster than my old ultrabook running Windows natively.
That’s not a typo.
host-passthrough CPU mode + virtio drivers + HyperV enlightenments = a VM that genuinely doesn’t feel like one for normal workloads. Office, browser, AutoCAD — all usable. The RTX 4070 still handles host-side GPU tasks, so the Linux side has full GPU acceleration while the VM runs on the virtual display.
Isolation (The Actual Goal)
This is where virtualisation pays off in ways dual boot never can:
- Windows lives inside windows.qcow2 — one file
- My Fedora host is completely untouched by anything Windows does
- Snapshots before updates, installs, or experiments
- If Windows breaks: restore snapshot, or spin up fresh. Minutes, not hours.
The peace of mind is real.
Architecture: Full Picture
Here’s the complete picture of where the setup landed:
```mermaid
graph TD
  subgraph "Hardware"
    HW["Acer Predator PHN16-71<br/>i7-13HX · RTX 4070 Max-Q · 32GB RAM"]
  end
  subgraph "Host Layer"
    SB["Fedora Silverblue<br/>(Immutable Base OS)"]
    TB["Toolbox Containers<br/>(Dev / CLI Tools)"]
    FP["Flatpaks<br/>(GUI Apps)"]
  end
  subgraph "Virtualisation Layer"
    LV["libvirt + QEMU/KVM"]
    WIN["Windows 11 VM<br/>24GB RAM · 8 vCPU · qcow2"]
    LX1["Linux VM 1<br/>(Distro Testing)"]
    LX2["Linux VM 2<br/>(Load Balancer Lab - WIP)"]
  end
  subgraph "Access"
    SP["SPICE<br/>(Local Access)"]
    VNC["VNC via Tailscale<br/>(Remote Access)"]
  end
  HW --> SB
  SB --> TB
  SB --> FP
  SB --> LV
  LV --> WIN
  LV --> LX1
  LV --> LX2
  WIN --> SP
  WIN --> VNC
```
Lessons Learned (What I’d Tell Past Me)
1. Virtualisation > Dual Boot for Daily Workflows
You trade a small performance ceiling for massive stability gains. On modern hardware with KVM, that ceiling is high enough that you won’t notice it for most workloads.
2. Start Simple, Then Optimise
Don’t open with hugepages, CPU pinning, or NUMA tuning. Start here:
- ✅ host-passthrough CPU
- ✅ virtio disk and network
- ✅ HyperV enlightenments
- ✅ Default NAT networking
- ✅ qcow2 storage

Get a working, stable VM first. Optimise after you understand the baseline.
3. GPU Passthrough on Optimus Laptops: Don’t Start There
Unless you have a MUX switch or an external monitor on a dedicated dGPU output, you’re fighting hardware constraints, not software ones. Skip it for now.
4. Silverblue Changes Your Mental Model
You stop trying to fix the base OS. You start isolating problems by layer:
- Container for a dev tool that needs weird dependencies
- VM for an OS-level experiment
- Host stays as clean as the day you installed it
This is a different way of thinking about a workstation, and it’s genuinely better once it clicks.
5. VNC Is Underrated for Remote Access
SPICE has a reputation as the “better” remote display protocol for VMs. And it is — locally. For remote access via Tailscale, VNC is simpler to set up, more widely supported, and it just works.
Current State & What’s Next
Where I landed:
- ✅ Fedora Silverblue — stable, daily driver
- ✅ Windows 11 VM — usable for Office, AutoCAD, testing
- ✅ Remote access via VNC + Tailscale
- ✅ Multiple Linux VMs for distro testing
- ❌ GPU passthrough — abandoned (Optimus constraint)
What’s coming:
- Load balancer lab (HAProxy / Nginx across VMs)
- Multi-VM orchestration experiments
- Better remote workflow — possibly RDP inside the VM instead of VNC
- More Linux VMs: different distros, different roles
This has shifted from “VM to run Windows” to an actual homelab setup. That wasn’t the plan, but it’s a better one.
Conclusion
This started as damage control after a bad Windows update broke my Fedora install.
It became something more useful: a setup where I can break things freely, recover fast, and keep my host OS clean no matter what happens inside any VM.
Virtualisation on Linux — specifically QEMU/KVM on Fedora Silverblue — isn’t as hard as the documentation gap makes it seem. It’s a weekend of setup, some reading, and a few failed experiments (GPU passthrough, I see you). What you get out of it is worth far more than dual boot ever offered.
If you’re still dual booting as your “Linux + Windows” solution, I’d genuinely recommend trying this instead. The barrier is lower than you think, and the upside is real.
Try It Yourself
If this sparked something, here’s a starting point:
```bash
# Install the stack on Fedora (rpm-ostree for Silverblue)
rpm-ostree install qemu-kvm libvirt virt-manager

# Enable and start libvirt
sudo systemctl enable --now libvirtd

# Add yourself to the libvirt group
sudo usermod -aG libvirt $USER
```

Then open virt-manager, create a new VM, and start from there. The GUI is intuitive. Use host-passthrough for CPU mode. Enable all virtio devices. Let NAT handle networking.
Break things inside the VM. Not on your machine.
Running Fedora Silverblue on an Acer Predator PHN16-71. All experiments documented as they happen.
Next entry: Load balancer lab — spinning up HAProxy across multiple VMs.
Tags: qemu kvm fedora-silverblue virtualisation homelab linux virt-manager windows-vm gpu-passthrough optimus
