#1
Hello, I am having difficulties installing the VirtIO drivers when installing Windows Server 2012 R2. I always get the message "No signed device drivers were found. Make sure that the installation media contains the correct drivers, and then click OK."
I followed the steps in both the official Proxmox guide and several forum posts; I have tried everything and still can't get it to work.
Surely it is some very silly mistake, or I don't know. If someone knows what I could try, I would seriously appreciate it.
Windows Server 2012 R2 x64
VirtIO 0.1.221
SCSI Controller: VirtIO SCSI
QEMU Agent: enabled (but see the note below)
Memory: 4 GB
Processors: 2 (1 socket, 2 cores)
Hard disk: 64 GB — cache: writeback — discard: enabled
Network: VirtIO (paravirtualized)
Note: the QEMU Agent is actually disabled, because with it enabled the VM cannot be stopped or shut down.
-
(Attachments: 1.png and 2.png, screenshots of the error)
-
-
#2
Hello,
Could you try an older version of VirtIO?
-
-
#3
Hi Moayad, I don't think that is the problem (I have seen forum posts where people use the latest version and it works perfectly), but I can try. Should I try just any version, or a specific one?
-
-
#4
The Windows installer is very picky about the location of drivers.
You can't just set e:\ as the driver location; you have to give the full path, like e:\virtiostor\server2012\amd64\ (or something like that).
-
-
#5
I tried again to install with VirtIO version 0.1.217; the problem finding the drivers persists. I have tried folder by folder and always get the same error... but if I let it scan, instead of browsing manually folder by folder, it finds these drivers (see attached photo).
If I install these drivers, will it work the same way? And which one would let me correctly install Windows Server 2012 R2, since it does not appear as an option?
-
(Attachment: 3.png, screenshot of the drivers found by scanning)
-
-
#6
I ran into similar odd issues, but I can confirm no issues using the virtio-win-0.1.215 version of the ISO.
The only weird thing to note: even after running the installer on the ISO, I had to manually install the correct SPICE driver (if in use) from Device Manager.
-
-
#7
Hi, so with this version you're saying the WS2012R2 drivers appear? If so, I will try again.
I don't know whether the SPICE driver is essential for the VM to work correctly; what would be the effect of not installing it?
-
-
#8
Correct, the CD's drivers were auto-detected normally during install for the disk drivers. The SPICE driver is only needed if you use SPICE access from the Proxmox GUI for a smoother console experience; for RDP and other remote access tools, I don't believe it's needed.
-
-
#9
Thank you @bleglord
Under Windows 8.1, virtio-win-0.1.215 also works (in the OS install process).
I tried virtio-win-0.1.221 before, but it gave the "no signed device drivers" error.
Cheers,
-
-
#10
Here to report that v0.1.225-2 is still broken. v0.1.215-2 (/viostor/2k12R2/amd64) worked for me as well.
-
-
#13
I have the same issue on 2 VMs based on Win2012R2. I tried the latest version (0.1.240) but I can't install the SCSI/QEMU driver; version 0.1.189 works fine.
-
-
#14
I had the same issue today. I used the 0.1.240 ISO with no success, switched to 0.1.248 with no success as well; 0.1.189 works like a charm.
-
-
#15
I had the same problem, and version 0.1.221 also worked just fine!
-
-
#16
Hello, keen to contribute my ten cents' worth to this knowledge byte, facilitating the migration of Windows Server 2012 R2 from the now less-favored VMware ESXi (ancient version 6.0 in my case) to Proxmox.
This is in the context of having suffered an exacerbation of RSI in my neck and mouse-clicking hand in the course of understanding and performing the delicate manipulation.
Using the .189 ISO as advised above seems to have been the deal maker, i.e. virtio-win-0.1.189.iso, rather than the more recent ISO versions .240 and .248 (even though the directory structure and path to the vioscsi.sys driver in the ISO seem to be the same, i.e. D:\vioscsi\2k12\amd64\vioscsi.inf).
To summarise: use the Proxmox built-in ESXi import tool. In my case I accepted the Proxmox ESXi import defaults, including the e1000e network driver, scsi0 for the HDD, sata0 when attaching the .189 ISO, and correcting the Windows installation type (8/8.x/2012/2012R2) in the import configuration window.
This completed without incident.
Then, when setting this up as a Proxmox VM: for the SCSI controller I chose the VirtIO SCSI single option rather than the default; for the HDD options I chose 'no cache', Async IO 'native', and 'Discard' not selected. The network device remained e1000e. Ensure the VirtIO .189 ISO remains attached.
Booting will stall as described above, which is when you need the command line to manually install the vioscsi driver. There is currently an excellent guide for this at https://hull.au/blog/install-virtio-drivers-from-windows-recovery/ which in summary involves these commands:
Code:
drvload D:\vioscsi\2k12\amd64\vioscsi.inf
diskpart
list disk
list volume   (to find the Windows Server 2012 R2 drive letter, e.g. E:)
exit          (leave diskpart)
dism /Image:E:\ /Add-Driver /Driver:D:\vioscsi\2k12\amd64\vioscsi.inf
exit          (you should be taken back to the blue screen; resume installation)
In the Proxmox video https://www.proxmox.com/en/services…vironment/proxmox-ve-import-wizard-for-vmware there is post-installation advice, but I don't think it is needed here, as the VirtIO driver is already installed. I am happy to take advice on this, though.
HTH!
MSandy
-
-
#18
For anyone who doesn't already know: pay attention to VirtIO versions > 0.1.208 and < 0.1.266, which have an important issue in the VirtIO SCSI driver.
Introduction
VirtIO drivers are paravirtualized drivers for KVM/Linux (see http://www.linux-kvm.org/page/Virtio). In short, they enable direct (paravirtualized) access to devices and peripherals for the virtual machines using them, instead of slower, emulated ones.
A quite extended explanation of VirtIO drivers can be found at http://www.ibm.com/developerworks/library/l-virtio.
At the moment these kinds of devices are supported:
- block (disk drives), see Paravirtualized Block Drivers for Windows
- network (ethernet cards), see Paravirtualized Network Drivers for Windows
- balloon (dynamic memory management), see Dynamic Memory Management
You can maximize performance by using VirtIO drivers. The availability and status of the VirtIO drivers depend on the guest OS and platform.
Windows OS Support
Windows does not include native support for VirtIO devices. However, there is excellent external support through open-source drivers, which are available compiled and signed for Windows:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/?C=M;O=D
Note that this repository provides not only the most recent, but also many older versions. Those older versions can still be useful when a Windows VM shows instability or incompatibility with a newer driver version.
The binary drivers are digitally signed by Red Hat and work on 32-bit and 64-bit versions of Windows.
Installation
Using the ISO
You can download the latest stable version or the most recent build of the ISO.
Normally the drivers are pretty stable, so one should try the most recent release first.
You can access the ISO in a VM by mounting the ISO with a virtual CD-ROM/DVD drive on that VM.
Wizard Installation
You can use an easy wizard to install all, or a selection, of VirtIO drivers.
- Open the Windows Explorer and navigate to the CD-ROM drive.
- Simply execute (double-click on) virtio-win-gt-x64
- Follow its instructions.
- (Optional) use the virtio-win-guest-tools wizard to install the QEMU Guest Agent and the SPICE agent for an improved remote-viewer experience.
- Reboot VM
Manual Installation
- Open the Windows Explorer and navigate to the CD-ROM drive.
- There you can see that the ISO consists of several directories, each having sub-directories for supported OS version (for example, 2k19, 2k12R2, w7, w8.1, w10, …).
- Balloon
- guest-agent
- NetKVM
- qxl
- vioscsi
- …
- Navigate to the desired driver directories and respective Windows Version
- Right-click on the file of type "Setup Information"
- A context menu opens; select "Install" there.
- Repeat that process for all desired drivers
- Reboot VM.
Downloading the Wizard in the VM
You can also just download the most recent virtio-win-gt-x64.msi or virtio-win-gt-x86.msi from inside the VM, if you already have network access.
Then just execute it and follow the installation process.
Troubleshooting
Try an older version of the drivers first; if that does not help, ask in one of our support channels:
https://pve.proxmox.com/wiki/Get_support
Further Reading
https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html
http://www.linux-kvm.org/page/WindowsGuestDrivers
The source code of those drivers can be found here: https://github.com/virtio-win/kvm-guest-drivers-windows
http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
See also
- Paravirtualized Block Drivers for Windows
- Paravirtualized Network Drivers for Windows
- Dynamic Memory Management
Install VirtIO drivers for Windows Server 2012
In this article, we explain how to install the VirtIO drivers on Windows Server 2012.
Prerequisites
- VPS with Windows Server 2012 installed.
- You have to be logged in as an administrator.
Step 1: Log in with RDP into Windows Server 2012
Connect to your server with the login credentials which you can find in your client area.
Step 2: Downloading the VirtIO installer
Go to https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/ and open the most recent folder. Find the .exe file and click on it; in our case it's virtio-win-0.1.189. This will start the download.
When the download is done, run the application: open File Explorer, go to Downloads, and click on the downloaded file.
Step 3: The installation
Agree to the license terms and conditions and click Install.
Click Next.
Accept the terms in the License Agreement and click Next.
The features list contains: Balloon, Network, Pvpanic, Qemufwcfg, Qemupciserial, Viorng, Vioscsi, Vioserial, Viostor.
In this menu you can choose if and where features will be installed. In this case, we're keeping it as it is. Click Next to continue.
Now click Install to start the installation.
When the installation is done, click Finish to finish the installation.
Conclusion
Congratulations, you have successfully installed the VirtIO Drivers on Windows Server 2012.
Hi,
I tried to install the latest 225 on Windows Server 2012 R2 Standard, but it could not install and ended with an error. The system is up to date.
I tried on 4 different VMs with the same operating system and could not install it. The network card is disabled after the installation fails, so I had to switch to e1000.
Logs attached.
Thanks
Virtio-win-guest-tools_20221211182121.log
Virtio-win-guest-tools_20221211182121_000_virtio_win_gt_x64.msi.log
Posted December 10, 2021 at 12:50 PM MST; updated June 2, 2022 at 7:26 AM, by Kevin Locke
I recently configured a Windows 11 guest virtual machine on libvirt with the VirtIO drivers. This post is a collection of my notes for how to configure the host and guest. Most are applicable to any recent version of Windows.
For the impatient, just use my libvirt domain XML.
Host Configuration
Hyper-threading/Simultaneous Multithreading (SMT)
Many configuration guides recommend disabling hyper-threading on Intel chipsets before Sandy Bridge for performance reasons. Additionally, if the VM may run untrusted code, it is recommended to disable SMT on processors vulnerable to Microarchitectural Data Sampling (MDS).
RTC Synchronization
To keep RTC time in the guest accurate across suspend/resume, it is advisable to set SYNC_TIME=1 in /etc/default/libvirt-guests, which calls virsh domtime --sync after the guest is resumed. This causes the QEMU Guest Agent to call w32tm /resync /nowait in the guest, which synchronizes the clock with the configured w32time provider (usually NTP, although VMICTimeProvider could be used to sync with the Hyper-V host).
Ignore the comment in older libvirt versions that SYNC_TIME is not supported on Windows, which was fixed in qemu/qemu@105fad6bb22.
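On the host, this is a one-line change; a minimal sketch, assuming the Debian-style path for the libvirt-guests defaults file:

# /etc/default/libvirt-guests
SYNC_TIME=1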
Wayland Keyboard Inhibit
To send keyboard shortcuts (i.e. key combinations) to the virtual machine
viewer that has focus, rather than sending them to the Wayland compositor, the
compositor must support the Wayland keyboard shortcut inhibition
protocol.
For example, Sway gained support for this protocol in Sway 1.5 (swaywm/sway#5021). When using Sway 1.4 or earlier in the default configuration, pressing Win + d would invoke dmenu rather than display or hide the desktop in the focused Windows VM.
Guest Configuration
BIOS vs UEFI (with SecureBoot)
There are trade-offs to consider when choosing between BIOS and UEFI:
- Windows 11 requires UEFI which is Secure Boot capable. Although the secure boot check can be bypassed, allowing Windows 11 to be installed, it is an unsupported configuration.
- Libvirt forbids internal snapshots with pflash firmware, which is used for UEFI variable storage, thus preventing internal snapshots (RH Bug 1881850). Libvirt also lacks support for basic features with external snapshots (RH Bug 1519002), such as reverting or deleting external snapshots. This means snapshots for guests with UEFI may not be supported for a while. (Which was true in 2017 and is still true in 2021.) There are some partial workarounds, such as libvirt disk-only snapshots or QEMU disk snapshots managed manually, as described by Chris Siebenmann.
- The Windows Driver Signing Policy requires drivers to be WHQL-signed if Secure Boot is enabled on Windows 8 and later. It will refuse to boot with unsigned drivers if Secure Boot is enabled. This is problematic for the VirtIO drivers, for which Red Hat donates non-WHQL-signed binaries, but only provides WHQL-signed drivers to customers (Bug 1844726). (Note: As of 0.1.204 and later, most drivers are signed, excluding ivshmem, pvpanic, and possibly others.)
If UEFI is selected, an image must be chosen for the pflash device firmware. I recommend OVMF_CODE_4M.ms.fd (which pairs with OVMF_VARS_4M.ms.fd, which enables Secure Boot and includes Microsoft keys in KEK/DB) or OVMF_CODE_4M.fd if Secure Boot is not desired. See ovmf.README.Debian for details.
Note: Be aware that UEFI does not support the QEMU -boot order= option. It does support the bootindex properties. For example, to boot from win10.iso, use -drive id=drive0,file=win10.iso,format=raw,if=none,media=cdrom,readonly=on -device ide-cd,drive=drive0,bootindex=1 instead of -cdrom win10.iso -boot order=d.
CPU Model
It may be preferable to choose a CPU model which satisfies the Windows Processor Requirements for the Windows edition which will be installed on the guest. As of this writing, the choices are Skylake, Cascadelake, Icelake, Snowridge, Cooperlake, and EPYC.
If the VM may be migrated to a different machine, consider setting check='full' on <cpu/> so that enforce will be added to the QEMU -cpu option and the domain will not start if the created vCPU doesn't match the requested configuration. This is not currently set by default. (Bug 822148)
CPU Topology
If topology is not specified, libvirt instructs QEMU to add a socket for each vCPU (e.g. <vcpu placement="static">4</vcpu> results in -smp 4,sockets=4,cores=1,threads=1). It may be preferable to change this for several reasons:
First, as Jared Epp pointed out to me via email, for licensing reasons Windows 10 Home and Pro are limited to 2 CPUs (sockets), while Pro for Workstations and Enterprise are limited to 4 (possibly requiring build 1903 or later to use more than 2). Similarly, Windows 11 Home is limited to 1 CPU while 11 Pro is limited to 2. Therefore, limiting sockets to 1 or 2 on these systems is strongly recommended.
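As an illustration, a minimal domain XML sketch presenting 4 vCPUs as 1 socket with 2 cores and 2 threads (the CPU mode and exact counts are assumptions to adapt to your host):

<vcpu placement='static'>4</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>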
Additionally, it may be useful, particularly on a NUMA system, to specify a topology matching (a subset of) the host and pin vCPUs to the matching elements (e.g. virtual cores on physical cores). See KVM Windows 10 Guest — CPU Pinning Recommended? on Reddit and PCI passthrough via OVMF: CPU pinning on ArchWiki.
Be aware that, on my single-socket i5-3320M system, the matching configurations I tried performed worse than the default. Some expertise is likely required to get this right.
It may be possible to reduce jitter by pinning vCPUs to host cores, pinning the emulator and iothreads to other host cores, and using a hook script with cset shield to ensure host processes don't run on the vCPU cores. See Performance of your gaming VM.
Note that it is possible to set max CPUs in excess of current CPUs for CPU hotplug. See Linux KVM – How to add/remove vCPU to guest on the fly? Part 9.
Hyper-V Enlightenments
QEMU supports several Hyper-V Enlightenments for Windows guests. virt-manager/virt-install enables some Hyper-V Enlightenments by default, but is missing several useful recent additions (virt-manager/virt-manager#154).
I recommend editing the libvirt domain XML to enable Hyper-V enlightenments which are not described as "nested specific"; in particular hv_stimer, which reduces CPU usage when the guest is paused.
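A sketch of what such a set of enlightenments might look like in the domain XML; the exact selection is a judgment call, and retries='8191' is the commonly suggested spinlocks value:

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
    <frequencies state='on'/>
  </hyperv>
</features>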
Memory Size
When configuring the memory size, be aware of the system requirements (4GB for Windows 11; 1GB for 32-bit and 2GB for 64-bit Windows 10) and Memory Limits for Windows and Windows Server Releases, which vary by edition.
Memory Backing
If shared memory will be used (e.g. for virtio-fs discussed
below), define a (virtual) NUMA zone and memory backing. The memory backing
can be backed by files (which are flexible, but can have performance issues if
not on hugetlbfs/tmpfs) or memfd (since QEMU 4.0, libvirt 4.10.0). The memory
can be Huge
Pages
(which have lower overhead, but can’t be swapped) or regular pages. (Note: If
hugepages are not configured, Transparent
Hugepages
may still be used, if THP is enabled
system-wide
on the host system. This may be advantageous, since it reduces translation
overhead for merged pages while still allowing swapping. Alternatively, it
may be disadvantageous due to increased CPU use for defrag/compact/reclaim
operations.)
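A minimal memfd-backed shared-memory sketch for the domain XML (QEMU 4.0 and libvirt 4.10.0 or later, per the above):

<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>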
Memory Ballooning
If memory ballooning will be used, set current memory to the initial amount and max memory to the upper limit. Be aware that the balloon size is not automatically managed by KVM. There was an Automatic Ballooning project which has not been merged. Unless a separate tool, such as oVirt Memory Overcommitment Manager, is used, the balloon size must be changed manually (e.g. using virsh qemu-monitor-command --hmp "balloon $size") for the guest to use more than "current memory". Also be aware that when the balloon is inflated, the guest shows the memory as "in use", which may be counter-intuitive.
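For example, to set the balloon target of a domain to 2048 MiB via the QEMU monitor (the domain name win11 is a placeholder):

virsh qemu-monitor-command win11 --hmp "balloon 2048"

virsh setmem win11 2048M --live should achieve the same through the balloon device.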
Machine Type
The Q35 machine type adds support for PCI-E, AHCI, PCI hotplug, and probably many other features, while removing legacy features such as the ISA bus. Historically it may have been preferable to use i440FX for stability and bug avoidance, but my experience is that it's generally preferable to use the latest Q35 version (e.g. pc-q35-6.1 for QEMU 6.1).
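In domain XML terms this is the machine attribute of the os type element (the version shown is an example; pick the newest your QEMU provides):

<os>
  <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
</os>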
Storage Controller
Paravirtualized storage can be implemented using either SCSI with virtio-scsi and the vioscsi driver, or bulk storage with virtio-blk and the viostor driver. The choice is not obvious. In general, virtio-blk may be faster, while virtio-scsi supports more features (e.g. pass-through, multiple LUNs, CD-ROMs, more than 28 disks). Citations:
- QEMU Configuring virtio-blk and virtio-scsi Devices has a detailed comparison.
- virtio-blk is faster than virtio-scsi in Fam Zheng's LC3-2018 presentation.
- The QEMU wiki VirtioSCSI page notes virtio-scsi is "rough numbers: 6% slower [than virtio-blk] on iozone with a tmpfs-backed disk".
- Paolo Bonzini (in 2017) thinks "long-term virtio-blk should only be used for high-performance scenarios where the guest SCSI layer slows down things sensibly."
- Proxmox recommends SCSI and states "VirtIO block may get deprecated in the future."
- vioscsi has supported discard for a long time (pre 2015, when the changelog starts?). viostor only added support for discard recently (in virtio-win/kvm-guest-drivers-windows#399 for 0.1.172-1). Although #399 is described as "preliminary support", the author clarified that it is now full support on par with vioscsi.
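A minimal virtio-scsi sketch for the domain XML (the image path and device names are placeholders):

<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/win11.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>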
Virtual Disk
Format
When choosing a format for the virtual disk, note that qcow2 supports snapshots; raw does not. However, raw is likely to have better performance due to less overhead.
Alberto Garcia added support for subcluster allocation for qcow2 images in QEMU 5.2. When using 5.2 or later, it may be prudent to create qcow2 disk images with extended_l2=on,cluster_size=128k to reduce wasted space and write amplification. Note that extended L2 always uses 32 sub-clusters, so cluster_size should be 32 times the filesystem cluster size (4k for NTFS created by the Windows installer).
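For example, such an image could be created with (name and size are placeholders):

qemu-img create -f qcow2 -o extended_l2=on,cluster_size=128k win11.qcow2 64G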
Discard
I find it generally preferable to set discard to unmap so that guest discard/trim requests are passed through to the disk image on the host filesystem, reducing its size. For Windows guests, discard/trim requests are normally only issued when Defragment and Optimize Drives is run. It is scheduled to run weekly by default.
I do not recommend enabling detect_zeroes to detect write requests with all zero bytes and optionally unmap the zeroed areas in the disk image. As the libvirt docs note: "enabling the detection is a compute intensive operation, but can save file space and/or time on slow media".
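In the domain XML, discard is a single attribute on the disk's driver element, e.g.:

<driver name='qemu' type='qcow2' discard='unmap'/>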
Discard Granularity or SSD
Jared Epp also informed me of an incompatibility between the virtio drivers and defrag in Windows 10 and 11 (virtio-win/kvm-guest-drivers-windows#666) which causes Defragment and Optimize to take a long time and write a lot of data. There are two workarounds suggested; a libvirt sketch of the first follows at the end of this subsection:
- Use a recent version of the virtio-win drivers (0.1.225-1 or later?) which includes virtio-win/kvm-guest-drivers-windows#824, and set discard_granularity to a large value (Hyper-V uses 32M). For libvirt, discard_granularity can be set using <qemu:property> on libvirt 8.2 and later, or <qemu:arg> on earlier versions, as demonstrated by Pau Rodriguez-Estivill. Note: There was a patch to add discard_granularity to <blockio>, but it was never merged, as far as I can tell.
- Emulate an SSD rather than a Thin Volume, as suggested by Pau Rodriguez-Estivill, by setting rotation_rate=1 (for SSD detection) and discard_granularity=0 (to change the MODE PAGE POLICY to "Obsolete"?). These settings were inferred from QEMU behavior. It's not clear to me why this avoids the slowness issue. For libvirt, rotation_rate can be set on <target> of <disk>. As above, discard_granularity can be set using <qemu:property> on libvirt 8.2 and later, or <qemu:arg> on earlier versions.
I am unsure which approach to recommend, although I am currently using discard_granularity=32M. Stewart Bright noted some differences between SSD and Thin Provisioning behavior in Windows guests. In particular, I'm curious how slabs and slab consolidation behave. Interested readers are encouraged to investigate further and report their findings.
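A sketch of the first workaround on libvirt 8.2 or later, using the QEMU override namespace; the ua-disk0 alias is an assumed user alias which must also be set on the disk, and 33554432 bytes corresponds to 32M:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:override>
    <qemu:device alias='ua-disk0'>
      <qemu:frontend>
        <qemu:property name='discard_granularity' type='unsigned' value='33554432'/>
      </qemu:frontend>
    </qemu:device>
  </qemu:override>
</domain>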
Video
There are several options for graphics cards. VGA and other display devices in qemu by Gerd Hoffmann has practical descriptions and recommendations (kraxel's news is great for following progress).
virtio-drivers 0.1.208 and later include the viogpudo driver for virtio-vga. (Bug 1861229)
Unfortunately, it has some limitations:
- It is limited to height × width <= 4×1024×1024.
- It requires additional work to configure automatic resolution switching, which is not done by the installer (virtio-win/virtio-win-guest-tools-installer#32). From Bug 1923886:
  - Copy viogpuap.exe and vgpusrv.exe to a permanent location.
  - Run vgpusrv.exe -i as Administrator to register the "VioGpu Resolution Service" Windows service.
- It doesn't support Windows 7 (virtio-win/kvm-guest-drivers-windows#591).
- It is currently a WDDM Display Only Driver without support for 2-D or 3-D rendering (same as the QXL-WDDM-DOD driver for QXL). This may be added in the future with Virgil 3d, similarly to Linux guests.
- It doesn't currently provide any advantages over QXL.
However, unless the above limitations are critical for a particular use case, I would recommend virtio-vga over QXL based on the understanding that it is a better and more promising approach on technical grounds and that it is where most current development effort is directed.
Warning: When using BIOS firmware, the video device should be connected to the PCI Express Root Complex (i.e. <address type='pci' bus='0x00'>) in order to access the VESA BIOS Extensions (VBE) registers. Without VBE modes, the Windows installer is limited to grayscale at 640×480, which is not pleasant.
Note that QEMU and libvirt connect video devices to the Root Complex by default, so no additional configuration is required. However, if a second video device is added using virt-manager or virt-xml, it is connected to a Root Port or PCIe-to-PCI bridge, which creates problems if the first device is removed (virt-manager/virt-manager#402).
Note: If 3D acceleration is enabled for virtio-vga, the VM must have a Spice display device with OpenGL enabled to avoid an "opengl is not available" error when the VM is started. Since the viogpudo driver does not support 3D acceleration, I recommend disabling both.
Keyboard and Mouse
I recommend adding a "Virtio Keyboard" and "Virtio Tablet" device in addition to the default USB or PS/2 keyboard and mouse devices. These are "basically sending linux evdev events over virtio", which can be useful for a keyboard or mouse with special features (e.g. keys/buttons not supported by PS/2), and possibly also a latency or performance advantage.
Note that it is not necessary to remove the USB or PS/2 devices, since QEMU will route input events to virtio-input devices once they have been initialized by the guest. Virtio input devices are not supported without drivers, which can make setup and recovery more difficult if the PS/2 devices are not present.
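In the domain XML these are one line each:

<input type='keyboard' bus='virtio'/>
<input type='tablet' bus='virtio'/>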
TPM
Windows 11 requires TPM 2.0. Therefore, I recommend adding a QEMU TPM Device to provide one. Either TIS or CRB can be used. "TPM CRB interface is a simpler interface than the TPM TIS and is only available for TPM 2."
If emulated, swtpm must be installed and configured on the host. Note: swtpm was packaged for Debian in 2022 (Bug 941199), so it is not available in Debian 11 (Bullseye) or earlier releases.
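A minimal emulated TPM 2.0 sketch for the domain XML, assuming swtpm is installed on the host:

<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>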
RNG
It may be useful to add a virtio-rng device to provide entropy to the guest. This is particularly true if the vCPU does not support the RDRAND instruction, or if it is not trusted.
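A common sketch, feeding the guest from the host's /dev/urandom:

<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>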
File/Folder Sharing
There are several options for sharing files between the host and guest with
various trade-offs. Some common options are discussed below. My
recommendation is to use SMB/CIFS unless you need the feature or performance
offered by virtio-fs (and like living on the bleeding edge).
Virtio-fs
Libvirt supports sharing virtual filesystems using a protocol similar to FUSE over virtio. It is a great option if the host and guest can support it (QEMU 5.0, libvirt 6.2, Linux 5.4, Windows virtio-drivers 0.1.187). It has very high performance and supports many of the filesystem features and behaviors of a local filesystem. Unfortunately, it has several significant issues, including configuration difficulty, lack of support for migration or snapshots, and Windows driver issues, each explained below:
Virtio-fs requires shared memory between the host and guest, which in turn requires configuring a (virtual) NUMA topology with shared memory backing: see Sharing files with Virtio-FS. Also ensure you are using a version of libvirt which includes the apparmor policy patch to allow libvirtd to call virtiofsd (6.7.0 or later).
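A rough sketch of the pieces involved (directory, tag, CPU range, and memory size are placeholders; see the linked guide for a complete configuration):

<cpu>
  <numa>
    <cell id='0' cpus='0-3' memory='4' unit='GiB' memAccess='shared'/>
  </numa>
</cpu>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/srv/share'/>
  <target dir='hostshare'/>
</filesystem>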
Migration with a virtiofs device is not supported by libvirt, which also prevents saving and creating snapshots while the VM is running. This is difficult to work around, since live detach of device 'filesystem' is not supported by libvirt for QEMU.
The Windows driver has been released with several severe known bugs, such as:
- Can't copy files larger than 2MiB (fixed in 0.1.190)
- Can't remove empty directories (fixed in 0.1.190)
- Symlinks appear as files in the Windows guest
- Doesn't work with iommu_platform=on
My offer to assist with adding tests (virtio-win/kvm-guest-drivers-windows#531) has seen very little interest or action. It's not clear to me who's working on virtio-fs and how much interest it has at the moment.
Virtio-9p
Although it is not an option for Windows guests due to lack of a driver (virtio-win/kvm-guest-drivers-windows#126), it's worth noting that virtio-9p is similar to virtio-fs, except that it uses the 9P distributed file system protocol, which is supported by older versions of Linux and QEMU and has the advantage of being used and supported outside of virtualization contexts. For a comparison of virtio-fs and virtio-9p, see the virtio-fs patchset on LKML.
SPICE Folder Sharing (WebDAV)
SPICE Folder Sharing is a relatively easy way to share directories from the host to the guest using the WebDAV protocol over the org.spice-space.webdav.0 virtio channel. Many libvirt viewers (remote-viewer, virt-viewer, GNOME Boxes) provide built-in support. Although virt-manager does not (virt-manager/virt-manager#156), it can be used to configure folder sharing (by adding an org.spice-space.webdav.0 channel), with other viewers used for running the VM and serving files. Note that users have reported that performance is not great, and the SPICE WebDAV Daemon must be installed in the guest to share files.
SMB/CIFS
Since Windows supports SMB/CIFS (aka the "Windows File Sharing Protocol") natively, it is relatively easy to share files between the host and guest if networking is configured on the guest. Either the host (with Samba or KSMBD) or the guest can act as the server. For a Linux server, see Setting up Samba as a Standalone Server. For Windows, see File sharing over a network in Windows 10.
Be aware that, depending on the network topology, file shares may be exposed to other hosts on the network. Be sure to adjust the server configuration and add firewall rules as appropriate.
Channels
I recommend adding the following Channel Devices (a domain XML sketch follows this list):
- com.redhat.spice.0 (spicevmc) for the SPICE Agent
- org.qemu.guest_agent.0 (unix) for the QEMU Guest Agent
- org.spice-space.webdav.0 (spiceport) for SPICE Folder Sharing (WebDAV), if using.
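A sketch of all three in the domain XML:

<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
<channel type='spiceport'>
  <source channel='org.spice-space.webdav.0'/>
  <target type='virtio' name='org.spice-space.webdav.0'/>
</channel>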
Notes
There are some differences between the "legacy" 0.9/0.95 version of the virtio protocol and the "modern" 1.0 version. Recent versions (post-2016) of QEMU and libvirt use 1.0 by default. For older versions, it may be necessary to specify disable-legacy=on,disable-modern=off to force the modern version. For details and steps to confirm which version is being used, see Virtio 1.0 and Windows Guests.
Guest OS Installation
I recommend configuring the guest with two SATA CD-ROM devices during installation: one for the Windows 10 ISO or Windows 11 ISO, and one for the virtio-win ISO.
At the "Where would you like to install Windows?" screen, click "Load Driver", then select the appropriate driver, as described in How to Install virtio Drivers on KVM-QEMU Windows Virtual Machines.
Bypass Hardware Checks
If the guest does not satisfy the Windows 11 System Requirements, you can bypass the checks (a command-line equivalent follows the warning below):
- Press Shift-F10 to open Command Prompt.
- Run regedit.
- Create key HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig with one or more of the following DWORD values:
  - BypassRAMCheck set to 1 to skip memory size checks.
  - BypassSecureBootCheck set to 1 to skip Secure Boot checks.
  - BypassTPMCheck set to 1 to skip TPM 2.0 checks.
- Close regedit.
- If the "This PC can't run Windows 11" screen is displayed, press the back button.
- Proceed with installation as normal.
Be aware that Windows 11 is not supported in this scenario and doing so may
prevent some features from working.
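Equivalently, the values can be created from the same Command Prompt without opening regedit; a sketch, adding only the bypasses you need:

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1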
virtio-win Drivers
Drivers for VirtIO devices can be installed by running the virtio-win-drivers-installer, virtio-win-gt-x64.msi (Source), available on the virtio-win ISO, or by using Device Manager to search for device drivers on the virtio-win ISO.
The memory ballooning service is installed by virtio-win-drivers-installer. To install it manually (for troubleshooting or other purposes):
- Copy blnsrv.exe from virtio-win.iso to somewhere permanent (since the install command defines the service using the current location of the exe).
- Run blnsrv.exe -i as Administrator.
- Reboot (necessary, per Bug 612801).
Note that the virtio-win-drivers-installer does not currently support Windows 11/Server 2022 (Bug 1995479). However, it appears to work correctly for me. It also does not support Windows 7 and earlier (#9). For these systems, the drivers must be installed manually.
virtio-fs
To use virtio-fs for file sharing, in addition to installing the viofs driver, complete the following steps (based on a comment by @FailSpy):
- Install WinFSP.
- Copy winfsp-x64.dll from C:\Program Files (x86)\WinFSP\bin to C:\Program Files\Virtio-Win\VioFS.
- Ensure the VirtioFSService created by virtio-win-drivers-installer is stopped and has Startup Type: Manual or Disabled. (Enabling this service would work, but would make shared files only accessible to elevated processes.)
- Create a scheduled task to run virtiofs.exe at logon using the following PowerShell:

$action = New-ScheduledTaskAction -Execute 'C:\Program Files\Virtio-Win\VioFS\virtiofs.exe'
$trigger = New-ScheduledTaskTrigger -AtLogon
$principal = New-ScheduledTaskPrincipal 'NT AUTHORITY\SYSTEM'
$settings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -ExecutionTimeLimit 0
$task = New-ScheduledTask -Action $action -Principal $principal -Trigger $trigger -Settings $settings
Register-ScheduledTask Virtio-FS -InputObject $task
QEMU Guest Agent
The QEMU Guest Agent can be used to coordinate snapshot, suspend, and shutdown operations with the guest, including post-resume RTC synchronization. Install it by running qemu-ga-x86_64.msi (available in the guest-agent directory of the virtio-win ISO).
QXL Driver
If the virtual machine is configured with QXL graphics instead of virtio-vga, as discussed in the Video section, a QXL driver should be installed. For Windows 8 and later, install the QXL-WDDM-DOD driver (Source). On Windows 7 and earlier, the QXL driver (Source) can be used. The driver can be installed from the linked MSI, or from the qxldod/qxl directory of the virtio-win ISO.
SPICE Agent
For clipboard sharing and display size changes, install the SPICE Agent (Source). Note: Some users have reported problems on Windows 11 (spice/win32#11). However, it has been working without issue for me.
SPICE WebDAV Daemon
To use SPICE folder sharing, install the SPICE WebDAV daemon (Source).
SPICE Guest Tools
Instead of installing the drivers/agents separately, you may prefer to install the SPICE Guest Tools (Source), which bundles the virtio-win drivers, QXL driver, and SPICE Agent into a single installer. Warning: It does not include the QEMU Guest Agent and is several years out of date at the time of this writing (last updated 2018-01-04 as of 2021-12-05).
QEMU Guest Tools
Another alternative to installing drivers/agents separately is to install the QEMU Guest Tools (Source), which bundles the virtio-win drivers, QXL driver, SPICE Agent, and QEMU Guest Agent into a single installer. virtio-win-guest-tools.exe is available in the virtio-win ISO.
Post-Installation Tasks
Remove CD-ROMs
Once Windows is installed, one or both CD-ROM drives can be removed. If both
are removed, the SATA Controller may also be removed.
virtio-scsi CD-ROM
For a low-overhead CD-ROM drive, a virtio-scsi drive can be added by adding a VirtIO SCSI controller (if one is not already present), then a CD-ROM on the SCSI bus.
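A sketch of such a CD-ROM in the domain XML (the ISO path is a placeholder):

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/virtio-win.iso'/>
  <target dev='sdb' bus='scsi'/>
  <readonly/>
</disk>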
Defragment and Optimize Drives
If discard was enabled for the virtual disk, Defragment and Optimize Drives in the Windows guest should show the drive with media type "Thin provisioned drive" (or "SSD", if configured with rotation_rate=1). It may be useful to configure a disk optimization schedule to trim/discard unused space in the disk image.
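A retrim can also be triggered manually from an elevated prompt in the guest; defrag's /L flag requests a retrim:

defrag C: /L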
Additional Resources
- QEMU: Preparing a Windows Guest on ArchWiki
- libvirt: Domain XML format
- Tuning KVM
ChangeLog
2022-10-22
- Expand discussion of the defrag issue and add the new discard_granularity=32M workaround based on updates in virtio-win/kvm-guest-drivers-windows#666. Move it to a new subsection of Discard.
2022-06-12
- Add a link to virt-manager/virt-manager#402 in the VBE warning in the Video section.
2022-06-02
- Add a warning to Video about missing VBE modes when the video device is connected to a PCIe Root Port rather than the Root Complex.
- Improve discussion of UEFI firmware images.
- Add a note about -boot order=, bootindex, and UEFI.
2022-05-09
- Add a link to Chris Siebenmann's post about workarounds for snapshots of libvirt-based VMs.
- Note that swtpm is now packaged for Debian.
2022-05-06
- Discuss Windows licensing limits on sockets in the CPU Topology section, thanks to Jared Epp.
- Discuss slow operation and excessive writes performed by defrag on Windows 10 and 11, also thanks to Jared Epp.
- Add a Memory Size section to note minimum and maximum size limits for different Windows editions.
- Add a quote from Paolo Bonzini about virtio-blk use for high performance.
2022-03-19
- Fix broken link to my example libvirt domain XML. Thanks to Peter Greenwood for notifying me.
- Rewrite the "Wayland Keyboard Inhibit" section to improve clarity.
2022-01-13
- Recommend virtio-vga with the viogpudo driver instead of QXL with the qxldod or qxl driver.