KVM

How to make Windows VM guest recognize and run on more than 2 cores under KVM

Windows guests under KVM often fail to recognize all of the available cores and usually detect only 2. This happens because KVM exposes the virtual CPU cores as physical CPUs (sockets). So if the physical host running KVM has 2 CPUs with 4 cores each (8 cores in total) and the guest is configured for 8 CPUs, Windows will see 8 physical CPUs but run on only 2, due to the hard-coded socket limits in some Windows editions.

To make Windows use all available cores, we need to configure the guest to expose the CPUs as cores and not as physical CPUs (sockets):

In virt-manager:

  1. Open the guest configuration screen
  2. Select Processor options tab
  3. Expand the “Topology” setting
  4. Set the sockets to 2
  5. Set the cores to 4 (for the guest to have a total of 8 cores) or 3 (for the guest to have a total of 6 cores).
  6. You can also expand the “Configuration” settings and click “Copy host CPU configuration” to let the guest fully use the physical host’s CPU capabilities.

If you run the guest from the command line, the QEMU/KVM option for setting the CPU topology is: -smp 8,sockets=2,cores=4
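For libvirt-managed guests, the same topology can be set in the domain XML (opened with `virsh edit <guest>`). A minimal sketch matching the 2 sockets × 4 cores layout above:

```xml
<vcpu>8</vcpu>
<cpu>
  <!-- 2 sockets x 4 cores x 1 thread = 8 vCPUs, seen by Windows as 2 CPUs -->
  <topology sockets="2" cores="4" threads="1"/>
</cpu>
```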

KVM Backup and Restore Tutorial

KVM (Kernel-based Virtual Machine) is a genuinely solid virtualization platform, well suited to development and testing as well as production deployments; even GCP's Compute Engine is built on KVM. Guests can run Linux, FreeBSD, Solaris, and even Microsoft Windows.

Of course, once it is running in production, backups become essential!

KVM Backup Procedure

First, list the virtual machines to back up

ubuntu@host:/$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     kudocker                       running
 2     nextcloud                      running

Stop the nextcloud VM

ubuntu@host:/$ virsh shutdown nextcloud
Domain nextcloud is being shutdown

Next, dump the VM's definition to an XML file

ubuntu@host:/$ virsh dumpxml nextcloud > /kvm_backup/nextcloud.xml
ubuntu@host:/$ ll /kvm_backup/nextcloud.xml
-rw-rw-r-- 1 ubuntu ubuntu 4943 Jan 19 16:40 /kvm_backup/nextcloud.xml

Then copy out the disk image

sudo cp /var/lib/libvirt/images/nextcloud.qcow2 /kvm_backup

That's it for the backup!
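The backup steps above can be sketched as a small POSIX shell function. The `run` wrapper is a hypothetical helper that only echoes the commands when DRY_RUN=1, so the sequence can be inspected without a live libvirt host:

```shell
# Echo commands instead of executing them when DRY_RUN=1 (dry-run sketch).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# Back up one VM: shut it down, dump its XML, copy its disk image.
# Assumes the default libvirt image path used in this article.
backup_vm() {
  vm="$1"; dest="$2"
  run virsh shutdown "$vm"
  run virsh dumpxml "$vm"     # in a live run, redirect this to "$dest/$vm.xml"
  run sudo cp "/var/lib/libvirt/images/$vm.qcow2" "$dest/"
}

# Print the command sequence for the nextcloud VM without touching libvirt.
DRY_RUN=1 backup_vm nextcloud /kvm_backup
```

In a real run you would also wait for `virsh domstate` to report "shut off" before copying the image, so the qcow2 file is consistent.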

KVM Restore Procedure

virsh undefine nextcloud
sudo cp /kvm_backup/nextcloud.qcow2 /var/lib/libvirt/images
virsh define /kvm_backup/nextcloud.xml
virsh start nextcloud
  • First undefine the original VM
  • Restore the image file
  • Reload the definition file
  • Start the VM
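The restore steps can be sketched the same way, using the same hypothetical dry-run `run` wrapper:

```shell
# Echo commands instead of executing them when DRY_RUN=1 (dry-run sketch).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# Restore one VM from the backup directory: undefine the old definition,
# copy the image back, re-define from the saved XML, then start it.
restore_vm() {
  vm="$1"; src="$2"
  run virsh undefine "$vm"
  run sudo cp "$src/$vm.qcow2" /var/lib/libvirt/images
  run virsh define "$src/$vm.xml"
  run virsh start "$vm"
}

# Print the command sequence for the nextcloud VM without touching libvirt.
DRY_RUN=1 restore_vm nextcloud /kvm_backup
```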

Setting Up KVM Bridge Networking on Ubuntu

Out of the box, KVM provides a NAT-style private network bridge that lets the VMs talk to the outside world, but not to the host. For real convenience, though, the guest VMs should be able to talk to the host OS directly over a bridge (for example, for pulling bulk backups); only then do you get good speed and usability.

My environment has a router at the top, with the host OS below it (static IP); the router hands out DHCP or static IPs to the guest VMs, so traffic within the LAN stays fast and the only bottleneck is the router's or switch's backplane. The results below prove it: the host is Ubuntu 18.04, the guest VM is Ubuntu 20.04, and the NIC model is virtio; with e1000 or rtl8139 these speeds would be impossible.

Confirm the basic packages

sudo apt install qemu-system-x86 qemu-utils qemu-efi ovmf libvirt-clients libvirt-daemon-system virtinst bridge-utils

Note: qemu-efi and ovmf are only needed if a guest VM has to be installed in EFI mode (e.g. Windows 10).

Disable netfilter on the bridge

Since the goal is to raise internal network throughput and lower CPU usage, turn off netfilter on the bridge. Edit or create /etc/sysctl.d/bridge.conf with the following content:

net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-arptables=0

Then add a udev rule telling the kernel that this bridge skips netfilter. Create /etc/udev/rules.d/99-bridge.rules with the following content (all on one line):

ACTION=="add", SUBSYSTEM=="module", KERNEL=="br_netfilter", RUN+="/sbin/sysctl -p /etc/sysctl.d/bridge.conf"

Remove KVM's default network interfaces

virsh net-destroy default
virsh net-undefine default

Or use the ip command; after a default install the two interfaces are virbr0 and virbr0-nic.

ip link delete virbr0 type bridge
ip link delete virbr0-nic

Create a new bridge for the KVM guest VMs

For my environment, edit or create /etc/netplan/00-installer-config.yaml with the following content:

network:                                                                                                                        
    ethernets:
        enp4s0:
            dhcp4: false
            dhcp6: false
    bridges:
        br0:
            interfaces: [ enp4s0 ]
            addresses: [192.168.1.1/24]
            gateway4: 192.168.1.254
            mtu: 1500
            nameservers:
                addresses: [127.0.0.1]
            parameters:
                stp: true
                forward-delay: 4
            dhcp4: false
            dhcp6: true
    version: 2

Then apply the configuration with netplan to bring up the network bridge

sudo netplan apply

Tell KVM that br0 is available as a network bridge

First create an XML file that spells out what to tell KVM; host-bridge.xml looks like this:

<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

Then use virsh to tell KVM to enable this network bridge

sudo virsh net-define host-bridge.xml
sudo virsh net-start host-bridge
sudo virsh net-autostart host-bridge

Check that it is active

j7@hostOS:~$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 host-bridge          active     yes           yes

After the guest OS inside KVM is up, check which network interfaces exist

j7@hostOS:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:0a brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 00:00:00:00:00:0a brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:00:00:00:00:0a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet6 2001:b011:****:****:****:****:****:d3da/64 scope global temporary dynamic 
       valid_lft 598sec preferred_lft 598sec
    inet6 fe80::2d8:61ff:fe2c:d70a/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq master br0 state UNKNOWN group default qlen 1000
    link/ether 0a:00:00:00:0a:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe77:3785/64 scope link 

The guest virtual interface used by KVM

8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq master br0 state UNKNOWN group default qlen 1000
link/ether 0a:00:00:00:0a:0a brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe77:3785/64 scope link
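To attach a guest to this bridge with the virtio NIC mentioned earlier, the guest's domain XML needs an interface section along these lines (a sketch; edit it with `virsh edit <guest>`):

```xml
<interface type="network">
  <!-- "host-bridge" is the network defined in host-bridge.xml above -->
  <source network="host-bridge"/>
  <model type="virtio"/>
</interface>
```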

Reaping the power of virtio-net

ubuntu@guestOS:~$ iperf3 -c 192.168.1.1
Connecting to host 192.168.1.1, port 5201
[  5] local 192.168.1.30 port 60208 connected to 192.168.1.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  5.35 GBytes  46.0 Gbits/sec    0   3.14 MBytes       
[  5]   1.00-2.00   sec  5.38 GBytes  46.3 Gbits/sec    0   3.14 MBytes       
[  5]   2.00-3.00   sec  5.54 GBytes  47.5 Gbits/sec    0   3.14 MBytes       
[  5]   3.00-4.00   sec  5.36 GBytes  46.0 Gbits/sec    0   3.14 MBytes       
[  5]   4.00-5.00   sec  5.49 GBytes  47.1 Gbits/sec    0   3.14 MBytes       
[  5]   5.00-6.00   sec  5.62 GBytes  48.2 Gbits/sec    0   3.14 MBytes       
[  5]   6.00-7.00   sec  5.47 GBytes  47.0 Gbits/sec    0   3.14 MBytes       
[  5]   7.00-8.00   sec  5.29 GBytes  45.4 Gbits/sec    0   3.14 MBytes       
[  5]   8.00-9.00   sec  5.46 GBytes  46.9 Gbits/sec    0   3.14 MBytes       
[  5]   9.00-10.00  sec  5.33 GBytes  45.8 Gbits/sec    0   3.14 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  54.3 GBytes  46.6 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  54.3 GBytes  46.6 Gbits/sec                  receiver

References:

Bridge Networking with KVM on Ubuntu
KVM: Creating a bridged network with NetPlan on Ubuntu bionic
How to Setup Bridge Networking with KVM on Ubuntu 20.04
