From: "Laszlo Ersek" <lersek@redhat.com>
To: "Wu, Jiaxin" <jiaxin.wu@intel.com>
Cc: "devel@edk2.groups.io" <devel@edk2.groups.io>
Subject: Re: [edk2-devel] [PATCH 00/16] OvmfPkg: support VCPU hotplug with -D SMM_REQUIRE
Date: Fri, 24 Jul 2020 18:01:37 +0200
Message-ID: <2679d4af-4034-525e-8189-2d795794ced1@redhat.com>
In-Reply-To: <DM5PR11MB1372A9D151062EC43A72233CFE770@DM5PR11MB1372.namprd11.prod.outlook.com>
On 07/24/20 08:26, Wu, Jiaxin wrote:
> Hi Laszlo,
>
> It looks like OVMF supports CPU hotplug with this patch series.
>
> Could you provide some guidance on how to enable and verify CPU
> hotplug with OVMF? Is there a general introduction to how the
> workflow works? For example, how is hot-added CPU initialization done
> (e.g. register setting / microcode update, etc.)? I'm very interested
> in this feature on OVMF.
Long version:
-------------
(1) There are three pieces missing:
(1a) The QEMU side changes for the ACPI (DSDT) content that QEMU
generates for the OS.
The ACPI GPE handler for CPU hotplug is being modified by my colleague
Igor Mammedov to raise the SMI (command value 4) on CPU hotplug.
For developing the OVMF series for TianoCore#1512 (which is now merged),
I used a prototype QEMU patch from Igor. That patch is not suitable for
upstreaming to QEMU, however, so Igor is now developing the real
patches for QEMU's ACPI generator.
(1b) The related feature negotiation patches in QEMU.
In order for "CPU hotplug with SMM" to work, both OVMF and QEMU need to
perform specific things. In order to deal with cross-version
compatibility problems, the "CPU hotplug with SMI" feature is
dynamically negotiated between OVMF and QEMU. For this negotiation, both
QEMU and OVMF need additional patches. These patches are not related to
the actual plugging activities; instead they control whether plugging is
permitted at all, or not.
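(As an aside, once a patched QEMU and OVMF are running, you can peek at
the outcome of this negotiation from inside a Linux guest. A rough
sketch, under the assumption that the SMI feature bitmaps are exposed
through the fw_cfg items "etc/smi/supported-features" and
"etc/smi/requested-features" -- which is what OvmfPkg/SmmControl2Dxe's
SmiFeatures code consumes; treat the exact item names and layout as
assumptions:
  # inside the Linux guest; needs the qemu_fw_cfg kernel module
  modprobe qemu_fw_cfg
  cd /sys/firmware/qemu_fw_cfg/by_name
  xxd etc/smi/supported-features/raw   # SMI features offered by QEMU
  xxd etc/smi/requested-features/raw   # SMI features the firmware negotiated
If the CPU hotplug feature bit shows up in both bitmaps, the
negotiation succeeded.)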
Igor's QEMU series covers both purposes (1a) and (1b). It's work in
progress. The first posting was an RFC series:
(1b1) [RFC 0/3] x86: fix cpu hotplug with secure boot
https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg03746.html
http://mid.mail-archive.com/20200710161704.309824-1-imammedo@redhat.com
The latest posting is a PATCH series:
(1b2) [qemu-devel] [PATCH 0/6] x86: fix cpu hotplug with secure boot
https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg05850.html
http://mid.mail-archive.com/20200720141610.574308-1-imammedo@redhat.com
(1c) The feature negotiation patch for OVMF is here:
* [edk2-devel] [PATCH] OvmfPkg/SmmControl2Dxe: negotiate ICH9_LPC_SMI_F_CPU_HOTPLUG
https://edk2.groups.io/g/devel/message/62561
http://mid.mail-archive.com/20200714184305.9814-1-lersek@redhat.com
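To put (1b) and (1c) together in practice, here is a minimal build
sketch, assuming you have saved Igor's QEMU series and my OVMF patch
locally as mbox/patch files (the file paths and the GCC5 toolchain tag
are just placeholders):
  # QEMU side -- in a QEMU git checkout, on top of current master:
  git am /path/to/qemu-cpu-hotplug-series.mbox
  ./configure --target-list=x86_64-softmmu
  make -j"$(nproc)"

  # OVMF side -- in an edk2 git checkout:
  git am /path/to/ovmf-smi-negotiation.patch
  source edksetup.sh
  make -C BaseTools
  build -a IA32 -a X64 -p OvmfPkg/OvmfPkgIa32X64.dsc \
        -t GCC5 -b DEBUG -D SMM_REQUIRE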
(2) No special register settings or microcode updates are needed for
hot-added CPUs.
(3) As I mentioned before, I strongly suggest using QEMU and OVMF with
libvirt. I wrote an article about that here:
https://github.com/tianocore/tianocore.github.io/wiki/Testing-SMM-with-QEMU,-KVM-and-libvirt
The article is aimed specifically at "Windows-based" developers: it is
written from the perspective that you don't need a personal Linux
workstation, only a single Linux workstation *per team*. So you can
keep using a Windows workstation; just set up one Linux box for your
team (if you don't have one yet).
This article remains relevant.
(3a) To set up a guest for VCPU hotplug, initially just go through the
article as written.
(3b) Once you're done with that, power down the guest, and modify the
domain XML as follows:
virsh edit <DOMAIN_NAME>
(3b1) replace the "pc-q35-2.9" machine type with "pc-q35-5.1"
(3b2) replace the following stanza:
  <vcpu placement='static'>4</vcpu>
with:
  <vcpu placement='static' current='2'>4</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='no' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
  </vcpus>
This will create a VCPU topology where:
- CPU#0 is present up-front, and is not hot-pluggable (this is a QEMU
requirement),
- CPU#1, CPU#2, and CPU#3 are hot-pluggable,
- CPU#2 is present up-front ("cold-plugged"), while CPU#1 and CPU#3 are
absent initially.
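You can cross-check this topology from the host at any time with
libvirt's query commands; a small sketch (the domain name is a
placeholder):
  virsh vcpucount <DOMAIN_NAME>                     # maximum vs. current VCPUs
  virsh dumpxml <DOMAIN_NAME> | grep -A 6 '<vcpus>' # per-VCPU enabled/hotpluggable state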
(4) Boot the guest. Once you have a root prompt in the guest, you can
use one of two libvirt commands for hot-plugging a CPU:
(4a) the singular "virsh setvcpu" command:
virsh setvcpu <DOMAIN_NAME> <PROCESSOR_ID> --enable --live
where you can pass in 1 or 3 for <PROCESSOR_ID>.
This command lets you specify the precise ID of the processor to be
hot-plugged; IOW, the command lets you control topology.
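For instance, with the topology from (3b2) and a made-up domain name of
"ovmf-guest", hot-plugging VCPU#1 and checking the result could look
like this (depending on the guest's udev policy, you may also need step
(4c) below before the new CPU shows up as online):
  # on the host:
  virsh setvcpu ovmf-guest 1 --enable --live
  virsh vcpucount ovmf-guest --live --active
  # in the guest:
  grep -c ^processor /proc/cpuinfo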
(4b) the plural "virsh setvcpus" command:
virsh setvcpus <DOMAIN_NAME> <PROCESSOR_COUNT> --live
This command lets you specify the desired number of total active CPUs.
It does not let you control topology. (My understanding is that it keeps
the topology populated at the "front".)
Regarding the current QEMU status, we need to do more work to support
(4b). The RFC series (1b1) enables (4a) to work. The PATCH series (1b2)
was intended to make (4b) work as well, but unfortunately it broke even
(4a). So now we need at least one more version of the QEMU series (I've
given my feedback to Igor already, on qemu-devel).
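Once the QEMU side is sorted out, using the plural form would simply
mean filling in the desired total, for example (domain name made up
again):
  virsh setvcpus ovmf-guest 4 --live
  virsh vcpucount ovmf-guest --live --active   # should then report 4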
(4c) Depending on the guest OS configuration, you might have to
manually online the newly plugged CPUs in the guest:
echo 1 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu3/online
Note that the "cpuN" identifiers seen here are *neither* APIC IDs *nor*
the same IDs as seen in the libvirt domain XML. Instead, these IDs are
assigned in the order the Linux kernel learns about the CPUs (if I
understand correctly).
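To see which logical CPUs the guest kernel currently considers offline
before onlining them, something like this works:
  cat /sys/devices/system/cpu/offline   # e.g. "2-3" right after the hot-add
  lscpu --extended                      # ONLINE column per logical CPU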
Short version:
--------------
- apply (1b1) on top of latest QEMU master from git, and build and
install it,
- apply (1c) on latest edk2, and build OVMF with "-D SMM_REQUIRE",
- install a Linux guest on a Linux host (using KVM!) as described in my
Wiki article (3),
- modify the domain XML for the guest as described in (3b),
- use the singular "virsh setvcpu" command (4a) for hot-plugging VCPU#1
and/or VCPU#3,
- if necessary, use (4c) in the guest.
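As a final sanity check, you can ask QEMU (through libvirt) which CPUs
it considers hot-pluggable and which are currently plugged; a sketch,
again with a placeholder domain name:
  virsh qemu-monitor-command <DOMAIN_NAME> --hmp 'info hotpluggable-cpus'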
You can do the same with Windows Server guests as well, although I'm
not exactly sure which versions support CPU hotplug. For testing I've
used Windows Server 2012 R2. The Wiki article at (3) has a section
dedicated to installing Windows guests too.
Thanks,
Laszlo