From: Igor Mammedov <imammedo@redhat.com>
To: Laszlo Ersek <lersek@redhat.com>
Cc: "Yao, Jiewen" <jiewen.yao@intel.com>,
"Kinney, Michael D" <michael.d.kinney@intel.com>,
Paolo Bonzini <pbonzini@redhat.com>,
"rfc@edk2.groups.io" <rfc@edk2.groups.io>,
Alex Williamson <alex.williamson@redhat.com>,
"devel@edk2.groups.io" <devel@edk2.groups.io>,
qemu devel list <qemu-devel@nongnu.org>,
"Chen, Yingwen" <yingwen.chen@intel.com>,
"Nakajima, Jun" <jun.nakajima@intel.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Joao Marcal Lemos Martins <joao.m.martins@oracle.com>,
Phillip Goerl <phillip.goerl@oracle.com>
Subject: Re: [edk2-rfc] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
Date: Fri, 30 Aug 2019 16:48:02 +0200
Message-ID: <20190830164802.1b17ff26@redhat.com>
In-Reply-To: <033ced1a-1399-968e-cce6-6b15a20b0baf@redhat.com>
On Thu, 29 Aug 2019 19:01:35 +0200
Laszlo Ersek <lersek@redhat.com> wrote:
> On 08/27/19 20:31, Igor Mammedov wrote:
> > On Sat, 24 Aug 2019 01:48:09 +0000
> > "Yao, Jiewen" <jiewen.yao@intel.com> wrote:
>
> >> (05) Host CPU: (OS) Port 0xB2 write, all CPUs enter SMM (NOTE: New CPU
> >> will not enter SMM because SMI is disabled)
> > I think only the CPU that does the write will enter SMM
>
> That used to be the case (and it is still the default QEMU behavior, if
> broadcast SMI is not negotiated). However, OVMF does negotiate broadcast
> SMI whenever QEMU offers the feature. Broadcast SMI is important for the
> stability of the edk2 SMM infrastructure on QEMU/KVM, we've found.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1412313
> https://bugzilla.redhat.com/show_bug.cgi?id=1412327
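(Side note on the negotiation mentioned above: it goes through QEMU's fw_cfg
SMI feature files. A rough sketch of the guest side follows; the fw_cfg
helpers are placeholders rather than a real API, and the file names and the
bit position are quoted from memory, so treat them as assumptions.)

  #include <stdbool.h>
  #include <stdint.h>

  #define SMI_FEATURE_BROADCAST (1ULL << 0)       /* assumed: bit 0 */

  /* placeholder fw_cfg accessors -- not a real API */
  uint64_t fw_cfg_read_u64(const char *path);
  void     fw_cfg_write_u64(const char *path, uint64_t value);
  uint8_t  fw_cfg_read_u8(const char *path);

  /* returns true if QEMU accepted (and locked) broadcast SMI */
  static bool negotiate_broadcast_smi(void)
  {
      uint64_t supported = fw_cfg_read_u64("etc/smi/supported-features");

      if (!(supported & SMI_FEATURE_BROADCAST)) {
          return false;                            /* old QEMU: unicast only */
      }
      fw_cfg_write_u64("etc/smi/requested-features", SMI_FEATURE_BROADCAST);
      /* reading 1 back means the request was accepted and locked in */
      return fw_cfg_read_u8("etc/smi/features-ok") == 1;
  }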
>
> > and we might not need to pull in all already initialized CPUs into SMM.
>
> That, on the other hand, could be a valid idea. But then the CPU should
> use a different method for raising a synchronous SMI for itself (not a
> write to IO port 0xB2). Is a "directed SMI for self" possible?
Theoretically, depending on the argument written to port 0xB3, it should be
possible to raise a directed SMI even if broadcast SMIs are negotiated.
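(To make that concrete, a minimal sketch of what the guest-side sequence
could look like; the 0xB3 "self only" encoding below is an assumed
extension, not something QEMU implements today.)

  #include <stdint.h>

  static inline void outb(uint16_t port, uint8_t val)
  {
      __asm__ __volatile__ ("outb %0, %1" : : "a" (val), "Nd" (port));
  }

  #define APM_STS_PORT 0xB3   /* "data" byte, readable by the SMI handler   */
  #define APM_CNT_PORT 0xB2   /* the write to this port triggers the SMI    */
  #define APM_STS_SELF 0x01   /* assumed encoding: SMI for calling CPU only */

  static void raise_smi_for_self(uint8_t cmd)
  {
      outb(APM_STS_PORT, APM_STS_SELF); /* hint QEMU: do not broadcast */
      outb(APM_CNT_PORT, cmd);          /* synchronous SMI is taken here */
  }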
> > [...]
>
> I've tried to read through the procedure with your suggested changes,
> but I'm failing at composing a coherent mental image, in this email
> response format.
>
> If you have the time, can you write up the suggested list of steps in a
> "flat" format? (I believe you are suggesting to eliminate some steps
> completely.)
If I sum it up:

(01) On boot, firmware maps and initializes an SMI handler at the default SMBASE (0x30000)
     (using dedicated SMRAM at 0x30000 would let us avoid the save/restore
     steps and make the SMM handler pointer not vulnerable to DMA attacks)
(02) QEMU hotplugs a new CPU in the reset state and sends an SCI
(03) on receiving the SCI, the host CPU calls the GPE CPU hotplug handler,
     which writes to IO port 0xB2 (broadcast SMI)
(04) firmware waits for all existing CPUs to rendezvous in SMM mode;
     the new CPU(s) have an SMI pending but do nothing yet
(05) the host CPU wakes up one new CPU (INIT-SIPI-SIPI), with the
     SIPI vector pointing to an HLT loop in read-only flash
     (how will the host CPU know which new CPUs to relocate?
     possibly by reusing the QEMU CPU hotplug MMIO interface?
     see the sketch after this list)
(06) the new CPU does the relocation
     (if an attacker sends SIPIs to several new CPUs, it is an open question
     how to detect the collision of several CPUs at the same default SMBASE)
(07) once the new CPU is relocated, the host CPU completes initialization,
     returns from the IO port write, and executes the rest of the GPE handler,
     telling the OS to online the new CPU.
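For step (05), a minimal sketch of how the host CPU could wake one new CPU
with INIT + Startup IPI through the local xAPIC. How the firmware learns the
new CPU's APIC ID (the open question in step 05) is left out, and the second
SIPI of the classic INIT-SIPI-SIPI sequence is omitted for brevity.

  #include <stdint.h>

  #define LAPIC_BASE   0xFEE00000u               /* xAPIC MMIO base */
  #define LAPIC_ICR_LO (LAPIC_BASE + 0x300)
  #define LAPIC_ICR_HI (LAPIC_BASE + 0x310)

  static inline void mmio_write32(uintptr_t addr, uint32_t val)
  {
      *(volatile uint32_t *)addr = val;
  }

  static inline uint32_t mmio_read32(uintptr_t addr)
  {
      return *(volatile uint32_t *)addr;
  }

  static void wait_icr_idle(void)
  {
      while (mmio_read32(LAPIC_ICR_LO) & (1u << 12)) {
          /* previous IPI still being delivered */
      }
  }

  /* SIPI vector 0xVV makes the target start executing at physical 0x000VV000 */
  static void wake_new_cpu(uint8_t apic_id, uint8_t sipi_vector)
  {
      mmio_write32(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
      mmio_write32(LAPIC_ICR_LO, 0x00004500);               /* INIT, assert */
      wait_icr_idle();

      mmio_write32(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
      mmio_write32(LAPIC_ICR_LO, 0x00004600 | sipi_vector); /* Startup IPI */
      wait_icr_idle();
  }

E.g. wake_new_cpu(new_apic_id, 0xF0) would start the new CPU at 0xF0000;
whether that address really holds a read-only flash HLT loop on QEMU is an
assumption here, and the exact vector is a placeholder.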
> ... jumping to another point:
>
> >> 2) Let trusted software (SMM and init code) guarantee SMREBASE one by one (including any code that runs before SMREBASE)
> > that would mean pulling all present CPUs into SMM mode so no attack
> > code could be executing before doing hotplug. With a lot of present CPUs
> > it could be quite expensive, and unlike physical hardware, the guest's CPUs
> > could be preempted for arbitrarily long, causing long delays.
>
> I agree with your analysis, but I slightly disagree about the impact:
>
> - CPU hotplug is not a frequent administrative action, so the CPU load
> should be temporary (it should be a spike). I don't worry that it would
> trip up OS kernel code. (SMI handling is known to take long on physical
> platforms too.) In practice, all "normal" SMIs are broadcast already (for
> example when calling the runtime UEFI variable services from the OS kernel).
>
> - The fact that QEMU/KVM introduces some jitter into the execution of
> multi-core code (including SMM code) has proved useful in the past, for
> catching edk2 regressions.
>
> Again, this is not a strong disagreement from my side. I'm open to
> better ways of synchronizing CPUs during multi-CPU hotplug.
>
> (Digression:
>
> I expect someone could be curious why (a) I find it acceptable (even
> beneficial) that "some jitter" injected by the QEMU/KVM scheduling
> exposes multi-core regressions in edk2, but at the same time (b) I found
> it really important to add broadcast SMI to QEMU and OVMF. After all,
> both "jitter" and "unicast SMIs" are QEMU/KVM platform specifics, so why
> the different treatment?
>
> The reason is that the "jitter" does not interfere with normal
> operation, and it has been good for catching *regressions*. IOW, there
> is a working edk2 state, someone posts a patch, works on physical
> hardware, but breaks on QEMU/KVM --> then we can still reject or rework
> or revert the patch. And we're back to a working state again (in the
> best case, with a fixed feature patch).
>
> With the unicast SMIs however, it was impossible to enable the SMM stack
> reliably in the first place. There was no functional state to return to.
I don't really get the last statement, but then I know nothing about OVMF.
I don't insist on unicast SMIs being used; these are just some ideas about
what we could do. It could be done later; broadcast SMI (which might not be
the best option) is sufficient to implement CPU hotplug.
> Digression ends.)
>
> > let's first see if we can ignore the race
>
> Makes me uncomfortable, but if this is the consensus, I'll go along.
Same here. As mentioned in another reply, it's only possible in the attack
case (multiple SMIs + multiple SIPIs), so it could be fine to just explode
if it happens (the point being that the firmware is not leaking anything
from SMRAM and the OS did something illegal).
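One way to make that "explode" deterministic and non-leaky could be for the
relocation handler at the default SMBASE to claim itself atomically and park
any second CPU that shows up; just a sketch of the idea, not existing
firmware code.

  #include <stdbool.h>

  /* lives in the SMRAM page at the default SMBASE, zeroed at setup */
  static volatile unsigned char relocation_claimed;

  /* returns true only for the first CPU that gets here */
  static bool claim_relocation_slot(void)
  {
      return !__atomic_test_and_set((void *)&relocation_claimed,
                                    __ATOMIC_ACQ_REL);
  }

  void default_smbase_smi_handler(void)
  {
      if (!claim_relocation_slot()) {
          /* collision: only reachable via a malicious extra SIPI,
           * so park forever without touching any SMRAM state */
          for (;;) {
              __asm__ __volatile__ ("pause");
          }
      }
      /* ... the single legitimate new CPU proceeds with SMBASE relocation ... */
  }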
> > and if it's not then
> > we probably end up with implementing some form of #1
>
> OK.
>
> Thanks!
> Laszlo