public inbox for devel@edk2.groups.io
From: "Laszlo Ersek" <lersek@redhat.com>
To: edk2-devel-groups-io <devel@edk2.groups.io>
Cc: "Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
	"Igor Mammedov" <imammedo@redhat.com>,
	"Jiewen Yao" <jiewen.yao@intel.com>,
	"Jordan Justen" <jordan.l.justen@intel.com>,
	"Michael Kinney" <michael.d.kinney@intel.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>
Subject: Re: [edk2-devel] [PATCH 12/16] OvmfPkg/CpuHotplugSmm: introduce First SMI Handler for hot-added CPUs
Date: Wed, 26 Feb 2020 22:22:57 +0100	[thread overview]
Message-ID: <2fefc402-2331-ebc9-4024-f99aa36082f0@redhat.com> (raw)
In-Reply-To: <111145fc-be3d-2a9a-a126-c14345a8a8a4@redhat.com>

On 02/24/20 10:10, Laszlo Ersek wrote:

> Overnight I managed to think up an attack, from the OS, against the
> "SmmVacated" byte (the last byte of the reserved page, i.e. the last
> byte of the Post-SMM Pen).
> 
> Here's how:
> 
> There are three CPUs being hotplugged in one SMI, CPU#1..CPU#3. The OS
> boots them all before raising the SMI (i.e. before it allows the ACPI
> GPE handler to take effect). After the first CPU (let's say CPU#1)
> returns to the OS via the RSM, the OS uses it (=CPU#1) to attack
> "SmmVacated", writing 1 to it in a loop.
> 
> Meanwhile CPU#2 and CPU#3 are still in SMM; let's say CPU#2 is
> relocating SMBASE, while CPU#3 is spinning on the APIC ID gate. And the
> SMM Monarch (CPU#0) is waiting for CPU#2 to report back in through
> "SmmVacated", from the Post-SMM Pen.
> 
> Now, the OS writes 1 to "SmmVacated" early, pretending to be CPU#2. This
> causes the SMM Monarch (CPU#0) to open the APIC ID gate for CPU#3 before
> CPU#2 actually reaches the RSM. This may cause CPU#2 and CPU#3 to both
> reach RSM with the same SMBASE value.
> 
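To make the attack concrete: here is a minimal, hypothetical C model of the single-flag handshake (the names below are illustrative, not the actual CpuHotplugSmm identifiers). Because the Monarch trusts a flag that lives in OS-writable normal RAM, it cannot distinguish the relocated CPU's write from a forged one.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model; not the real CpuHotplugSmm code. "SmmVacated" is
   the last byte of the Post-SMM Pen, i.e. it lives in normal RAM that
   the OS can write at will. */
static volatile uint8_t mSmmVacated;

/* SMM Monarch: open the APIC ID gate for the next hot-added CPU as soon
   as the flag is set -- the Monarch cannot tell whether the write came
   from the relocated CPU in the Pen or from a malicious OS on CPU#1. */
static int MonarchMayUngateNextCpu(void) {
  return mSmmVacated != 0;
}
```

In the scenario above, the OS setting this byte from CPU#1 while CPU#2 is still relocating SMBASE makes MonarchMayUngateNextCpu() return true early, which is exactly the double-SMBASE hazard described.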
> So why did I think to put SmmVacated in normal RAM (in the Post-SMM Pen
> reserved page)? Because in the following message:
> 
>   http://mid.mail-archive.com/20191004160948.72097f6c@redhat.com
>   (alt. link: <https://edk2.groups.io/g/devel/message/48475>)
> 
> Igor showed that putting "SmmVacated" in SMRAM is racy, even without a
> malicious OS. The problem is that there is no way to flip a byte in
> SMRAM *atomically* with RSM. So the CPU that has just completed SMBASE
> relocation can only flip the byte before RSM (in SMRAM) or after RSM (in
> normal RAM). In the latter case, the OS can attack SmmVacated -- but in
> the former case, we get the same race *without* any OS involvement
> (because the CPU just about to leave SMM via RSM actively allows the SMM
> Monarch to ungate the relocation code for a different CPU).
> 
> So I don't think there's a 100% safe & secure way to do this. One thing
> we could try -- I could update the code -- is to *combine* both
> approaches; use two "SmmVacated" flags: one in SMRAM, set to 1 just
> before the RSM instruction, and the other one in normal RAM (reserved
> page) that this patch set already introduces. The normal RAM flag would
> avoid the race completely for benign OSes (like the present patchset
> already does), and the SMRAM flag would keep the racy window to a single
> instruction when the OS is malicious and the normal RAM flag cannot be
> trusted.

I've implemented and tested this "combined" approach.
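In rough C-like terms, the combined handshake can be modeled as follows (illustrative names only, not the actual driver code): the Monarch requires both flags, so forging the OS-writable flag alone achieves nothing, and the remaining racy window shrinks to the single RSM instruction between setting the SMRAM flag and leaving SMM.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the combined scheme; not the real code. */
static volatile uint8_t mVacatedInSmram;     /* in SMRAM; OS cannot forge */
static volatile uint8_t mVacatedInNormalRam; /* last byte of the Post-SMM Pen */

/* Hot-added CPU, at the end of SMBASE relocation. */
static void HotAddedCpuLeavesSmm(void) {
  mVacatedInSmram = 1;
  /* RSM would execute here: the only remaining racy window. */
  mVacatedInNormalRam = 1; /* written from the Pen, after RSM */
}

/* SMM Monarch: require both flags before ungating the relocation code
   for the next hot-added CPU. */
static int MonarchMayUngateNextCpu(void) {
  return mVacatedInSmram && mVacatedInNormalRam;
}
```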

Thanks
Laszlo

> 
> I'd appreciate feedback on this; I don't know how in physical firmware,
> the initial SMI handler for hot-added CPUs deals with the same problem.
> 
> I guess on physical platforms, the platform manual could simply say,
> "only hot-plug CPUs one by one". That eliminates the whole problem. In
> such a case, we could safely stick with the current patchset, too.
> 
> --*--
> 
> BTW, I did try hot-plugging multiple CPUs at once, with libvirt:
> 
>> virsh setvcpu ovmf.fedora.q35 1,3 --enable --live
>>
>> error: Operation not supported: only one hotpluggable entity can be
>> selected
> 
> I think this is because it would require multiple "device-add" commands
> to be sent at the same time over the QMP monitor -- and that's something
> that QEMU doesn't support. (Put differently, "device-add" does not
> take a list of objects to create.) In that regard, the problem described
> above is academic, because QEMU already seems like a platform that can
> only hotplug one CPU at a time. In that sense, using APIC ID *arrays*
> and *loops* per hotplug SMI is not really needed; I did that because we
> had discussed this feature in terms of multiple CPUs from the beginning,
> and because QEMU's ACPI GPE handler (the CSCN AML method) also loops
> over multiple processors.
> 
> Again, comments would be most welcome... I wouldn't like to complicate
> the SMI handler if it's not absolutely necessary.



Thread overview: 24+ messages
2020-02-23 17:25 [PATCH 00/16] OvmfPkg: support VCPU hotplug with -D SMM_REQUIRE Laszlo Ersek
2020-02-23 17:25 ` [PATCH 01/16] MdeModulePkg/PiSmmCore: log SMM image start failure Laszlo Ersek
2020-02-23 17:25 ` [PATCH 02/16] UefiCpuPkg/PiSmmCpuDxeSmm: fix S3 Resume for CPU hotplug Laszlo Ersek
2020-02-23 17:25 ` [PATCH 03/16] OvmfPkg: clone SmmCpuPlatformHookLib from UefiCpuPkg Laszlo Ersek
2020-02-23 17:25 ` [PATCH 04/16] OvmfPkg: enable SMM Monarch Election in PiSmmCpuDxeSmm Laszlo Ersek
2020-02-23 17:25 ` [PATCH 05/16] OvmfPkg: enable CPU hotplug support " Laszlo Ersek
2020-02-23 17:25 ` [PATCH 06/16] OvmfPkg/CpuHotplugSmm: introduce skeleton for CPU Hotplug SMM driver Laszlo Ersek
2020-02-23 17:25 ` [PATCH 07/16] OvmfPkg/CpuHotplugSmm: add hotplug register block helper functions Laszlo Ersek
2020-02-23 17:25 ` [PATCH 08/16] OvmfPkg/CpuHotplugSmm: define the QEMU_CPUHP_CMD_GET_ARCH_ID macro Laszlo Ersek
2020-02-23 17:25 ` [PATCH 09/16] OvmfPkg/CpuHotplugSmm: add function for collecting CPUs with events Laszlo Ersek
2020-02-23 17:25 ` [PATCH 10/16] OvmfPkg/CpuHotplugSmm: collect " Laszlo Ersek
2020-02-23 17:25 ` [PATCH 11/16] OvmfPkg/CpuHotplugSmm: introduce Post-SMM Pen for hot-added CPUs Laszlo Ersek
2020-02-23 17:25 ` [PATCH 12/16] OvmfPkg/CpuHotplugSmm: introduce First SMI Handler " Laszlo Ersek
2020-02-24  9:10   ` [edk2-devel] " Laszlo Ersek
2020-02-26 21:22     ` Laszlo Ersek [this message]
2020-02-23 17:25 ` [PATCH 13/16] OvmfPkg/CpuHotplugSmm: complete root MMI handler for CPU hotplug Laszlo Ersek
2020-02-23 17:25 ` [PATCH 14/16] OvmfPkg: clone CpuS3DataDxe from UefiCpuPkg Laszlo Ersek
2020-02-23 17:25 ` [PATCH 15/16] OvmfPkg/CpuS3DataDxe: superficial cleanups Laszlo Ersek
2020-02-23 17:25 ` [PATCH 16/16] OvmfPkg/CpuS3DataDxe: enable S3 resume after CPU hotplug Laszlo Ersek
2020-02-24 16:31 ` [PATCH 00/16] OvmfPkg: support VCPU hotplug with -D SMM_REQUIRE Ard Biesheuvel
2020-07-24  6:26 ` [edk2-devel] " Wu, Jiaxin
2020-07-24 16:01   ` Laszlo Ersek
2020-07-29  8:37     ` Wu, Jiaxin
2020-10-01  9:59       ` [ann] " Laszlo Ersek
