From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, ankur.a.arora@oracle.com
Cc: imammedo@redhat.com, boris.ostrovsky@oracle.com,
Jordan Justen <jordan.l.justen@intel.com>,
Ard Biesheuvel <ard.biesheuvel@arm.com>,
Aaron Young <aaron.young@oracle.com>
Subject: Re: [edk2-devel] [PATCH v6 7/9] OvmfPkg/CpuHotplugSmm: add CpuEject()
Date: Fri, 5 Feb 2021 17:06:29 +0100
Message-ID: <35f1e98b-7a6a-ca94-881a-59cf49015be7@redhat.com>
In-Reply-To: <039138dc-c20b-39fb-8c14-953c090b8961@oracle.com>

Hi Ankur,

I figure it's prudent for me to follow up here too:

On 02/04/21 03:49, Ankur Arora wrote:
> On 2021-02-03 12:58 p.m., Laszlo Ersek wrote:
>> On 02/03/21 07:45, Ankur Arora wrote:
>>> On 2021-02-02 6:15 a.m., Laszlo Ersek wrote:
>>>> On 02/02/21 15:00, Laszlo Ersek wrote:
>>>>
>>>>> ... I guess that volatile-qualifying both CPU_HOT_EJECT_DATA, and
>>>>> the array pointed-to by CPU_HOT_EJECT_DATA.ApicIdMap, should
>>>>> suffice. In combination with the sync-up point that you quoted.
>>>>> This seems to match existing practice in PiSmmCpuDxeSmm -- there
>>>>> are no concurrent accesses, so atomicity is not a concern, and
>>>>> serializing the instruction streams coarsely, with the sync-up, in
>>>>> combination with volatile accesses, should presumably guarantee
>>>>> visibility (on x86 anyway).
>>>>
>>>> To summarize, this is what I would ask for:
>>>>
>>>> - make CPU_HOT_EJECT_DATA volatile
>>>>
>>>> - make (*CPU_HOT_EJECT_DATA.ApicIdMap) volatile
>>>>
>>>> - after storing something to CPU_HOT_EJECT_DATA or
>>>> CPU_HOT_EJECT_DATA.ApicIdMap on the BSP, execute a MemoryFence()
>>>>
>>>> - before fetching something from CPU_HOT_EJECT_DATA or
>>>> CPU_HOT_EJECT_DATA.ApicIdMap on an AP, execute a MemoryFence()
>>>>
>>>>
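To make the first two bullets concrete, I mean roughly the following
(the struct layout and the UINT64 element type are just my sketch, not
the actual patch):

  typedef struct {
    //
    // The array elements are written by the BSP and read by the APs,
    // hence the pointed-to type is volatile.
    //
    volatile UINT64    *ApicIdMap;
    //
    // Other fields (for example the Handler pointer) elided in this
    // sketch.
    //
  } CPU_HOT_EJECT_DATA;

  //
  // The containing object is volatile-qualified as well.
  //
  STATIC volatile CPU_HOT_EJECT_DATA  *mCpuHotEjectData;
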
>>>> Except: MemoryFence() isn't a *memory fence* in fact.
>>>>
>>>> See "MdePkg/Library/BaseLib/X64/GccInline.c".
>>>>
>>>> It's just a compiler barrier, which may not add anything beyond what
>>>> we'd already have from "volatile".
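For the X64 GCC build, the whole function is roughly just:

  VOID
  EFIAPI
  MemoryFence (
    VOID
    )
  {
    //
    // A compiler barrier only: GCC must not cache memory contents
    // across it, but no fence instruction is emitted for the CPU.
    //
    __asm__ __volatile__ ("":::"memory");
  }
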
>>>>
>>>> Case in point: PiSmmCpuDxeSmm performs heavy multi-processing, but does
>>>> not contain a single invocation of MemoryFence(). It uses volatile
>>>> objects, and a handful of InterlockedCompareExchangeXx() calls, for
>>>> implementing semaphores. (NB: there is no 8-bit variant of
>>>> InterlockedCompareExchange(), as "volatile UINT8" is considered atomic
>>>> in itself, and a suitable basis for a semaphore too.) And given the
>>>> synchronization from those semaphores, PiSmmCpuDxeSmm trusts that
>>>> updates to the *other* volatile objects are both atomic and visible.
>>>>
>>>> I'm pretty sure this only works because x86 is in-order. There are
>>>> instruction stream barriers in place, and compiler barriers too, but no
>>>> actual memory barriers.
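The semaphore pattern meant here looks roughly like this (my
paraphrase, not a verbatim copy from PiSmmCpuDxeSmm):

  //
  // Decrement a semaphore, spinning until it is nonzero. Atomicity
  // comes from InterlockedCompareExchange32(); visibility comes from
  // "volatile" plus x86 ordering.
  //
  UINT32
  WaitForSemaphore (
    IN OUT volatile UINT32  *Sem
    )
  {
    UINT32  Value;

    for (;;) {
      Value = *Sem;
      if ((Value != 0) &&
          (InterlockedCompareExchange32 (
             (UINT32 *)Sem,
             Value,
             Value - 1
             ) == Value))
      {
        break;
      }

      CpuPause ();
    }

    return Value - 1;
  }
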
>>>
>>> Right and just to separate them explicitly, there are two problems:
>>>
>>> - compiler: where the compiler caches stuff in or looks at stale
>>> memory locations. Now, AFAICS, this should not happen because the
>>> ApicIdMap would never change once set, so the compiler should
>>> reasonably be able to cache the address of ApicIdMap and dereference
>>> it (thus obviating the need for volatile.)
>>
>> (CPU_HOT_EJECT_DATA.Handler does change though.)
>
> Yeah, I did kinda elide over that. Let me think this through in v7
> and add more explicit comments and then we can see if it still looks
> fishy?
>
> Thanks
> Ankur
>
>>
>>> The compiler could, however, cache any assignments to ApicIdMap[Idx]
>>> (especially given LTO) and we could use a MemoryFence() (as the compiler
>>> barrier that it is) to force the store.
>>>
>>> - CPU pipeline: as you said, right now we basically depend on x86
>>> store order semantics (and the CpuPause() loop around AllCpusInSync
>>> kinda provides a barrier.)
>>>
>>> So the BSP writes in this order:
>>> ApicIdMap[Idx]=x; ... ->AllCpusInSync = true
>>>
>>> And whenever the AP sees ->AllCpusInSync == True, it has to now see
>>> ApicIdMap[Idx] == x.
>>
>> Yes.
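Spelling the contract out in code form (the variable spellings follow
your pseudo-code, not the actual driver):

  //
  // BSP: publish the map entry, then release the APs. MemoryFence() is
  // only a compiler barrier here; the inter-CPU ordering comes from x86
  // store ordering.
  //
  ApicIdMap[Idx] = x;
  MemoryFence ();
  AllCpusInSync = TRUE;

  //
  // AP: wait for the release, then read the published entry.
  //
  while (!AllCpusInSync) {
    CpuPause ();
  }
  MemoryFence ();
  ASSERT (ApicIdMap[Idx] == x);
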
>>
>>>
>>> Maybe the thing to do is to detail this barrier in a commit
>>> note/comment?
>>
>> That would be nice.
>>
>>> And add the MemoryFence() but not the volatile.
>>
>> Yes, that should work.

Please *do* add the volatile, and also the MemoryFence(). When built
with Visual Studio, MemoryFence() does nothing at all (at least when LTO
is in effect -- which it almost always is). So we should have the
volatile for making things work, and MemoryFence() as a conceptual
reminder, so we know where to fix things up, when (if!) we get around to
fixing this mess with MemoryFence(). References:

https://edk2.groups.io/g/rfc/message/500
https://edk2.groups.io/g/rfc/message/501
https://edk2.groups.io/g/rfc/message/502
https://edk2.groups.io/g/rfc/message/503
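
Concretely, at the site where the BSP publishes a map entry, I'd be
happy with something like the following (the comment wording is only a
suggestion, and the variable names are illustrative):

  //
  // The APs read ApicIdMap[] only after they observe
  // AllCpusInSync == TRUE, and the BSP sets AllCpusInSync only after it
  // has finished writing ApicIdMap[]. MemoryFence() is currently just a
  // compiler barrier; the inter-CPU visibility relies on x86 store
  // ordering. The call marks the spot where a real release fence
  // belongs, once MemoryFence() is fixed.
  //
  mCpuHotEjectData->ApicIdMap[Idx] = RemovedApicId;
  MemoryFence ();
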
Thanks!
Laszlo