From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, jiaxin.wu@intel.com
Cc: Eric Dong <eric.dong@intel.com>, Ray Ni <ray.ni@intel.com>,
Zeng Star <star.zeng@intel.com>,
Gerd Hoffmann <kraxel@redhat.com>,
Rahul Kumar <rahul1.kumar@intel.com>
Subject: Re: [edk2-devel] [PATCH v1 4/7] UefiCpuPkg: Implements SmmCpuSyncLib library instance
Date: Tue, 7 Nov 2023 11:46:49 +0100
Message-ID: <12ae85b8-4539-6225-a9c4-868f919eb7f4@redhat.com>
In-Reply-To: <20231103153012.3704-5-jiaxin.wu@intel.com>
On 11/3/23 16:30, Wu, Jiaxin wrote:
> Implements SmmCpuSyncLib Library class. The instance follows the
> existing SMM CPU driver (PiSmmCpuDxeSmm) sync implementation:
> 1.Abstract Counter and Run semaphores into SmmCpuSyncCtx.
> 2.Abstract CPU arrival count operation to
> SmmCpuSyncGetArrivedCpuCount(), SmmCpuSyncCheckInCpu(),
> SmmCpuSyncCheckOutCpu(), SmmCpuSyncLockDoor().
> Implementation is aligned with existing SMM CPU driver.
> 3. Abstract SMM CPU Sync flow to:
> BSP: SmmCpuSyncReleaseOneAp --> AP: SmmCpuSyncWaitForBsp
> BSP: SmmCpuSyncWaitForAllAPs <-- AP: SmmCpuSyncReleaseBsp
> Semaphores release & wait during sync flow is same as existing SMM
> CPU driver.
> 4.Same operation to Counter and Run semaphores by leverage the atomic
> compare exchange.
>
> Change-Id: I5a004637f8b24a90594a794092548b850b187493
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c | 481 +++++++++++++++++++++
> UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf | 38 ++
> UefiCpuPkg/UefiCpuLibs.dsc.inc | 15 +
> UefiCpuPkg/UefiCpuPkg.dsc | 1 +
> 4 files changed, 535 insertions(+)
> create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
> create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> create mode 100644 UefiCpuPkg/UefiCpuLibs.dsc.inc
I won't review this library instance in detail until the lib class
header is cleaned up. Just some high-level comments:
- Please use SafeIntLib for all calculations, to prevent overflows. This
means primarily (but not exclusively) memory size calculations for
allocations (see the sketch after this list).
- Implement proper error checking. ASSERT()s are unacceptable for
catching errors, even if the existing drivers do that.
- Semaphore primitives should call CpuDeadLoop() whenever they detect --
and they *should* detect -- counter underflow or overflow.
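For the first two points, here's a rough, untested sketch of what I have
in mind, using the size calculation in SmmCpuSyncContextInit() as the
example (plus a SafeIntLib include and LibraryClasses entry, of course).
Returning NULL on failure matches the patch's documented contract; the
SafeUintnMult() / SafeUintnAdd() usage and the exact factoring are just
my illustration:

  RETURN_STATUS  Status;
  UINTN          CpuSemsSize;
  UINTN          CtxSize;

  //
  // NumberOfCpus is external input, so the multiplication must not be
  // allowed to wrap around silently.
  //
  Status = SafeUintnMult (
             sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU),
             NumberOfCpus,
             &CpuSemsSize
             );
  if (RETURN_ERROR (Status)) {
    return NULL;
  }

  Status = SafeUintnAdd (
             sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL),
             CpuSemsSize,
             &CtxSize
             );
  if (RETURN_ERROR (Status)) {
    return NULL;
  }

  Ctx = AllocatePages (EFI_SIZE_TO_PAGES (CtxSize));
  if (Ctx == NULL) {
    //
    // Propagate the failure to the caller instead of ASSERT()ing.
    //
    return NULL;
  }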
At the start of my career at Red Hat, one of the bugs I worked on was a
bug in the Xen hypervisor, in SMP initialization. Xen wanted to program
the MTRRs on all processors at the same time (quite funnily, it's the
exact same topic as your patch #2), and it used counters (semaphores,
spinlocks) for BSP-AP synchronization. Unfortunately, it used single
byte, signed counters. The logic worked fine as long as you had at most 128
processors. When you had more processors, like 160 or 192, the SMP
bringup seemingly completed, but then Xen would crash with very weird
symptoms, later. The problem was of course that the BSP and the APs got
out of sync, the MTRR programming was inconsistent (which is undefined
behavior in itself, per Intel SDM), and then the APs were running around
wherever like a loose herd of cats.
Moral of the story: your synchronization primitives are *never
permitted* to fail. If they fail, you must hang *immediately*, to
contain the damage.
Note that the InternalWaitForSemaphore() and InternalReleaseSemaphore()
functions already detect integer overflow to an extent -- they don't
overwrite the counter with the overflowed / underflowed value, and they
properly output the error as well ("release" returning 0 is clearly an
overflow that was caught, and "wait" returning MAX_UINT32 is clearly an
underflow that was caught).
The problem is that these error conditions are never checked by the
callers; what's more, whenever such an error occurs, it's effectively
impossible for the caller to do anything -- it's guaranteed to be a
consequence of a programming error somewhere. So it's best not to litter
the call sites with error checks, but to call CpuDeadLoop() inside the
primitives whenever a semaphore is busted.
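For example (untested sketch, keeping the rest of the patch's function as
it is; the same treatment would apply to the underflow case in
InternalWaitForSemaphore()):

  UINT32
  InternalReleaseSemaphore (
    IN OUT volatile UINT32  *Sem
    )
  {
    UINT32  Value;

    do {
      Value = *Sem;
      if (Value + 1 == 0) {
        //
        // The counter is about to overflow; that can only be the result of
        // a programming error elsewhere. Hang right here to contain the
        // damage, rather than report a value the caller cannot act upon.
        //
        CpuDeadLoop ();
      }
    } while (InterlockedCompareExchange32 (
               (UINT32 *)Sem,
               Value,
               Value + 1
               ) != Value);

    return Value + 1;
  }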
I understand that I'm asking for things that differ from the original
code, but every time we abstract something, we should make an honest
effort to clean up and fix existing bugs.
Laszlo
>
> diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
> new file mode 100644
> index 0000000000..3bc3ebe49a
> --- /dev/null
> +++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
> @@ -0,0 +1,481 @@
> +/** @file
> + SMM CPU Sync lib implementation.
> +
> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +
> +**/
> +
> +#include <Base.h>
> +#include <Uefi.h>
> +#include <Library/UefiLib.h>
> +#include <Library/BaseLib.h>
> +#include <Library/DebugLib.h>
> +#include <Library/SynchronizationLib.h>
> +#include <Library/DebugLib.h>
> +#include <Library/BaseMemoryLib.h>
> +#include <Library/SmmServicesTableLib.h>
> +#include <Library/MemoryAllocationLib.h>
> +#include <Library/SmmCpuSyncLib.h>
> +
> +typedef struct {
> + ///
> + /// Indicate how many CPU entered SMM.
> + ///
> + volatile UINT32 *Counter;
> +} SMM_CPU_SYNC_SEMAPHORE_GLOBAL;
> +
> +typedef struct {
> + ///
> + /// Used for control each CPU continue run or wait for signal
> + ///
> + volatile UINT32 *Run;
> +} SMM_CPU_SYNC_SEMAPHORE_CPU;
> +
> +typedef struct {
> + ///
> + /// All global semaphores' pointer in SMM CPU Sync
> + ///
> + SMM_CPU_SYNC_SEMAPHORE_GLOBAL *GlobalSem;
> + ///
> + /// All semaphores for each processor in SMM CPU Sync
> + ///
> + SMM_CPU_SYNC_SEMAPHORE_CPU *CpuSem;
> + ///
> + /// The number of processors in the system.
> + /// This does not indicate the number of processors that entered SMM.
> + ///
> + UINTN NumberOfCpus;
> + ///
> + /// Address of global and each CPU semaphores
> + ///
> + UINTN *SemBlock;
> + ///
> + /// Size of global and each CPU semaphores
> + ///
> + UINTN SemBlockPages;
> +} SMM_CPU_SYNC_CTX;
> +
> +/**
> + Performs an atomic compare exchange operation to get semaphore.
> + The compare exchange operation must be performed using MP safe
> + mechanisms.
> +
> + @param Sem IN: 32-bit unsigned integer
> + OUT: original integer - 1
> +
> + @return Original integer - 1
> +
> +**/
> +UINT32
> +InternalWaitForSemaphore (
> + IN OUT volatile UINT32 *Sem
> + )
> +{
> + UINT32 Value;
> +
> + for ( ; ;) {
> + Value = *Sem;
> + if ((Value != 0) &&
> + (InterlockedCompareExchange32 (
> + (UINT32 *)Sem,
> + Value,
> + Value - 1
> + ) == Value))
> + {
> + break;
> + }
> +
> + CpuPause ();
> + }
> +
> + return Value - 1;
> +}
> +
> +/**
> + Performs an atomic compare exchange operation to release semaphore.
> + The compare exchange operation must be performed using MP safe
> + mechanisms.
> +
> + @param Sem IN: 32-bit unsigned integer
> + OUT: original integer + 1
> +
> + @return Original integer + 1
> +
> +**/
> +UINT32
> +InternalReleaseSemaphore (
> + IN OUT volatile UINT32 *Sem
> + )
> +{
> + UINT32 Value;
> +
> + do {
> + Value = *Sem;
> + } while (Value + 1 != 0 &&
> + InterlockedCompareExchange32 (
> + (UINT32 *)Sem,
> + Value,
> + Value + 1
> + ) != Value);
> +
> + return Value + 1;
> +}
> +
> +/**
> + Performs an atomic compare exchange operation to lock semaphore.
> + The compare exchange operation must be performed using MP safe
> + mechanisms.
> +
> + @param Sem IN: 32-bit unsigned integer
> + OUT: -1
> +
> + @return Original integer
> +
> +**/
> +UINT32
> +InternalLockdownSemaphore (
> + IN OUT volatile UINT32 *Sem
> + )
> +{
> + UINT32 Value;
> +
> + do {
> + Value = *Sem;
> + } while (InterlockedCompareExchange32 (
> + (UINT32 *)Sem,
> + Value,
> + (UINT32)-1
> + ) != Value);
> +
> + return Value;
> +}
> +
> +/**
> + Creates and Init a new Smm Cpu Sync context.
> +
> + @param[in] NumberOfCpus The number of processors in the system.
> +
> + @return Pointer to an allocated Smm Cpu Sync context object.
> + If the creation failed, returns NULL.
> +
> +**/
> +VOID *
> +EFIAPI
> +SmmCpuSyncContextInit (
> + IN UINTN NumberOfCpus
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> + UINTN CtxSize;
> + UINTN OneSemSize;
> + UINTN GlobalSemSize;
> + UINTN CpuSemSize;
> + UINTN TotalSemSize;
> + UINTN SemAddr;
> + UINTN CpuIndex;
> +
> + Ctx = NULL;
> +
> + //
> + // Allocate for the Ctx
> + //
> + CtxSize = sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) + sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) * NumberOfCpus;
> + Ctx = (SMM_CPU_SYNC_CTX *)AllocatePages (EFI_SIZE_TO_PAGES (CtxSize));
> + ASSERT (Ctx != NULL);
> + Ctx->GlobalSem = (SMM_CPU_SYNC_SEMAPHORE_GLOBAL *)((UINT8 *)Ctx + sizeof (SMM_CPU_SYNC_CTX));
> + Ctx->CpuSem = (SMM_CPU_SYNC_SEMAPHORE_CPU *)((UINT8 *)Ctx + sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL));
> + Ctx->NumberOfCpus = NumberOfCpus;
> +
> + //
> + // Allocate for Semaphores in the Ctx
> + //
> + OneSemSize = GetSpinLockProperties ();
> + GlobalSemSize = (sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) / sizeof (VOID *)) * OneSemSize;
> + CpuSemSize = (sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) / sizeof (VOID *)) * OneSemSize * NumberOfCpus;
> + TotalSemSize = GlobalSemSize + CpuSemSize;
> + DEBUG ((DEBUG_INFO, "[%a] - One Semaphore Size = 0x%x\n", __FUNCTION__, OneSemSize));
> + DEBUG ((DEBUG_INFO, "[%a] - Total Semaphores Size = 0x%x\n", __FUNCTION__, TotalSemSize));
> + Ctx->SemBlockPages = EFI_SIZE_TO_PAGES (TotalSemSize);
> + Ctx->SemBlock = AllocatePages (Ctx->SemBlockPages);
> + ASSERT (Ctx->SemBlock != NULL);
> + ZeroMem (Ctx->SemBlock, TotalSemSize);
> +
> + SemAddr = (UINTN)Ctx->SemBlock;
> +
> + //
> + // Assign Global Semaphore pointer
> + //
> + Ctx->GlobalSem->Counter = (UINT32 *)SemAddr;
> + *Ctx->GlobalSem->Counter = 0;
> + DEBUG ((DEBUG_INFO, "[%a] - Ctx->GlobalSem->Counter Address: 0x%08x\n", __FUNCTION__, (UINTN)Ctx->GlobalSem->Counter));
> +
> + SemAddr += GlobalSemSize;
> +
> + //
> + // Assign CPU Semaphore pointer
> + //
> + for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
> + Ctx->CpuSem[CpuIndex].Run = (UINT32 *)(SemAddr + (CpuSemSize / NumberOfCpus) * CpuIndex);
> + *Ctx->CpuSem[CpuIndex].Run = 0;
> + DEBUG ((DEBUG_INFO, "[%a] - Ctx->CpuSem[%d].Run Address: 0x%08x\n", __FUNCTION__, CpuIndex, (UINTN)Ctx->CpuSem[CpuIndex].Run));
> + }
> +
> + //
> + // Return the new created Smm Cpu Sync context
> + //
> + return (VOID *)Ctx;
> +}
> +
> +/**
> + Deinit an allocated Smm Cpu Sync context object.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object to be released.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextDeinit (
> + IN VOID *SmmCpuSyncCtx
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> + UINTN CtxSize;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + CtxSize = sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) + sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) * (Ctx->NumberOfCpus);
> +
> + FreePages (Ctx->SemBlock, Ctx->SemBlockPages);
> +
> + FreePages (Ctx, EFI_SIZE_TO_PAGES (CtxSize));
> +}
> +
> +/**
> + Reset Smm Cpu Sync context object.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object to be released.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextReset (
> + IN VOID *SmmCpuSyncCtx
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + *Ctx->GlobalSem->Counter = 0;
> +}
> +
> +/**
> + Get current arrived CPU count.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object to be released.
> +
> + @return Current number of arrived CPU count.
> + -1: indicate the door has been locked.
> +
> +**/
> +UINT32
> +EFIAPI
> +SmmCpuSyncGetArrivedCpuCount (
> + IN VOID *SmmCpuSyncCtx
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + if (*Ctx->GlobalSem->Counter < 0) {
> + return (UINT32)-1;
> + }
> +
> + return *Ctx->GlobalSem->Counter;
> +}
> +
> +/**
> + Performs an atomic operation to check in CPU.
> + Check in CPU successfully if the returned arrival CPU count value is
> + positive, otherwise indicate the door has been locked, the CPU can
> + not checkin.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm CPU Sync context object to be released.
> + @param[in] CpuIndex Pointer to the CPU Index to checkin.
> +
> + @return Positive value (>0): CPU arrival count number after check in CPU successfully.
> + Nonpositive value (<=0): check in CPU failure.
> +
> +**/
> +INT32
> +EFIAPI
> +SmmCpuSyncCheckInCpu (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + return (INT32)InternalReleaseSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> + Performs an atomic operation to check out CPU.
> + Check out CPU successfully if the returned arrival CPU count value is
> + nonnegative, otherwise indicate the door has been locked, the CPU can
> + not checkout.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object to be released.
> + @param[in] CpuIndex Pointer to the Cpu Index to checkout.
> +
> + @return Nonnegative value (>=0): CPU arrival count number after check out CPU successfully.
> + Negative value (<0): Check out CPU failure.
> +
> +
> +**/
> +INT32
> +EFIAPI
> +SmmCpuSyncCheckOutCpu (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + return (INT32)InternalWaitForSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> + Performs an atomic operation lock door for CPU checkin or checkout.
> + With this function, CPU can not check in via SmmCpuSyncCheckInCpu () or
> + check out via SmmCpuSyncCheckOutCpu ().
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object to be released.
> + @param[in] CpuIndex Pointer to the Cpu Index to lock door.
> +
> + @return CPU arrival count number.
> +
> +**/
> +UINT32
> +EFIAPI
> +SmmCpuSyncLockDoor (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + return InternalLockdownSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> + Used for BSP to wait all APs.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object.
> + @param[in] NumberOfAPs Number of APs need to wait.
> + @param[in] BspIndex Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForAllAPs (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN NumberOfAPs,
> + IN UINTN BspIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + while (NumberOfAPs-- > 0) {
> + InternalWaitForSemaphore (Ctx->CpuSem[BspIndex].Run);
> + }
> +}
> +
> +/**
> + Used for BSP to release one AP.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object.
> + @param[in] CpuIndex Pointer to the Cpu Index, indicate which AP need to be released.
> + @param[in] BspIndex Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseOneAp (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + InternalReleaseSemaphore (Ctx->CpuSem[CpuIndex].Run);
> +}
> +
> +/**
> + Used for AP to wait BSP.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object.
> + @param[in] CpuIndex Pointer to the Cpu Index, indicate which AP wait BSP.
> + @param[in] BspIndex Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForBsp (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + InternalWaitForSemaphore (Ctx->CpuSem[CpuIndex].Run);
> +}
> +
> +/**
> + Used for AP to release BSP.
> +
> + @param[in] SmmCpuSyncCtx Pointer to the Smm Cpu Sync context object.
> + @param[in] CpuIndex Pointer to the Cpu Index, indicate which AP release BSP.
> + @param[in] BspIndex Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseBsp (
> + IN VOID *SmmCpuSyncCtx,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + )
> +{
> + SMM_CPU_SYNC_CTX *Ctx;
> +
> + ASSERT (SmmCpuSyncCtx != NULL);
> + Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> + InternalReleaseSemaphore (Ctx->CpuSem[BspIndex].Run);
> +}
> diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> new file mode 100644
> index 0000000000..86475ce64b
> --- /dev/null
> +++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> @@ -0,0 +1,38 @@
> +## @file
> +# SMM CPU Synchronization lib.
> +#
> +# This is SMM CPU Synchronization lib used for SMM CPU sync operations.
> +#
> +# Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +##
> +
> +[Defines]
> + INF_VERSION = 0x00010005
> + BASE_NAME = SmmCpuSyncLib
> + FILE_GUID = 1ca1bc1a-16a4-46ef-956a-ca500fd3381f
> + MODULE_TYPE = DXE_SMM_DRIVER
> + LIBRARY_CLASS = SmmCpuSyncLib|DXE_SMM_DRIVER
> +
> +[Sources]
> + SmmCpuSyncLib.c
> +
> +[Packages]
> + MdePkg/MdePkg.dec
> + MdeModulePkg/MdeModulePkg.dec
> + UefiCpuPkg/UefiCpuPkg.dec
> +
> +[LibraryClasses]
> + UefiLib
> + BaseLib
> + DebugLib
> + PrintLib
> + SynchronizationLib
> + BaseMemoryLib
> + SmmServicesTableLib
> + MemoryAllocationLib
> +
> +[Pcd]
> +
> +[Protocols]
> diff --git a/UefiCpuPkg/UefiCpuLibs.dsc.inc b/UefiCpuPkg/UefiCpuLibs.dsc.inc
> new file mode 100644
> index 0000000000..6b9b362729
> --- /dev/null
> +++ b/UefiCpuPkg/UefiCpuLibs.dsc.inc
> @@ -0,0 +1,15 @@
> +## @file
> +# UefiCpu DSC include file for [LibraryClasses*] section of all Architectures.
> +#
> +# This file can be included to the [LibraryClasses*] section(s) of a platform DSC file
> +# by using "!include UefiCpuPkg/UefiCpuLibs.dsc.inc" to specify the library instances
> +# of some EDKII basic/common library classes in UefiCpuPkg.
> +#
> +# Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +##
> +
> +[LibraryClasses]
> + SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> \ No newline at end of file
> diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
> index 074fd77461..338f18eb98 100644
> --- a/UefiCpuPkg/UefiCpuPkg.dsc
> +++ b/UefiCpuPkg/UefiCpuPkg.dsc
> @@ -21,10 +21,11 @@
> #
> # External libraries to build package
> #
>
> !include MdePkg/MdeLibs.dsc.inc
> +!include UefiCpuPkg/UefiCpuLibs.dsc.inc
>
> [LibraryClasses]
> BaseLib|MdePkg/Library/BaseLib/BaseLib.inf
> BaseMemoryLib|MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
> CpuLib|MdePkg/Library/BaseCpuLib/BaseCpuLib.inf