From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <12ae85b8-4539-6225-a9c4-868f919eb7f4@redhat.com>
Date: Tue, 7 Nov 2023 11:46:49 +0100
Subject: Re: [edk2-devel] [PATCH v1 4/7] UefiCpuPkg: Implements SmmCpuSyncLib library instance
From: "Laszlo Ersek" <lersek@redhat.com>
To: devel@edk2.groups.io, jiaxin.wu@intel.com
Cc: Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann, Rahul Kumar
References: <20231103153012.3704-1-jiaxin.wu@intel.com> <20231103153012.3704-5-jiaxin.wu@intel.com>
In-Reply-To: <20231103153012.3704-5-jiaxin.wu@intel.com>

On 11/3/23 16:30, Wu, Jiaxin wrote:
> Implements SmmCpuSyncLib Library class. The instance follows the
> existing SMM CPU driver (PiSmmCpuDxeSmm) sync implementation:
> 1. Abstract Counter and Run semaphores into SmmCpuSyncCtx.
> 2. Abstract CPU arrival count operation to
>    SmmCpuSyncGetArrivedCpuCount(), SmmCpuSyncCheckInCpu(),
>    SmmCpuSyncCheckOutCpu(), SmmCpuSyncLockDoor().
>    Implementation is aligned with the existing SMM CPU driver.
> 3.
> Abstract SMM CPU Sync flow to:
>    BSP: SmmCpuSyncReleaseOneAp  -->  AP: SmmCpuSyncWaitForBsp
>    BSP: SmmCpuSyncWaitForAllAPs <--  AP: SmmCpuSyncReleaseBsp
> Semaphore release & wait during the sync flow is the same as in the
> existing SMM CPU driver.
> 4. Same operation on the Counter and Run semaphores, by leveraging the
> atomic compare exchange.
>
> Change-Id: I5a004637f8b24a90594a794092548b850b187493
> Cc: Eric Dong
> Cc: Ray Ni
> Cc: Zeng Star
> Cc: Gerd Hoffmann
> Cc: Rahul Kumar
> Signed-off-by: Jiaxin Wu
> ---
>  UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c   | 481 ++++++++++++++++++
>  UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf |  38 ++
>  UefiCpuPkg/UefiCpuLibs.dsc.inc                     |  15 +
>  UefiCpuPkg/UefiCpuPkg.dsc                          |   1 +
>  4 files changed, 535 insertions(+)
>  create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
>  create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>  create mode 100644 UefiCpuPkg/UefiCpuLibs.dsc.inc

I won't review this library instance in detail until the lib class
header is cleaned up. Just some high-level comments:

- Please use SafeIntLib for all calculations, to prevent overflows. This
  means primarily (but not exclusively) memory size calculations, for
  allocations.

- Implement proper error checking. ASSERT()s are unacceptable for
  catching errors, even if the existing drivers do that.

- Semaphore primitives should call CpuDeadLoop() whenever they detect --
  and they *should* detect -- counter underflow or overflow.

At the start of my career at Red Hat, one of the bugs I worked on was a
bug in the Xen hypervisor, in SMP initialization. Xen wanted to program
the MTRRs on all processors at the same time (quite funnily, it's the
exact same topic as your patch #2), and it used counters (semaphores,
spinlocks) for BSP-AP synchronization. Unfortunately, it used
single-byte, signed counters. The logic worked fine as long as you had
no more than 128 processors.
When you had more processors, like 160 or 192, the SMP bringup
seemingly completed, but then Xen would crash with very weird symptoms,
later. The problem was of course that the BSP and the APs got out of
sync, the MTRR programming ended up inconsistent (which is undefined
behavior in itself, per the Intel SDM), and then the APs were running
around wherever, like a loose herd of cats.

Moral of the story: your synchronization primitives are *never
permitted* to fail. If they fail, you must hang *immediately*, to
contain the damage.

Note that the InternalWaitForSemaphore() and InternalReleaseSemaphore()
functions already detect integer overflow to an extent -- they don't
overwrite the counter with the overflowed / underflowed value, and they
properly output the error as well ("release" returning 0 is clearly an
overflow that was caught, and "wait" returning MAX_UINT32 is clearly an
underflow that was caught). The problem is that these error conditions
are never checked by the callers; what's more, whenever such an error
occurs, it's effectively impossible for the caller to do anything about
it -- it's guaranteed to be a consequence of a programming error
somewhere. So it's best not to litter the call sites with error checks,
but to call CpuDeadLoop() inside the primitives whenever a semaphore is
busted.

I understand that I'm asking for things that differ from the original
code, but every time we abstract something, we should make an honest
effort to clean up and fix existing bugs.

Laszlo

>
> diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
> new file mode 100644
> index 0000000000..3bc3ebe49a
> --- /dev/null
> +++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
> @@ -0,0 +1,481 @@
> +/** @file
> +  SMM CPU Sync lib implementation.
> +
> +  Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> +  SPDX-License-Identifier: BSD-2-Clause-Patent
> +
> +**/
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +typedef struct {
> +  ///
> +  /// Indicate how many CPU entered SMM.
> +  ///
> +  volatile UINT32  *Counter;
> +} SMM_CPU_SYNC_SEMAPHORE_GLOBAL;
> +
> +typedef struct {
> +  ///
> +  /// Used for control each CPU continue run or wait for signal
> +  ///
> +  volatile UINT32  *Run;
> +} SMM_CPU_SYNC_SEMAPHORE_CPU;
> +
> +typedef struct {
> +  ///
> +  /// All global semaphores' pointer in SMM CPU Sync
> +  ///
> +  SMM_CPU_SYNC_SEMAPHORE_GLOBAL  *GlobalSem;
> +  ///
> +  /// All semaphores for each processor in SMM CPU Sync
> +  ///
> +  SMM_CPU_SYNC_SEMAPHORE_CPU  *CpuSem;
> +  ///
> +  /// The number of processors in the system.
> +  /// This does not indicate the number of processors that entered SMM.
> +  ///
> +  UINTN  NumberOfCpus;
> +  ///
> +  /// Address of global and each CPU semaphores
> +  ///
> +  UINTN  *SemBlock;
> +  ///
> +  /// Size of global and each CPU semaphores
> +  ///
> +  UINTN  SemBlockPages;
> +} SMM_CPU_SYNC_CTX;
> +
> +/**
> +  Performs an atomic compare exchange operation to get semaphore.
> +  The compare exchange operation must be performed using MP safe
> +  mechanisms.
> +
> +  @param  Sem   IN:  32-bit unsigned integer
> +                OUT: original integer - 1
> +
> +  @return Original integer - 1
> +
> +**/
> +UINT32
> +InternalWaitForSemaphore (
> +  IN OUT volatile UINT32  *Sem
> +  )
> +{
> +  UINT32  Value;
> +
> +  for ( ; ;) {
> +    Value = *Sem;
> +    if ((Value != 0) &&
> +        (InterlockedCompareExchange32 (
> +           (UINT32 *)Sem,
> +           Value,
> +           Value - 1
> +           ) == Value))
> +    {
> +      break;
> +    }
> +
> +    CpuPause ();
> +  }
> +
> +  return Value - 1;
> +}
> +
> +/**
> +  Performs an atomic compare exchange operation to release semaphore.
> +  The compare exchange operation must be performed using MP safe
> +  mechanisms.
> +
> +  @param  Sem   IN:  32-bit unsigned integer
> +                OUT: original integer + 1
> +
> +  @return Original integer + 1
> +
> +**/
> +UINT32
> +InternalReleaseSemaphore (
> +  IN OUT volatile UINT32  *Sem
> +  )
> +{
> +  UINT32  Value;
> +
> +  do {
> +    Value = *Sem;
> +  } while (Value + 1 != 0 &&
> +           InterlockedCompareExchange32 (
> +             (UINT32 *)Sem,
> +             Value,
> +             Value + 1
> +             ) != Value);
> +
> +  return Value + 1;
> +}
> +
> +/**
> +  Performs an atomic compare exchange operation to lock semaphore.
> +  The compare exchange operation must be performed using MP safe
> +  mechanisms.
> +
> +  @param  Sem   IN:  32-bit unsigned integer
> +                OUT: -1
> +
> +  @return Original integer
> +
> +**/
> +UINT32
> +InternalLockdownSemaphore (
> +  IN OUT volatile UINT32  *Sem
> +  )
> +{
> +  UINT32  Value;
> +
> +  do {
> +    Value = *Sem;
> +  } while (InterlockedCompareExchange32 (
> +             (UINT32 *)Sem,
> +             Value,
> +             (UINT32)-1
> +             ) != Value);
> +
> +  return Value;
> +}
> +
> +/**
> +  Creates and Init a new Smm Cpu Sync context.
> +
> +  @param[in]  NumberOfCpus   The number of processors in the system.
> +
> +  @return Pointer to an allocated Smm Cpu Sync context object.
> +          If the creation failed, returns NULL.
> +
> +**/
> +VOID *
> +EFIAPI
> +SmmCpuSyncContextInit (
> +  IN UINTN  NumberOfCpus
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +  UINTN             CtxSize;
> +  UINTN             OneSemSize;
> +  UINTN             GlobalSemSize;
> +  UINTN             CpuSemSize;
> +  UINTN             TotalSemSize;
> +  UINTN             SemAddr;
> +  UINTN             CpuIndex;
> +
> +  Ctx = NULL;
> +
> +  //
> +  // Allocate for the Ctx
> +  //
> +  CtxSize = sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) + sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) * NumberOfCpus;
> +  Ctx     = (SMM_CPU_SYNC_CTX *)AllocatePages (EFI_SIZE_TO_PAGES (CtxSize));
> +  ASSERT (Ctx != NULL);
> +  Ctx->GlobalSem    = (SMM_CPU_SYNC_SEMAPHORE_GLOBAL *)((UINT8 *)Ctx + sizeof (SMM_CPU_SYNC_CTX));
> +  Ctx->CpuSem       = (SMM_CPU_SYNC_SEMAPHORE_CPU *)((UINT8 *)Ctx + sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL));
> +  Ctx->NumberOfCpus = NumberOfCpus;
> +
> +  //
> +  // Allocate for Semaphores in the Ctx
> +  //
> +  OneSemSize    = GetSpinLockProperties ();
> +  GlobalSemSize = (sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) / sizeof (VOID *)) * OneSemSize;
> +  CpuSemSize    = (sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) / sizeof (VOID *)) * OneSemSize * NumberOfCpus;
> +  TotalSemSize  = GlobalSemSize + CpuSemSize;
> +  DEBUG ((DEBUG_INFO, "[%a] - One Semaphore Size    = 0x%x\n", __FUNCTION__, OneSemSize));
> +  DEBUG ((DEBUG_INFO, "[%a] - Total Semaphores Size = 0x%x\n", __FUNCTION__, TotalSemSize));
> +  Ctx->SemBlockPages = EFI_SIZE_TO_PAGES (TotalSemSize);
> +  Ctx->SemBlock      = AllocatePages (Ctx->SemBlockPages);
> +  ASSERT (Ctx->SemBlock != NULL);
> +  ZeroMem (Ctx->SemBlock, TotalSemSize);
> +
> +  SemAddr = (UINTN)Ctx->SemBlock;
> +
> +  //
> +  // Assign Global Semaphore pointer
> +  //
> +  Ctx->GlobalSem->Counter  = (UINT32 *)SemAddr;
> +  *Ctx->GlobalSem->Counter = 0;
> +  DEBUG ((DEBUG_INFO, "[%a] - Ctx->GlobalSem->Counter Address: 0x%08x\n", __FUNCTION__, (UINTN)Ctx->GlobalSem->Counter));
> +
> +  SemAddr += GlobalSemSize;
> +
> +  //
> +  // Assign CPU Semaphore pointer
> +  //
> +  for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
> +    Ctx->CpuSem[CpuIndex].Run  = (UINT32 *)(SemAddr + (CpuSemSize / NumberOfCpus) * CpuIndex);
> +    *Ctx->CpuSem[CpuIndex].Run = 0;
> +    DEBUG ((DEBUG_INFO, "[%a] - Ctx->CpuSem[%d].Run Address: 0x%08x\n", __FUNCTION__, CpuIndex, (UINTN)Ctx->CpuSem[CpuIndex].Run));
> +  }
> +
> +  //
> +  // Return the new created Smm Cpu Sync context
> +  //
> +  return (VOID *)Ctx;
> +}
> +
> +/**
> +  Deinit an allocated Smm Cpu Sync context object.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object to be released.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextDeinit (
> +  IN VOID  *SmmCpuSyncCtx
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +  UINTN             CtxSize;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  CtxSize = sizeof (SMM_CPU_SYNC_CTX) + sizeof (SMM_CPU_SYNC_SEMAPHORE_GLOBAL) + sizeof (SMM_CPU_SYNC_SEMAPHORE_CPU) * (Ctx->NumberOfCpus);
> +
> +  FreePages (Ctx->SemBlock, Ctx->SemBlockPages);
> +
> +  FreePages (Ctx, EFI_SIZE_TO_PAGES (CtxSize));
> +}
> +
> +/**
> +  Reset Smm Cpu Sync context object.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object to be released.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextReset (
> +  IN VOID  *SmmCpuSyncCtx
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  *Ctx->GlobalSem->Counter = 0;
> +}
> +
> +/**
> +  Get current arrived CPU count.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object to be released.
> +
> +  @return Current number of arrived CPU count.
> +          -1: indicate the door has been locked.
> +
> +**/
> +UINT32
> +EFIAPI
> +SmmCpuSyncGetArrivedCpuCount (
> +  IN VOID  *SmmCpuSyncCtx
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  if (*Ctx->GlobalSem->Counter < 0) {
> +    return (UINT32)-1;
> +  }
> +
> +  return *Ctx->GlobalSem->Counter;
> +}
> +
> +/**
> +  Performs an atomic operation to check in CPU.
> +  Check in CPU successfully if the returned arrival CPU count value is
> +  positive, otherwise indicate the door has been locked, the CPU can
> +  not checkin.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm CPU Sync context object to be released.
> +  @param[in]  CpuIndex        Pointer to the CPU Index to checkin.
> +
> +  @return Positive value (>0):     CPU arrival count number after check in CPU successfully.
> +          Nonpositive value (<=0): check in CPU failure.
> +
> +**/
> +INT32
> +EFIAPI
> +SmmCpuSyncCheckInCpu (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  return (INT32)InternalReleaseSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> +  Performs an atomic operation to check out CPU.
> +  Check out CPU successfully if the returned arrival CPU count value is
> +  nonnegative, otherwise indicate the door has been locked, the CPU can
> +  not checkout.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object to be released.
> +  @param[in]  CpuIndex        Pointer to the Cpu Index to checkout.
> +
> +  @return Nonnegative value (>=0): CPU arrival count number after check out CPU successfully.
> +          Negative value (<0):     Check out CPU failure.
> +
> +**/
> +INT32
> +EFIAPI
> +SmmCpuSyncCheckOutCpu (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  return (INT32)InternalWaitForSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> +  Performs an atomic operation lock door for CPU checkin or checkout.
> +  With this function, CPU can not check in via SmmCpuSyncCheckInCpu () or
> +  check out via SmmCpuSyncCheckOutCpu ().
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object to be released.
> +  @param[in]  CpuIndex        Pointer to the Cpu Index to lock door.
> +
> +  @return CPU arrival count number.
> +
> +**/
> +UINT32
> +EFIAPI
> +SmmCpuSyncLockDoor (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  return InternalLockdownSemaphore (Ctx->GlobalSem->Counter);
> +}
> +
> +/**
> +  Used for BSP to wait all APs.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object.
> +  @param[in]  NumberOfAPs     Number of APs need to wait.
> +  @param[in]  BspIndex        Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForAllAPs (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  NumberOfAPs,
> +  IN UINTN  BspIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  while (NumberOfAPs-- > 0) {
> +    InternalWaitForSemaphore (Ctx->CpuSem[BspIndex].Run);
> +  }
> +}
> +
> +/**
> +  Used for BSP to release one AP.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object.
> +  @param[in]  CpuIndex        Pointer to the Cpu Index, indicate which AP need to be released.
> +  @param[in]  BspIndex        Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseOneAp (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex,
> +  IN UINTN  BspIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  InternalReleaseSemaphore (Ctx->CpuSem[CpuIndex].Run);
> +}
> +
> +/**
> +  Used for AP to wait BSP.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object.
> +  @param[in]  CpuIndex        Pointer to the Cpu Index, indicate which AP wait BSP.
> +  @param[in]  BspIndex        Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForBsp (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex,
> +  IN UINTN  BspIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  InternalWaitForSemaphore (Ctx->CpuSem[CpuIndex].Run);
> +}
> +
> +/**
> +  Used for AP to release BSP.
> +
> +  @param[in]  SmmCpuSyncCtx   Pointer to the Smm Cpu Sync context object.
> +  @param[in]  CpuIndex        Pointer to the Cpu Index, indicate which AP release BSP.
> +  @param[in]  BspIndex        Pointer to the BSP Index.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseBsp (
> +  IN VOID   *SmmCpuSyncCtx,
> +  IN UINTN  CpuIndex,
> +  IN UINTN  BspIndex
> +  )
> +{
> +  SMM_CPU_SYNC_CTX  *Ctx;
> +
> +  ASSERT (SmmCpuSyncCtx != NULL);
> +  Ctx = (SMM_CPU_SYNC_CTX *)SmmCpuSyncCtx;
> +
> +  InternalReleaseSemaphore (Ctx->CpuSem[BspIndex].Run);
> +}
> diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> new file mode 100644
> index 0000000000..86475ce64b
> --- /dev/null
> +++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> @@ -0,0 +1,38 @@
> +## @file
> +# SMM CPU Synchronization lib.
> +#
> +# This is SMM CPU Synchronization lib used for SMM CPU sync operations.
> +#
> +# Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +##
> +
> +[Defines]
> +  INF_VERSION                    = 0x00010005
> +  BASE_NAME                      = SmmCpuSyncLib
> +  FILE_GUID                      = 1ca1bc1a-16a4-46ef-956a-ca500fd3381f
> +  MODULE_TYPE                    = DXE_SMM_DRIVER
> +  LIBRARY_CLASS                  = SmmCpuSyncLib|DXE_SMM_DRIVER
> +
> +[Sources]
> +  SmmCpuSyncLib.c
> +
> +[Packages]
> +  MdePkg/MdePkg.dec
> +  MdeModulePkg/MdeModulePkg.dec
> +  UefiCpuPkg/UefiCpuPkg.dec
> +
> +[LibraryClasses]
> +  UefiLib
> +  BaseLib
> +  DebugLib
> +  PrintLib
> +  SynchronizationLib
> +  BaseMemoryLib
> +  SmmServicesTableLib
> +  MemoryAllocationLib
> +
> +[Pcd]
> +
> +[Protocols]
> diff --git a/UefiCpuPkg/UefiCpuLibs.dsc.inc b/UefiCpuPkg/UefiCpuLibs.dsc.inc
> new file mode 100644
> index 0000000000..6b9b362729
> --- /dev/null
> +++ b/UefiCpuPkg/UefiCpuLibs.dsc.inc
> @@ -0,0 +1,15 @@
> +## @file
> +# UefiCpu DSC include file for [LibraryClasses*] section of all Architectures.
> +#
> +# This file can be included to the [LibraryClasses*] section(s) of a platform DSC file
> +# by using "!include UefiCpuPkg/UefiCpuLibs.dsc.inc" to specify the library instances
> +# of some EDKII basic/common library classes in UefiCpuPkg.
> +#
> +# Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> +#
> +# SPDX-License-Identifier: BSD-2-Clause-Patent
> +#
> +##
> +
> +[LibraryClasses]
> +  SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
> \ No newline at end of file
> diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
> index 074fd77461..338f18eb98 100644
> --- a/UefiCpuPkg/UefiCpuPkg.dsc
> +++ b/UefiCpuPkg/UefiCpuPkg.dsc
> @@ -21,10 +21,11 @@
>  #
>  # External libraries to build package
>  #
>
>  !include MdePkg/MdeLibs.dsc.inc
> +!include UefiCpuPkg/UefiCpuLibs.dsc.inc
>
>  [LibraryClasses]
>    BaseLib|MdePkg/Library/BaseLib/BaseLib.inf
>    BaseMemoryLib|MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
>    CpuLib|MdePkg/Library/BaseCpuLib/BaseCpuLib.inf