* [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib
@ 2023-12-15 9:55 Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP Wu, Jiaxin
` (7 more replies)
0 siblings, 8 replies; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
This patch series refines the SMM CPU Sync flow. After the
refinement, the sync logic is abstracted into the SmmCpuSyncLib
library class so that any user can provide a different SMM CPU Sync
implementation.
Jiaxin Wu (8):
UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP
UefiCpuPkg: Adds SmmCpuSyncLib library class
MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance
UefiCpuPkg: Implements SmmCpuSyncLib library instance
OvmfPkg: Specifies SmmCpuSyncLib instance
UefiPayloadPkg: Specifies SmmCpuSyncLib instance
UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement
UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib
MdePkg/MdeLibs.dsc.inc | 1 +
OvmfPkg/CloudHv/CloudHvX64.dsc | 1 +
OvmfPkg/OvmfPkgIa32.dsc | 1 +
OvmfPkg/OvmfPkgIa32X64.dsc | 1 +
OvmfPkg/OvmfPkgX64.dsc | 1 +
UefiCpuPkg/Include/Library/SmmCpuSyncLib.h | 290 +++++++++
UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c | 651 +++++++++++++++++++++
UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf | 34 ++
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 234 +++-----
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
UefiCpuPkg/UefiCpuPkg.dec | 3 +
UefiCpuPkg/UefiCpuPkg.dsc | 2 +
UefiPayloadPkg/UefiPayloadPkg.dsc | 1 +
14 files changed, 1056 insertions(+), 171 deletions(-)
create mode 100644 UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class Wu, Jiaxin
` (6 subsequent siblings)
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Eric Dong, Ray Ni, Zeng Star, Rahul Kumar,
Gerd Hoffmann
This patch defines 3 new functions (WaitForBsp, ReleaseBsp and
ReleaseOneAp) used for the semaphore sync between the BSP and APs.
With the change, the BSP and AP sync flow becomes easy to follow,
as shown below (a short sketch follows the diagram):
BSP: ReleaseAllAPs or ReleaseOneAp --> AP: WaitForBsp
BSP: WaitForAllAPs <-- AP: ReleaseBsp
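For illustration, a minimal sketch of one BSP/AP handshake using the
renamed helpers; the pairing shown is illustrative, and the Run
semaphores are the ones already used in MpService.c:

  // BSP side: signal one AP, then wait for its acknowledgement.
  ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);  // unblocks the AP's WaitForBsp()
  WaitForAllAPs (1);                                     // consumes one ReleaseBsp() from the AP

  // AP side: wait for the BSP's signal, do the work, then acknowledge.
  WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
  // ... AP work, e.g. MtrrGetAllMtrrs (&Mtrrs) ...
  ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);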
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 72 ++++++++++++++++++++++++++++-------
1 file changed, 58 insertions(+), 14 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index b279f5dfcc..54542262a2 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -120,10 +120,11 @@ LockdownSemaphore (
return Value;
}
/**
+ Used for BSP to wait all APs.
Wait all APs to performs an atomic compare exchange operation to release semaphore.
@param NumberOfAPs AP number
**/
@@ -139,10 +140,11 @@ WaitForAllAPs (
WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
}
}
/**
+ Used for BSP to release all APs.
Performs an atomic compare exchange operation to release semaphore
for each AP.
**/
VOID
@@ -157,10 +159,52 @@ ReleaseAllAPs (
ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
}
}
}
+/**
+ Used for BSP to release one AP.
+
+ @param ApSem IN: 32-bit unsigned integer
+ OUT: original integer + 1
+**/
+VOID
+ReleaseOneAp (
+ IN OUT volatile UINT32 *ApSem
+ )
+{
+ ReleaseSemaphore (ApSem);
+}
+
+/**
+ Used for AP to wait BSP.
+
+ @param ApSem IN: 32-bit unsigned integer
+ OUT: original integer - 1
+**/
+VOID
+WaitForBsp (
+ IN OUT volatile UINT32 *ApSem
+ )
+{
+ WaitForSemaphore (ApSem);
+}
+
+/**
+ Used for AP to release BSP.
+
+ @param BspSem IN: 32-bit unsigned integer
+ OUT: original integer + 1
+**/
+VOID
+ReleaseBsp (
+ IN OUT volatile UINT32 *BspSem
+ )
+{
+ ReleaseSemaphore (BspSem);
+}
+
/**
Check whether the index of CPU perform the package level register
programming during System Management Mode initialization.
The index of Processor specified by mPackageFirstThreadIndex[PackageIndex]
@@ -632,11 +676,11 @@ BSPHandler (
// Signal all APs it's time for backup MTRRs
//
ReleaseAllAPs ();
//
- // WaitForSemaphore() may wait for ever if an AP happens to enter SMM at
+ // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
// exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
// to a large enough value to avoid this situation.
// Note: For HT capable CPUs, threads within a core share the same set of MTRRs.
// We do the backup first and then set MTRR to avoid race condition for threads
// in the same core.
@@ -652,11 +696,11 @@ BSPHandler (
// Let all processors program SMM MTRRs together
//
ReleaseAllAPs ();
//
- // WaitForSemaphore() may wait for ever if an AP happens to enter SMM at
+ // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
// exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
// to a large enough value to avoid this situation.
//
ReplaceOSMtrrs (CpuIndex);
@@ -898,50 +942,50 @@ APHandler (
if ((SyncMode == SmmCpuSyncModeTradition) || SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Notify BSP of arrival at this point
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
}
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Wait for the signal from BSP to backup MTRRs
//
- WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
//
// Backup OS MTRRs
//
MtrrGetAllMtrrs (&Mtrrs);
//
// Signal BSP the completion of this AP
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
//
// Wait for BSP's signal to program MTRRs
//
- WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
//
// Replace OS MTRRs with SMI MTRRs
//
ReplaceOSMtrrs (CpuIndex);
//
// Signal BSP the completion of this AP
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
}
while (TRUE) {
//
// Wait for something to happen
//
- WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
//
// Check if BSP wants to exit SMM
//
if (!(*mSmmMpSyncData->InsideSmm)) {
@@ -977,16 +1021,16 @@ APHandler (
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Notify BSP the readiness of this AP to program MTRRs
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
//
// Wait for the signal from BSP to program MTRRs
//
- WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
//
// Restore OS MTRRs
//
SmmCpuFeaturesReenableSmrr ();
@@ -994,26 +1038,26 @@ APHandler (
}
//
// Notify BSP the readiness of this AP to Reset states/semaphore for this processor
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
//
// Wait for the signal from BSP to Reset states/semaphore for this processor
//
- WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
//
// Reset states/semaphore for this processor
//
*(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
//
// Notify BSP the readiness of this AP to exit SMM
//
- ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
+ ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
}
/**
Checks whether the input token is the current used token.
@@ -1277,11 +1321,11 @@ InternalSmmStartupThisAp (
mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus;
if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) {
*mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY;
}
- ReleaseSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);
if (Token == NULL) {
AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
}
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance Wu, Jiaxin
` (5 subsequent siblings)
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann,
Rahul Kumar
Intel is planning to provide a different SMM CPU Sync implementation,
along with some specific registers, to improve SMI performance; hence
an SmmCpuSyncLib library class is needed.
This patch:
1. Adds the SmmCpuSyncLib library class in UefiCpuPkg.dec.
2. Adds the SmmCpuSyncLib.h function declaration header file.
The new SmmCpuSyncLib provides 3 sets of APIs:
1. ContextInit/ContextDeinit/ContextReset:
ContextInit() is called in the driver's entry point to allocate and
initialize the SMM CPU Sync context. ContextDeinit() is called in the
driver's unload function to deinitialize the SMM CPU Sync context.
ContextReset() is called before a CPU exits the SMI, which allows CPUs
to check in to the next SMI from this point.
2. GetArrivedCpuCount/CheckInCpu/CheckOutCpu/LockDoor:
When an SMI happens, all processors including the BSP enter SMM by
calling CheckInCpu(). The elected BSP calls LockDoor() so that
CheckInCpu() returns an error code after that. CheckOutCpu() can be
called in the error handling flow by a CPU that called CheckInCpu()
earlier. GetArrivedCpuCount() returns the number of checked-in CPUs.
3. WaitForAPs/ReleaseOneAp/WaitForBsp/ReleaseBsp:
WaitForAPs() & ReleaseOneAp() are called from the BSP to wait for a
number of APs and to release one specific AP. WaitForBsp() &
ReleaseBsp() are called from the APs to wait for and release the BSP.
These 4 APIs are used to synchronize the execution flow between the
BSP and the APs. The BSP and AP sync flow is easy to follow, as shown
below (a usage sketch follows the diagram):
BSP: ReleaseOneAp --> AP: WaitForBsp
BSP: WaitForAPs <-- AP: ReleaseBsp
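For illustration only, a rough sketch of how a consumer might drive
these APIs on the BSP side; the variable names, the single-AP pairing
and the placement are hypothetical, and error handling is omitted:

  SMM_CPU_SYNC_CONTEXT  *Ctx;
  UINTN                 Arrived;

  SmmCpuSyncContextInit (NumberOfCpus, &Ctx);      // driver entry point
  ...
  SmmCpuSyncCheckInCpu (Ctx, BspIndex);            // every CPU checks in on SMI entry
  SmmCpuSyncLockDoor (Ctx, BspIndex, &Arrived);    // elected BSP closes the door
  SmmCpuSyncReleaseOneAp (Ctx, ApIndex, BspIndex); // AP side: SmmCpuSyncWaitForBsp (Ctx, ApIndex, BspIndex)
  SmmCpuSyncWaitForAPs (Ctx, 1, BspIndex);         // AP side: SmmCpuSyncReleaseBsp (Ctx, ApIndex, BspIndex)
  SmmCpuSyncContextReset (Ctx);                    // one CPU resets before SMI exit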
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/Include/Library/SmmCpuSyncLib.h | 290 +++++++++++++++++++++++++++++
UefiCpuPkg/UefiCpuPkg.dec | 3 +
2 files changed, 293 insertions(+)
create mode 100644 UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
diff --git a/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h b/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
new file mode 100644
index 0000000000..4d273095c9
--- /dev/null
+++ b/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
@@ -0,0 +1,290 @@
+/** @file
+ Library that provides SMM CPU Sync related operations.
+
+ The lib provides 3 sets of APIs:
+ 1. ContextInit/ContextDeinit/ContextReset:
+
+ ContextInit() is called in driver's entrypoint to allocate and initialize the SMM CPU Sync context.
+ ContextDeinit() is called in driver's unload function to deinitialize the SMM CPU Sync context.
+ ContextReset() is called by one of CPUs after all CPUs are ready to exit SMI, which allows CPU to
+ check into the next SMI from this point.
+
+ 2. GetArrivedCpuCount/CheckInCpu/CheckOutCpu/LockDoor:
+ When SMI happens, all processors including BSP enter to SMM mode by calling CheckInCpu().
+ CheckOutCpu() can be called in error handling flow for the CPU who calls CheckInCpu() earlier.
+ The elected BSP calls LockDoor() so that CheckInCpu() and CheckOutCpu() will return the error code after that.
+ GetArrivedCpuCount() returns the number of checked-in CPUs.
+
+ 3. WaitForAPs/ReleaseOneAp/WaitForBsp/ReleaseBsp
+ WaitForAPs() & ReleaseOneAp() are called from BSP to wait the number of APs and release one specific AP.
+ WaitForBsp() & ReleaseBsp() are called from APs to wait and release BSP.
+ The 4 APIs are used to synchronize the running flow among BSP and APs.
+ BSP and AP Sync flow can be easy understand as below:
+ BSP: ReleaseOneAp --> AP: WaitForBsp
+ BSP: WaitForAPs <-- AP: ReleaseBsp
+
+ Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef SMM_CPU_SYNC_LIB_H_
+#define SMM_CPU_SYNC_LIB_H_
+
+#include <Uefi/UefiBaseType.h>
+
+//
+// Opaque structure for SMM CPU Sync context.
+//
+typedef struct SMM_CPU_SYNC_CONTEXT SMM_CPU_SYNC_CONTEXT;
+
+/**
+ Create and initialize the SMM CPU Sync context. It is to allocate and initialize the
+ SMM CPU Sync context.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in] NumberOfCpus The number of Logical Processors in the system.
+ @param[out] Context Pointer to the new created and initialized SMM CPU Sync context object.
+ NULL will be returned if any error happen during init.
+
+ @retval RETURN_SUCCESS The SMM CPU Sync context was successful created and initialized.
+ @retval RETURN_OUT_OF_RESOURCES There are not enough resources available to create and initialize SMM CPU Sync context.
+ @retval RETURN_BUFFER_TOO_SMALL Overflow happen
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncContextInit (
+ IN UINTN NumberOfCpus,
+ OUT SMM_CPU_SYNC_CONTEXT **Context
+ );
+
+/**
+ Deinit an allocated SMM CPU Sync context. The resources allocated in SmmCpuSyncContextInit() will
+ be freed.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object to be deinitialized.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncContextDeinit (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context
+ );
+
+/**
+ Reset SMM CPU Sync context. SMM CPU Sync context will be reset to the initialized state.
+
+ This function is called by one of CPUs after all CPUs are ready to exit SMI, which allows CPU to
+ check into the next SMI from this point.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object to be reset.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncContextReset (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context
+ );
+
+/**
+ Get current number of arrived CPU in SMI.
+
+ BSP might need to know the current number of arrived CPU in SMI to make sure all APs
+ in SMI. This API can be for that purpose.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in] Context Pointer to the SMM CPU Sync context object.
+
+ @retval Current number of arrived CPU in SMI.
+
+**/
+UINTN
+EFIAPI
+SmmCpuSyncGetArrivedCpuCount (
+ IN SMM_CPU_SYNC_CONTEXT *Context
+ );
+
+/**
+ Performs an atomic operation to check in CPU.
+
+ When SMI happens, all processors including BSP enter to SMM mode by calling SmmCpuSyncCheckInCpu().
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Check in CPU index.
+
+ @retval RETURN_SUCCESS Check in CPU (CpuIndex) successfully.
+ @retval RETURN_ABORTED Check in CPU failed due to SmmCpuSyncLockDoor() has been called by one elected CPU.
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncCheckInCpu (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex
+ );
+
+/**
+ Performs an atomic operation to check out CPU.
+
+ This function can be called in error handling flow for the CPU who calls CheckInCpu() earlier.
+ The caller shall make sure the CPU specified by CpuIndex has already checked-in.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Check out CPU index.
+
+ @retval RETURN_SUCCESS Check out CPU (CpuIndex) successfully.
+ @retval RETURN_ABORTED Check out CPU failed due to SmmCpuSyncLockDoor() has been called by one elected CPU.
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncCheckOutCpu (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex
+ );
+
+/**
+ Performs an atomic operation lock door for CPU checkin and checkout. After this function:
+ CPU can not check in via SmmCpuSyncCheckInCpu().
+ CPU can not check out via SmmCpuSyncCheckOutCpu().
+
+ The CPU specified by CpuIndex is elected to lock door. The caller shall make sure the CpuIndex
+ is the actual CPU calling this function to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If CpuCount is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which CPU to lock door.
+ @param[out] CpuCount Number of arrived CPU in SMI after look door.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncLockDoor (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ OUT UINTN *CpuCount
+ );
+
+/**
+ Used by the BSP to wait for APs.
+
+ The number of APs need to be waited is specified by NumberOfAPs. The BSP is specified by BspIndex.
+ The caller shall make sure the BspIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The caller shall make sure the NumberOfAPs have already checked-in to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If NumberOfAPs >= All CPUs in system, then ASSERT().
+ If BspIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ Note:
+ This function is blocking mode, and it will return only after the number of APs released by
+ calling SmmCpuSyncReleaseBsp():
+ BSP: WaitForAPs <-- AP: ReleaseBsp
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] NumberOfAPs Number of APs need to be waited by BSP.
+ @param[in] BspIndex The BSP Index to wait for APs.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncWaitForAPs (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN NumberOfAPs,
+ IN UINTN BspIndex
+ );
+
+/**
+ Used by the BSP to release one AP.
+
+ The AP is specified by CpuIndex. The BSP is specified by BspIndex.
+ The caller shall make sure the BspIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The caller shall make sure the CpuIndex has already checked-in to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP need to be released.
+ @param[in] BspIndex The BSP Index to release AP.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncReleaseOneAp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ );
+
+/**
+ Used by the AP to wait BSP.
+
+ The AP is specified by CpuIndex.
+ The caller shall make sure the CpuIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The BSP is specified by BspIndex.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ Note:
+ This function is blocking mode, and it will return only after the AP released by
+ calling SmmCpuSyncReleaseOneAp():
+ BSP: ReleaseOneAp --> AP: WaitForBsp
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP wait BSP.
+ @param[in] BspIndex The BSP Index to be waited.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncWaitForBsp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ );
+
+/**
+ Used by the AP to release BSP.
+
+ The AP is specified by CpuIndex.
+ The caller shall make sure the CpuIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The BSP is specified by BspIndex.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP release BSP.
+ @param[in] BspIndex The BSP Index to be released.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncReleaseBsp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ );
+
+#endif
diff --git a/UefiCpuPkg/UefiCpuPkg.dec b/UefiCpuPkg/UefiCpuPkg.dec
index 61bd34ef17..cc785a3222 100644
--- a/UefiCpuPkg/UefiCpuPkg.dec
+++ b/UefiCpuPkg/UefiCpuPkg.dec
@@ -62,10 +62,13 @@
CpuPageTableLib|Include/Library/CpuPageTableLib.h
## @libraryclass Provides functions for manipulating smram savestate registers.
MmSaveStateLib|Include/Library/MmSaveStateLib.h
+ ## @libraryclass Provides functions for SMM CPU Sync Operation.
+ SmmCpuSyncLib|Include/Library/SmmCpuSyncLib.h
+
[LibraryClasses.RISCV64]
## @libraryclass Provides functions to manage MMU features on RISCV64 CPUs.
##
RiscVMmuLib|Include/Library/BaseRiscVMmuLib.h
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-15 16:30 ` Michael D Kinney
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance Wu, Jiaxin
` (4 subsequent siblings)
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Michael D Kinney, Liming Gao, Zhiguang Liu, Laszlo Ersek, Ray Ni,
Zeng Star
This patch adds a SafeIntLib instance to MdeLibs.dsc.inc. SafeIntLib
is consumed by the SmmCpuSyncLib instance later in this series for
overflow-checked size calculations.
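As a minimal usage sketch of the overflow-checked math this enables
(mirroring what the SmmCpuSyncLib instance in this series does; the
types are taken from that instance):

  UINTN          ContextSize;
  RETURN_STATUS  Status;

  Status = SafeUintnMult (NumberOfCpus, sizeof (SMM_CPU_SYNC_SEMAPHORE_FOR_EACH_CPU), &ContextSize);
  if (!RETURN_ERROR (Status)) {
    Status = SafeUintnAdd (ContextSize, sizeof (SMM_CPU_SYNC_CONTEXT), &ContextSize);
  }
  // On overflow, Status is an error instead of the size silently wrapping.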
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Zhiguang Liu <zhiguang.liu@intel.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
MdePkg/MdeLibs.dsc.inc | 1 +
1 file changed, 1 insertion(+)
diff --git a/MdePkg/MdeLibs.dsc.inc b/MdePkg/MdeLibs.dsc.inc
index 4580481cb5..deb35c1a18 100644
--- a/MdePkg/MdeLibs.dsc.inc
+++ b/MdePkg/MdeLibs.dsc.inc
@@ -14,5 +14,6 @@
[LibraryClasses]
ArmTrngLib|MdePkg/Library/BaseArmTrngLibNull/BaseArmTrngLibNull.inf
RegisterFilterLib|MdePkg/Library/RegisterFilterLibNull/RegisterFilterLibNull.inf
CpuLib|MdePkg/Library/BaseCpuLib/BaseCpuLib.inf
SmmCpuRendezvousLib|MdePkg/Library/SmmCpuRendezvousLibNull/SmmCpuRendezvousLibNull.inf
+ SafeIntLib|MdePkg/Library/BaseSafeIntLib/BaseSafeIntLib.inf
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
` (2 preceding siblings ...)
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-18 9:23 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance Wu, Jiaxin
` (3 subsequent siblings)
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann,
Rahul Kumar
This patch implements an SmmCpuSyncLib library instance. The instance
follows the sync implementation and behavior of the existing SMM CPU
driver (PiSmmCpuDxeSmm):
1. Abstracts the Counter and Run semaphores into the SMM CPU Sync
context.
2. Abstracts the CPU arrival count operations into
SmmCpuSyncGetArrivedCpuCount(), SmmCpuSyncCheckInCpu(),
SmmCpuSyncCheckOutCpu() and SmmCpuSyncLockDoor(). The implementation
is aligned with the existing SMM CPU driver.
3. Abstracts the SMM CPU Sync flow to:
BSP: SmmCpuSyncReleaseOneAp --> AP: SmmCpuSyncWaitForBsp
BSP: SmmCpuSyncWaitForAPs <-- AP: SmmCpuSyncReleaseBsp
The semaphore release & wait during the sync flow is the same as in
the existing SMM CPU driver.
4. Operates on the Counter and Run semaphores with the same atomic
compare exchange.
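A short sketch of the check-in/lock-door behavior this instance
provides (Ctx and the CPU indices are hypothetical placeholders; the
MAX_UINT32 lock value and return codes match the implementation below):

  UINTN  Arrived;

  SmmCpuSyncCheckInCpu (Ctx, CpuIndex);          // atomically increments the shared CpuCount semaphore
  SmmCpuSyncLockDoor (Ctx, BspIndex, &Arrived);  // CpuCount is locked to MAX_UINT32, count recorded
  //
  // A CPU arriving after the door is locked gets RETURN_ABORTED:
  //
  if (SmmCpuSyncCheckInCpu (Ctx, LateCpuIndex) == RETURN_ABORTED) {
    // Handle the late CPU outside of the normal sync flow.
  }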
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c | 651 +++++++++++++++++++++
UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf | 34 ++
UefiCpuPkg/UefiCpuPkg.dsc | 2 +
3 files changed, 687 insertions(+)
create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
create mode 100644 UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
new file mode 100644
index 0000000000..1d03a4e95b
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.c
@@ -0,0 +1,651 @@
+/** @file
+ SMM CPU Sync lib implementation.
+
+ The lib provides 3 sets of APIs:
+ 1. ContextInit/ContextDeinit/ContextReset:
+
+ ContextInit() is called in driver's entrypoint to allocate and initialize the SMM CPU Sync context.
+ ContextDeinit() is called in driver's unload function to deinitialize the SMM CPU Sync context.
+ ContextReset() is called by one of CPUs after all CPUs are ready to exit SMI, which allows CPU to
+ check into the next SMI from this point.
+
+ 2. GetArrivedCpuCount/CheckInCpu/CheckOutCpu/LockDoor:
+ When SMI happens, all processors including BSP enter to SMM mode by calling CheckInCpu().
+ CheckOutCpu() can be called in error handling flow for the CPU who calls CheckInCpu() earlier.
+ The elected BSP calls LockDoor() so that CheckInCpu() and CheckOutCpu() will return the error code after that.
+ GetArrivedCpuCount() returns the number of checked-in CPUs.
+
+ 3. WaitForAPs/ReleaseOneAp/WaitForBsp/ReleaseBsp
+ WaitForAPs() & ReleaseOneAp() are called from BSP to wait the number of APs and release one specific AP.
+ WaitForBsp() & ReleaseBsp() are called from APs to wait and release BSP.
+ The 4 APIs are used to synchronize the running flow among BSP and APs.
+ BSP and AP Sync flow can be easy understand as below:
+ BSP: ReleaseOneAp --> AP: WaitForBsp
+ BSP: WaitForAPs <-- AP: ReleaseBsp
+
+ Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include <Library/BaseLib.h>
+#include <Library/DebugLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/SafeIntLib.h>
+#include <Library/SmmCpuSyncLib.h>
+#include <Library/SynchronizationLib.h>
+#include <Uefi.h>
+
+///
+/// The implementation shall place one semaphore on exclusive cache line for good performance.
+///
+typedef volatile UINT32 SMM_CPU_SYNC_SEMAPHORE;
+
+typedef struct {
+ ///
+ /// Used for control each CPU continue run or wait for signal
+ ///
+ SMM_CPU_SYNC_SEMAPHORE *Run;
+} SMM_CPU_SYNC_SEMAPHORE_FOR_EACH_CPU;
+
+struct SMM_CPU_SYNC_CONTEXT {
+ ///
+ /// Indicate all CPUs in the system.
+ ///
+ UINTN NumberOfCpus;
+ ///
+ /// Address of semaphores.
+ ///
+ VOID *SemBuffer;
+ ///
+ /// Size of semaphores.
+ ///
+ UINTN SemBufferPages;
+ ///
+ /// Indicate CPUs entered SMM after lock door.
+ ///
+ UINTN LockedCpuCount;
+ ///
+ /// Indicate CPUs entered SMM before lock door.
+ ///
+ SMM_CPU_SYNC_SEMAPHORE *CpuCount;
+ ///
+ /// Define an array of structure for each CPU semaphore due to the size alignment
+ /// requirement. With the array of structure for each CPU semaphore, it's easy to
+ /// reach the specific CPU with CPU Index for its own semaphore access: CpuSem[CpuIndex].
+ ///
+ SMM_CPU_SYNC_SEMAPHORE_FOR_EACH_CPU CpuSem[];
+};
+
+/**
+ Performs an atomic compare exchange operation to get semaphore.
+ The compare exchange operation must be performed using MP safe
+ mechanisms.
+
+ @param[in,out] Sem IN: 32-bit unsigned integer
+ OUT: original integer - 1 if Sem is not locked.
+ OUT: original integer if Sem is locked (MAX_UINT32).
+
+ @retval Original integer - 1 if Sem is not locked.
+ Original integer if Sem is locked (MAX_UINT32).
+
+**/
+STATIC
+UINT32
+InternalWaitForSemaphore (
+ IN OUT volatile UINT32 *Sem
+ )
+{
+ UINT32 Value;
+
+ for ( ; ;) {
+ Value = *Sem;
+ if (Value == MAX_UINT32) {
+ return Value;
+ }
+
+ if ((Value != 0) &&
+ (InterlockedCompareExchange32 (
+ (UINT32 *)Sem,
+ Value,
+ Value - 1
+ ) == Value))
+ {
+ break;
+ }
+
+ CpuPause ();
+ }
+
+ return Value - 1;
+}
+
+/**
+ Performs an atomic compare exchange operation to release semaphore.
+ The compare exchange operation must be performed using MP safe
+ mechanisms.
+
+ @param[in,out] Sem IN: 32-bit unsigned integer
+ OUT: original integer + 1 if Sem is not locked.
+ OUT: original integer if Sem is locked (MAX_UINT32).
+
+ @retval Original integer + 1 if Sem is not locked.
+ Original integer if Sem is locked (MAX_UINT32).
+
+**/
+STATIC
+UINT32
+InternalReleaseSemaphore (
+ IN OUT volatile UINT32 *Sem
+ )
+{
+ UINT32 Value;
+
+ do {
+ Value = *Sem;
+ } while (Value + 1 != 0 &&
+ InterlockedCompareExchange32 (
+ (UINT32 *)Sem,
+ Value,
+ Value + 1
+ ) != Value);
+
+ if (Value == MAX_UINT32) {
+ return Value;
+ }
+
+ return Value + 1;
+}
+
+/**
+ Performs an atomic compare exchange operation to lock semaphore.
+ The compare exchange operation must be performed using MP safe
+ mechanisms.
+
+ @param[in,out] Sem IN: 32-bit unsigned integer
+ OUT: -1
+
+ @retval Original integer
+
+**/
+STATIC
+UINT32
+InternalLockdownSemaphore (
+ IN OUT volatile UINT32 *Sem
+ )
+{
+ UINT32 Value;
+
+ do {
+ Value = *Sem;
+ } while (InterlockedCompareExchange32 (
+ (UINT32 *)Sem,
+ Value,
+ (UINT32)-1
+ ) != Value);
+
+ return Value;
+}
+
+/**
+ Create and initialize the SMM CPU Sync context. It is to allocate and initialize the
+ SMM CPU Sync context.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in] NumberOfCpus The number of Logical Processors in the system.
+ @param[out] Context Pointer to the new created and initialized SMM CPU Sync context object.
+ NULL will be returned if any error happen during init.
+
+ @retval RETURN_SUCCESS The SMM CPU Sync context was successful created and initialized.
+ @retval RETURN_OUT_OF_RESOURCES There are not enough resources available to create and initialize SMM CPU Sync context.
+ @retval RETURN_BUFFER_TOO_SMALL Overflow happen
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncContextInit (
+ IN UINTN NumberOfCpus,
+ OUT SMM_CPU_SYNC_CONTEXT **Context
+ )
+{
+ RETURN_STATUS Status;
+ UINTN ContextSize;
+ UINTN CacheLineSize;
+ UINTN OneSemSize;
+ UINTN NumSem;
+ UINTN TotalSemSize;
+ UINTN SemAddr;
+ UINTN CpuIndex;
+ SMM_CPU_SYNC_SEMAPHORE_FOR_EACH_CPU *CpuSem;
+
+ ASSERT (Context != NULL);
+
+ //
+ // Calculate ContextSize
+ //
+ Status = SafeUintnMult (NumberOfCpus, sizeof (SMM_CPU_SYNC_SEMAPHORE_FOR_EACH_CPU), &ContextSize);
+ if (RETURN_ERROR (Status)) {
+ return Status;
+ }
+
+ Status = SafeUintnAdd (ContextSize, sizeof (SMM_CPU_SYNC_CONTEXT), &ContextSize);
+ if (RETURN_ERROR (Status)) {
+ return Status;
+ }
+
+ //
+ // Allocate Buffer for Context
+ //
+ *Context = AllocatePool (ContextSize);
+ if (*Context == NULL) {
+ return RETURN_OUT_OF_RESOURCES;
+ }
+
+ (*Context)->LockedCpuCount = 0;
+
+ //
+ // Save NumberOfCpus
+ //
+ (*Context)->NumberOfCpus = NumberOfCpus;
+
+ //
+ // Calculate total semaphore size
+ //
+ CacheLineSize = GetSpinLockProperties ();
+ OneSemSize = ALIGN_VALUE (sizeof (SMM_CPU_SYNC_SEMAPHORE), CacheLineSize);
+
+ Status = SafeUintnAdd (1, NumberOfCpus, &NumSem);
+ if (RETURN_ERROR (Status)) {
+ goto ON_ERROR;
+ }
+
+ Status = SafeUintnMult (NumSem, OneSemSize, &TotalSemSize);
+ if (RETURN_ERROR (Status)) {
+ goto ON_ERROR;
+ }
+
+ //
+ // Allocate for Semaphores in the *Context
+ //
+ (*Context)->SemBufferPages = EFI_SIZE_TO_PAGES (TotalSemSize);
+ (*Context)->SemBuffer = AllocatePages ((*Context)->SemBufferPages);
+ if ((*Context)->SemBuffer == NULL) {
+ Status = RETURN_OUT_OF_RESOURCES;
+ goto ON_ERROR;
+ }
+
+ //
+ // Assign Global Semaphore pointer
+ //
+ SemAddr = (UINTN)(*Context)->SemBuffer;
+ (*Context)->CpuCount = (SMM_CPU_SYNC_SEMAPHORE *)SemAddr;
+ *(*Context)->CpuCount = 0;
+
+ SemAddr += OneSemSize;
+
+ //
+ // Assign CPU Semaphore pointer
+ //
+ CpuSem = (*Context)->CpuSem;
+ for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
+ CpuSem->Run = (SMM_CPU_SYNC_SEMAPHORE *)SemAddr;
+ *CpuSem->Run = 0;
+
+ CpuSem++;
+ SemAddr += OneSemSize;
+ }
+
+ return RETURN_SUCCESS;
+
+ON_ERROR:
+ FreePool (*Context);
+ return Status;
+}
+
+/**
+ Deinit an allocated SMM CPU Sync context. The resources allocated in SmmCpuSyncContextInit() will
+ be freed.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object to be deinitialized.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncContextDeinit (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context
+ )
+{
+ ASSERT (Context != NULL);
+
+ FreePages (Context->SemBuffer, Context->SemBufferPages);
+
+ FreePool (Context);
+}
+
+/**
+ Reset SMM CPU Sync context. SMM CPU Sync context will be reset to the initialized state.
+
+ This function is called by one of CPUs after all CPUs are ready to exit SMI, which allows CPU to
+ check into the next SMI from this point.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object to be reset.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncContextReset (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context
+ )
+{
+ ASSERT (Context != NULL);
+
+ Context->LockedCpuCount = 0;
+ *Context->CpuCount = 0;
+}
+
+/**
+ Get current number of arrived CPU in SMI.
+
+ BSP might need to know the current number of arrived CPU in SMI to make sure all APs
+ in SMI. This API can be for that purpose.
+
+ If Context is NULL, then ASSERT().
+
+ @param[in] Context Pointer to the SMM CPU Sync context object.
+
+ @retval Current number of arrived CPU in SMI.
+
+**/
+UINTN
+EFIAPI
+SmmCpuSyncGetArrivedCpuCount (
+ IN SMM_CPU_SYNC_CONTEXT *Context
+ )
+{
+ UINT32 Value;
+
+ ASSERT (Context != NULL);
+
+ Value = *Context->CpuCount;
+
+ if (Value == (UINT32)-1) {
+ return Context->LockedCpuCount;
+ }
+
+ return Value;
+}
+
+/**
+ Performs an atomic operation to check in CPU.
+
+ When SMI happens, all processors including BSP enter to SMM mode by calling SmmCpuSyncCheckInCpu().
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Check in CPU index.
+
+ @retval RETURN_SUCCESS Check in CPU (CpuIndex) successfully.
+ @retval RETURN_ABORTED Check in CPU failed due to SmmCpuSyncLockDoor() has been called by one elected CPU.
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncCheckInCpu (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ //
+ // Check to return if CpuCount has already been locked.
+ //
+ if (InternalReleaseSemaphore (Context->CpuCount) == MAX_UINT32) {
+ return RETURN_ABORTED;
+ }
+
+ return RETURN_SUCCESS;
+}
+
+/**
+ Performs an atomic operation to check out CPU.
+
+ This function can be called in error handling flow for the CPU who calls CheckInCpu() earlier.
+ The caller shall make sure the CPU specified by CpuIndex has already checked-in.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Check out CPU index.
+
+ @retval RETURN_SUCCESS Check out CPU (CpuIndex) successfully.
+ @retval RETURN_ABORTED Check out CPU failed due to SmmCpuSyncLockDoor() has been called by one elected CPU.
+
+**/
+RETURN_STATUS
+EFIAPI
+SmmCpuSyncCheckOutCpu (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ if (InternalWaitForSemaphore (Context->CpuCount) == MAX_UINT32) {
+ return RETURN_ABORTED;
+ }
+
+ return RETURN_SUCCESS;
+}
+
+/**
+ Performs an atomic operation lock door for CPU checkin and checkout. After this function:
+ CPU can not check in via SmmCpuSyncCheckInCpu().
+ CPU can not check out via SmmCpuSyncCheckOutCpu().
+
+ The CPU specified by CpuIndex is elected to lock door. The caller shall make sure the CpuIndex
+ is the actual CPU calling this function to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If CpuCount is NULL, then ASSERT().
+ If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which CPU to lock door.
+ @param[out] CpuCount Number of arrived CPU in SMI after look door.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncLockDoor (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ OUT UINTN *CpuCount
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (CpuCount != NULL);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ //
+ // Temporarily record the CpuCount into the LockedCpuCount before lock door.
+ // Recording before lock door is to avoid the Context->CpuCount is locked but possible
+ // Context->LockedCpuCount is not updated.
+ //
+ Context->LockedCpuCount = *Context->CpuCount;
+
+ //
+ // Lock door operation
+ //
+ *CpuCount = InternalLockdownSemaphore (Context->CpuCount);
+
+ //
+ // Update the LockedCpuCount
+ //
+ Context->LockedCpuCount = *CpuCount;
+}
+
+/**
+ Used by the BSP to wait for APs.
+
+ The number of APs need to be waited is specified by NumberOfAPs. The BSP is specified by BspIndex.
+ The caller shall make sure the BspIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The caller shall make sure the NumberOfAPs have already checked-in to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If NumberOfAPs >= All CPUs in system, then ASSERT().
+ If BspIndex exceeds the range of all CPUs in the system, then ASSERT().
+
+ Note:
+ This function is blocking mode, and it will return only after the number of APs released by
+ calling SmmCpuSyncReleaseBsp():
+ BSP: WaitForAPs <-- AP: ReleaseBsp
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] NumberOfAPs Number of APs need to be waited by BSP.
+ @param[in] BspIndex The BSP Index to wait for APs.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncWaitForAPs (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN NumberOfAPs,
+ IN UINTN BspIndex
+ )
+{
+ UINTN Arrived;
+
+ ASSERT (Context != NULL);
+
+ ASSERT (NumberOfAPs < Context->NumberOfCpus);
+
+ ASSERT (BspIndex < Context->NumberOfCpus);
+
+ for (Arrived = 0; Arrived < NumberOfAPs; Arrived++) {
+ InternalWaitForSemaphore (Context->CpuSem[BspIndex].Run);
+ }
+}
+
+/**
+ Used by the BSP to release one AP.
+
+ The AP is specified by CpuIndex. The BSP is specified by BspIndex.
+ The caller shall make sure the BspIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The caller shall make sure the CpuIndex has already checked-in to avoid the undefined behavior.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP need to be released.
+ @param[in] BspIndex The BSP Index to release AP.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncReleaseOneAp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (BspIndex != CpuIndex);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ ASSERT (BspIndex < Context->NumberOfCpus);
+
+ InternalReleaseSemaphore (Context->CpuSem[CpuIndex].Run);
+}
+
+/**
+ Used by the AP to wait BSP.
+
+ The AP is specified by CpuIndex.
+ The caller shall make sure the CpuIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The BSP is specified by BspIndex.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ Note:
+ This function is blocking mode, and it will return only after the AP released by
+ calling SmmCpuSyncReleaseOneAp():
+ BSP: ReleaseOneAp --> AP: WaitForBsp
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP wait BSP.
+ @param[in] BspIndex The BSP Index to be waited.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncWaitForBsp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (BspIndex != CpuIndex);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ ASSERT (BspIndex < Context->NumberOfCpus);
+
+ InternalWaitForSemaphore (Context->CpuSem[CpuIndex].Run);
+}
+
+/**
+ Used by the AP to release BSP.
+
+ The AP is specified by CpuIndex.
+ The caller shall make sure the CpuIndex is the actual CPU calling this function to avoid the undefined behavior.
+ The BSP is specified by BspIndex.
+
+ If Context is NULL, then ASSERT().
+ If CpuIndex == BspIndex, then ASSERT().
+ If BspIndex or CpuIndex exceed the range of all CPUs in the system, then ASSERT().
+
+ @param[in,out] Context Pointer to the SMM CPU Sync context object.
+ @param[in] CpuIndex Indicate which AP release BSP.
+ @param[in] BspIndex The BSP Index to be released.
+
+**/
+VOID
+EFIAPI
+SmmCpuSyncReleaseBsp (
+ IN OUT SMM_CPU_SYNC_CONTEXT *Context,
+ IN UINTN CpuIndex,
+ IN UINTN BspIndex
+ )
+{
+ ASSERT (Context != NULL);
+
+ ASSERT (BspIndex != CpuIndex);
+
+ ASSERT (CpuIndex < Context->NumberOfCpus);
+
+ ASSERT (BspIndex < Context->NumberOfCpus);
+
+ InternalReleaseSemaphore (Context->CpuSem[BspIndex].Run);
+}
diff --git a/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
new file mode 100644
index 0000000000..6b0d49c30a
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
@@ -0,0 +1,34 @@
+## @file
+# SMM CPU Synchronization lib.
+#
+# This is SMM CPU Synchronization lib used for SMM CPU sync operations.
+#
+# Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+ INF_VERSION = 0x00010005
+ BASE_NAME = SmmCpuSyncLib
+ FILE_GUID = 1ca1bc1a-16a4-46ef-956a-ca500fd3381f
+ MODULE_TYPE = DXE_SMM_DRIVER
+ LIBRARY_CLASS = SmmCpuSyncLib|DXE_SMM_DRIVER
+
+[Sources]
+ SmmCpuSyncLib.c
+
+[Packages]
+ MdePkg/MdePkg.dec
+ UefiCpuPkg/UefiCpuPkg.dec
+
+[LibraryClasses]
+ BaseLib
+ DebugLib
+ MemoryAllocationLib
+ SafeIntLib
+ SynchronizationLib
+
+[Pcd]
+
+[Protocols]
diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
index 074fd77461..28eed85bce 100644
--- a/UefiCpuPkg/UefiCpuPkg.dsc
+++ b/UefiCpuPkg/UefiCpuPkg.dsc
@@ -54,10 +54,11 @@
CacheMaintenanceLib|MdePkg/Library/BaseCacheMaintenanceLib/BaseCacheMaintenanceLib.inf
PciLib|MdePkg/Library/BasePciLibPciExpress/BasePciLibPciExpress.inf
PciExpressLib|MdePkg/Library/BasePciExpressLib/BasePciExpressLib.inf
SmmCpuPlatformHookLib|UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf
SmmCpuFeaturesLib|UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
PeCoffGetEntryPointLib|MdePkg/Library/BasePeCoffGetEntryPointLib/BasePeCoffGetEntryPointLib.inf
PeCoffExtraActionLib|MdePkg/Library/BasePeCoffExtraActionLibNull/BasePeCoffExtraActionLibNull.inf
TpmMeasurementLib|MdeModulePkg/Library/TpmMeasurementLibNull/TpmMeasurementLibNull.inf
CcExitLib|UefiCpuPkg/Library/CcExitLibNull/CcExitLibNull.inf
MicrocodeLib|UefiCpuPkg/Library/MicrocodeLib/MicrocodeLib.inf
@@ -154,10 +155,11 @@
UefiCpuPkg/Library/RegisterCpuFeaturesLib/DxeRegisterCpuFeaturesLib.inf
UefiCpuPkg/Library/SmmCpuPlatformHookLibNull/SmmCpuPlatformHookLibNull.inf
UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
UefiCpuPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLibStm.inf
UefiCpuPkg/Library/SmmCpuFeaturesLib/StandaloneMmCpuFeaturesLib.inf
+ UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
UefiCpuPkg/Library/CcExitLibNull/CcExitLibNull.inf
UefiCpuPkg/PiSmmCommunication/PiSmmCommunicationPei.inf
UefiCpuPkg/PiSmmCommunication/PiSmmCommunicationSmm.inf
UefiCpuPkg/SecCore/SecCore.inf
UefiCpuPkg/SecCore/SecCoreNative.inf
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
` (3 preceding siblings ...)
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: " Wu, Jiaxin
` (2 subsequent siblings)
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Ard Biesheuvel, Jiewen Yao, Jordan Justen,
Eric Dong, Ray Ni, Zeng Star, Rahul Kumar, Gerd Hoffmann
This patch specifies the SmmCpuSyncLib instance for OvmfPkg.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
OvmfPkg/CloudHv/CloudHvX64.dsc | 1 +
OvmfPkg/OvmfPkgIa32.dsc | 1 +
OvmfPkg/OvmfPkgIa32X64.dsc | 1 +
OvmfPkg/OvmfPkgX64.dsc | 1 +
4 files changed, 4 insertions(+)
diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc b/OvmfPkg/CloudHv/CloudHvX64.dsc
index 1660548e07..af594959a9 100644
--- a/OvmfPkg/CloudHv/CloudHvX64.dsc
+++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
@@ -907,10 +907,11 @@
}
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
<LibraryClasses>
SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
}
#
# Variable driver stack (SMM)
#
diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
index 6e8488007c..28379961a7 100644
--- a/OvmfPkg/OvmfPkgIa32.dsc
+++ b/OvmfPkg/OvmfPkgIa32.dsc
@@ -952,10 +952,11 @@
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
<LibraryClasses>
SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
}
#
# Variable driver stack (SMM)
#
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 413ea71984..5e9eee628a 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -970,10 +970,11 @@
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
<LibraryClasses>
SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
}
#
# Variable driver stack (SMM)
#
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index f000092d70..bf4c7906c4 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -1040,10 +1040,11 @@
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
<LibraryClasses>
SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
}
#
# Variable driver stack (SMM)
#
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: Specifies SmmCpuSyncLib instance
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
` (4 preceding siblings ...)
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib Wu, Jiaxin
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Guo Dong, Sean Rhodes, James Lu, Gua Guo, Ray Ni,
Zeng Star
This patch specifies the SmmCpuSyncLib instance for UefiPayloadPkg.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Guo Dong <guo.dong@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
Reviewed-by: Gua Guo <gua.guo@intel.com>
---
UefiPayloadPkg/UefiPayloadPkg.dsc | 1 +
1 file changed, 1 insertion(+)
diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc b/UefiPayloadPkg/UefiPayloadPkg.dsc
index a65f9d5b83..b8b13ad201 100644
--- a/UefiPayloadPkg/UefiPayloadPkg.dsc
+++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
@@ -253,10 +253,11 @@
#
MtrrLib|UefiCpuPkg/Library/MtrrLib/MtrrLib.inf
LocalApicLib|UefiCpuPkg/Library/BaseXApicX2ApicLib/BaseXApicX2ApicLib.inf
MicrocodeLib|UefiCpuPkg/Library/MicrocodeLib/MicrocodeLib.inf
CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
+ SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
#
# Platform
#
!if $(CPU_TIMER_LIB_ENABLE) == TRUE && $(UNIVERSAL_PAYLOAD) == TRUE
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
` (5 preceding siblings ...)
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: " Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib Wu, Jiaxin
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann,
Rahul Kumar
To decrement RunningApCount, InterlockedDecrement() is sufficient;
the heavier compare-exchange loop in WaitForSemaphore() is not needed.
This patch simplifies the RunningApCount decrement accordingly.
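A minimal before/after illustration (the two calls are the ones in the
hunk below; the note about the non-zero count is an observation, not
something stated in the code):

  // Before: decrement through the semaphore helper, which can spin.
  WaitForSemaphore (&ProcToken->RunningApCount);

  // After: a plain atomic decrement. The calling processor is still
  // counted in RunningApCount at this point, so the value cannot be
  // zero and no waiting is required.
  InterlockedDecrement (&ProcToken->RunningApCount);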
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index 54542262a2..9b477b6695 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -1450,11 +1450,11 @@ InternalSmmStartupAllAPs (
//
// Decrease the count to mark this processor(AP or BSP) as finished.
//
if (ProcToken != NULL) {
- WaitForSemaphore (&ProcToken->RunningApCount);
+ InterlockedDecrement (&ProcToken->RunningApCount);
}
}
}
ReleaseAllAPs ();
--
2.16.2.windows.1
* [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
` (6 preceding siblings ...)
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement Wu, Jiaxin
@ 2023-12-15 9:55 ` Wu, Jiaxin
2023-12-19 3:54 ` Ni, Ray
7 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-15 9:55 UTC (permalink / raw)
To: devel
Cc: Laszlo Ersek, Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann,
Rahul Kumar
The SmmCpuSyncLib library class defines the SMM CPU sync flow, which
is aligned with the existing SMM CPU driver's sync behavior.
This patch makes PiSmmCpuDxeSmm consume an SmmCpuSyncLib instance
directly.
With this change, the SMM CPU Sync flow/logic can be customized with a
different implementation for any purpose, e.g. performance tuning or
handling specific registers, as sketched below.
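As a rough before/after sketch of the consumption (the ReleaseAllAPs
change is taken from the hunk below; the WaitForAllAPs mapping is an
assumption based on the library API, since that part of the diff is
truncated here):

  // Before (driver-local helpers):
  ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
  WaitForAllAPs (NumberOfAPs);

  // After (library calls; SyncContext is the driver-owned SMM_CPU_SYNC_CONTEXT):
  SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, Index, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
  SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, NumberOfAPs, BspIndex);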
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 274 +++++++--------------------
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 6 +-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
3 files changed, 68 insertions(+), 213 deletions(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
index 9b477b6695..4fbb0bba87 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
@@ -27,122 +27,10 @@ MM_COMPLETION mSmmStartupThisApToken;
//
// Processor specified by mPackageFirstThreadIndex[PackageIndex] will do the package-scope register check.
//
UINT32 *mPackageFirstThreadIndex = NULL;
-/**
- Performs an atomic compare exchange operation to get semaphore.
- The compare exchange operation must be performed using
- MP safe mechanisms.
-
- @param Sem IN: 32-bit unsigned integer
- OUT: original integer - 1
- @return Original integer - 1
-
-**/
-UINT32
-WaitForSemaphore (
- IN OUT volatile UINT32 *Sem
- )
-{
- UINT32 Value;
-
- for ( ; ;) {
- Value = *Sem;
- if ((Value != 0) &&
- (InterlockedCompareExchange32 (
- (UINT32 *)Sem,
- Value,
- Value - 1
- ) == Value))
- {
- break;
- }
-
- CpuPause ();
- }
-
- return Value - 1;
-}
-
-/**
- Performs an atomic compare exchange operation to release semaphore.
- The compare exchange operation must be performed using
- MP safe mechanisms.
-
- @param Sem IN: 32-bit unsigned integer
- OUT: original integer + 1
- @return Original integer + 1
-
-**/
-UINT32
-ReleaseSemaphore (
- IN OUT volatile UINT32 *Sem
- )
-{
- UINT32 Value;
-
- do {
- Value = *Sem;
- } while (Value + 1 != 0 &&
- InterlockedCompareExchange32 (
- (UINT32 *)Sem,
- Value,
- Value + 1
- ) != Value);
-
- return Value + 1;
-}
-
-/**
- Performs an atomic compare exchange operation to lock semaphore.
- The compare exchange operation must be performed using
- MP safe mechanisms.
-
- @param Sem IN: 32-bit unsigned integer
- OUT: -1
- @return Original integer
-
-**/
-UINT32
-LockdownSemaphore (
- IN OUT volatile UINT32 *Sem
- )
-{
- UINT32 Value;
-
- do {
- Value = *Sem;
- } while (InterlockedCompareExchange32 (
- (UINT32 *)Sem,
- Value,
- (UINT32)-1
- ) != Value);
-
- return Value;
-}
-
-/**
- Used for BSP to wait all APs.
- Wait all APs to performs an atomic compare exchange operation to release semaphore.
-
- @param NumberOfAPs AP number
-
-**/
-VOID
-WaitForAllAPs (
- IN UINTN NumberOfAPs
- )
-{
- UINTN BspIndex;
-
- BspIndex = mSmmMpSyncData->BspIndex;
- while (NumberOfAPs-- > 0) {
- WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
- }
-}
-
/**
Used for BSP to release all APs.
Performs an atomic compare exchange operation to release semaphore
for each AP.
@@ -154,57 +42,15 @@ ReleaseAllAPs (
{
UINTN Index;
for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
if (IsPresentAp (Index)) {
- ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
+ SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, Index, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
}
}
}
-/**
- Used for BSP to release one AP.
-
- @param ApSem IN: 32-bit unsigned integer
- OUT: original integer + 1
-**/
-VOID
-ReleaseOneAp (
- IN OUT volatile UINT32 *ApSem
- )
-{
- ReleaseSemaphore (ApSem);
-}
-
-/**
- Used for AP to wait BSP.
-
- @param ApSem IN: 32-bit unsigned integer
- OUT: original integer - 1
-**/
-VOID
-WaitForBsp (
- IN OUT volatile UINT32 *ApSem
- )
-{
- WaitForSemaphore (ApSem);
-}
-
-/**
- Used for AP to release BSP.
-
- @param BspSem IN: 32-bit unsigned integer
- OUT: original integer + 1
-**/
-VOID
-ReleaseBsp (
- IN OUT volatile UINT32 *BspSem
- )
-{
- ReleaseSemaphore (BspSem);
-}
-
/**
Check whether the index of CPU perform the package level register
programming during System Management Mode initialization.
The index of Processor specified by mPackageFirstThreadIndex[PackageIndex]
@@ -292,35 +138,35 @@ AllCpusInSmmExceptBlockedDisabled (
BlockedCount = 0;
DisabledCount = 0;
//
- // Check to make sure mSmmMpSyncData->Counter is valid and not locked.
+ // Check to make sure the CPU arrival count is valid and not locked.
//
- ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
+ ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
//
// Check whether all CPUs in SMM.
//
- if (*mSmmMpSyncData->Counter == mNumberOfCpus) {
+ if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) == mNumberOfCpus) {
return TRUE;
}
//
// Check for the Blocked & Disabled Exceptions Case.
//
GetSmmDelayedBlockedDisabledCount (NULL, &BlockedCount, &DisabledCount);
//
- // *mSmmMpSyncData->Counter might be updated by all APs concurrently. The value
+ // The CPU arrival count might be updated by all APs concurrently. The value
// can be dynamic changed. If some Aps enter the SMI after the BlockedCount &
- // DisabledCount check, then the *mSmmMpSyncData->Counter will be increased, thus
- // leading the *mSmmMpSyncData->Counter + BlockedCount + DisabledCount > mNumberOfCpus.
+ // DisabledCount check, then the CPU arrival count will be increased, thus
+ // leading the retrieved CPU arrival count + BlockedCount + DisabledCount > mNumberOfCpus.
// since the BlockedCount & DisabledCount are local variable, it's ok here only for
// the checking of all CPUs In Smm.
//
- if (*mSmmMpSyncData->Counter + BlockedCount + DisabledCount >= mNumberOfCpus) {
+ if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) + BlockedCount + DisabledCount >= mNumberOfCpus) {
return TRUE;
}
return FALSE;
}
@@ -396,11 +242,11 @@ SmmWaitForApArrival (
PERF_FUNCTION_BEGIN ();
DelayedCount = 0;
BlockedCount = 0;
- ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
+ ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
LmceEn = FALSE;
LmceSignal = FALSE;
if (mMachineCheckSupported) {
LmceEn = IsLmceOsEnabled ();
@@ -447,11 +293,11 @@ SmmWaitForApArrival (
// d) We don't add code to check SMI disabling status to skip sending IPI to SMI disabled APs, because:
// - In traditional flow, SMI disabling is discouraged.
// - In relaxed flow, CheckApArrival() will check SMI disabling status before calling this function.
// In both cases, adding SMI-disabling checking code increases overhead.
//
- if (*mSmmMpSyncData->Counter < mNumberOfCpus) {
+ if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) < mNumberOfCpus) {
//
// Send SMI IPIs to bring outside processors in
//
for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
if (!(*(mSmmMpSyncData->CpuData[Index].Present)) && (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId != INVALID_APIC_ID)) {
@@ -610,18 +456,20 @@ VOID
BSPHandler (
IN UINTN CpuIndex,
IN SMM_CPU_SYNC_MODE SyncMode
)
{
+ UINTN CpuCount;
UINTN Index;
MTRR_SETTINGS Mtrrs;
UINTN ApCount;
BOOLEAN ClearTopLevelSmiResult;
UINTN PresentCount;
ASSERT (CpuIndex == mSmmMpSyncData->BspIndex);
- ApCount = 0;
+ CpuCount = 0;
+ ApCount = 0;
PERF_FUNCTION_BEGIN ();
//
// Flag BSP's presence
@@ -659,28 +507,31 @@ BSPHandler (
// Wait for APs to arrive
//
SmmWaitForApArrival ();
//
- // Lock the counter down and retrieve the number of APs
+ // Lock door for late coming CPU checkin and retrieve the Arrived number of APs
//
*mSmmMpSyncData->AllCpusInSync = TRUE;
- ApCount = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
+
+ SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex, &CpuCount);
+
+ ApCount = CpuCount - 1;
//
// Wait for all APs to get ready for programming MTRRs
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Signal all APs it's time for backup MTRRs
//
ReleaseAllAPs ();
//
- // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
+ // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to enter SMM at
// exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
// to a large enough value to avoid this situation.
// Note: For HT capable CPUs, threads within a core share the same set of MTRRs.
// We do the backup first and then set MTRR to avoid race condition for threads
// in the same core.
@@ -688,28 +539,28 @@ BSPHandler (
MtrrGetAllMtrrs (&Mtrrs);
//
// Wait for all APs to complete their MTRR saving
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
//
// Let all processors program SMM MTRRs together
//
ReleaseAllAPs ();
//
- // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
+ // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to enter SMM at
// exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
// to a large enough value to avoid this situation.
//
ReplaceOSMtrrs (CpuIndex);
//
// Wait for all APs to complete their MTRR programming
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
}
}
//
// The BUSY lock is initialized to Acquired state
@@ -741,14 +592,18 @@ BSPHandler (
// make those APs to exit SMI synchronously. APs which arrive later will be excluded and
// will run through freely.
//
if ((SyncMode != SmmCpuSyncModeTradition) && !SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
- // Lock the counter down and retrieve the number of APs
+ // Lock door for late coming CPU checkin and retrieve the Arrived number of APs
//
*mSmmMpSyncData->AllCpusInSync = TRUE;
- ApCount = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
+
+ SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex, &CpuCount);
+
+ ApCount = CpuCount - 1;
+
//
// Make sure all APs have their Present flag set
//
while (TRUE) {
PresentCount = 0;
@@ -771,11 +626,11 @@ BSPHandler (
ReleaseAllAPs ();
//
// Wait for all APs to complete their pending tasks
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Signal APs to restore MTRRs
//
@@ -788,11 +643,11 @@ BSPHandler (
MtrrSetAllMtrrs (&Mtrrs);
//
// Wait for all APs to complete MTRR programming
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
}
//
// Stop source level debug in BSP handler, the code below will not be
// debugged.
@@ -816,11 +671,11 @@ BSPHandler (
//
// Gather APs to exit SMM synchronously. Note the Present flag is cleared by now but
// WaitForAllAps does not depend on the Present flag.
//
- WaitForAllAPs (ApCount);
+ SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
//
// At this point, all APs should have exited from APHandler().
// Migrate the SMM MP performance logging to standard SMM performance logging.
// Any SMM MP performance logging after this point will be migrated in next SMI.
@@ -842,11 +697,11 @@ BSPHandler (
}
//
// Allow APs to check in from this point on
//
- *mSmmMpSyncData->Counter = 0;
+ SmmCpuSyncContextReset (mSmmMpSyncData->SyncContext);
*mSmmMpSyncData->AllCpusInSync = FALSE;
mSmmMpSyncData->AllApArrivedWithException = FALSE;
PERF_FUNCTION_END ();
}
@@ -912,21 +767,21 @@ APHandler (
if (!(*mSmmMpSyncData->InsideSmm)) {
//
// Give up since BSP is unable to enter SMM
// and signal the completion of this AP
- // Reduce the mSmmMpSyncData->Counter!
+ // Reduce the CPU arrival count!
//
- WaitForSemaphore (mSmmMpSyncData->Counter);
+ SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext, CpuIndex);
return;
}
} else {
//
// Don't know BSP index. Give up without sending IPI to BSP.
- // Reduce the mSmmMpSyncData->Counter!
+ // Reduce the CPU arrival count!
//
- WaitForSemaphore (mSmmMpSyncData->Counter);
+ SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext, CpuIndex);
return;
}
}
//
@@ -942,50 +797,50 @@ APHandler (
if ((SyncMode == SmmCpuSyncModeTradition) || SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Notify BSP of arrival at this point
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
}
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Wait for the signal from BSP to backup MTRRs
//
- WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Backup OS MTRRs
//
MtrrGetAllMtrrs (&Mtrrs);
//
// Signal BSP the completion of this AP
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Wait for BSP's signal to program MTRRs
//
- WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Replace OS MTRRs with SMI MTRRs
//
ReplaceOSMtrrs (CpuIndex);
//
// Signal BSP the completion of this AP
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
}
while (TRUE) {
//
// Wait for something to happen
//
- WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Check if BSP wants to exit SMM
//
if (!(*mSmmMpSyncData->InsideSmm)) {
@@ -1021,16 +876,16 @@ APHandler (
if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
//
// Notify BSP the readiness of this AP to program MTRRs
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Wait for the signal from BSP to program MTRRs
//
- WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Restore OS MTRRs
//
SmmCpuFeaturesReenableSmrr ();
@@ -1038,26 +893,26 @@ APHandler (
}
//
// Notify BSP the readiness of this AP to Reset states/semaphore for this processor
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Wait for the signal from BSP to Reset states/semaphore for this processor
//
- WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
//
// Reset states/semaphore for this processor
//
*(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
//
// Notify BSP the readiness of this AP to exit SMM
//
- ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
+ SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, BspIndex);
}
/**
Checks whether the input token is the current used token.
@@ -1321,11 +1176,11 @@ InternalSmmStartupThisAp (
mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus;
if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) {
*mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY;
}
- ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);
+ SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, CpuIndex, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
if (Token == NULL) {
AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
}
@@ -1725,14 +1580,15 @@ SmiRendezvous (
//
goto Exit;
} else {
//
// Signal presence of this processor
- // mSmmMpSyncData->Counter is increased here!
- // "ReleaseSemaphore (mSmmMpSyncData->Counter) == 0" means BSP has already ended the synchronization.
+ // CPU check in here!
+ // "SmmCpuSyncCheckInCpu (mSmmMpSyncData->SyncContext, CpuIndex)" return error means failed
+ // to check in CPU. BSP has already ended the synchronization.
//
- if (ReleaseSemaphore (mSmmMpSyncData->Counter) == 0) {
+ if (RETURN_ERROR (SmmCpuSyncCheckInCpu (mSmmMpSyncData->SyncContext, CpuIndex))) {
//
// BSP has already ended the synchronization, so QUIT!!!
// Existing AP is too late now to enter SMI since BSP has already ended the synchronization!!!
//
@@ -1824,12 +1680,10 @@ SmiRendezvous (
} else {
APHandler (CpuIndex, ValidSmi, mSmmMpSyncData->EffectiveSyncMode);
}
}
- ASSERT (*mSmmMpSyncData->CpuData[CpuIndex].Run == 0);
-
//
// Wait for BSP's signal to exit SMI
//
while (*mSmmMpSyncData->AllCpusInSync) {
CpuPause ();
@@ -1945,12 +1799,10 @@ InitializeSmmCpuSemaphores (
SemaphoreBlock = AllocatePages (Pages);
ASSERT (SemaphoreBlock != NULL);
ZeroMem (SemaphoreBlock, TotalSize);
SemaphoreAddr = (UINTN)SemaphoreBlock;
- mSmmCpuSemaphores.SemaphoreGlobal.Counter = (UINT32 *)SemaphoreAddr;
- SemaphoreAddr += SemaphoreSize;
mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm = (BOOLEAN *)SemaphoreAddr;
SemaphoreAddr += SemaphoreSize;
mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync = (BOOLEAN *)SemaphoreAddr;
SemaphoreAddr += SemaphoreSize;
mSmmCpuSemaphores.SemaphoreGlobal.PFLock = (SPIN_LOCK *)SemaphoreAddr;
@@ -1960,12 +1812,10 @@ InitializeSmmCpuSemaphores (
SemaphoreAddr += SemaphoreSize;
SemaphoreAddr = (UINTN)SemaphoreBlock + GlobalSemaphoresSize;
mSmmCpuSemaphores.SemaphoreCpu.Busy = (SPIN_LOCK *)SemaphoreAddr;
SemaphoreAddr += ProcessorCount * SemaphoreSize;
- mSmmCpuSemaphores.SemaphoreCpu.Run = (UINT32 *)SemaphoreAddr;
- SemaphoreAddr += ProcessorCount * SemaphoreSize;
mSmmCpuSemaphores.SemaphoreCpu.Present = (BOOLEAN *)SemaphoreAddr;
mPFLock = mSmmCpuSemaphores.SemaphoreGlobal.PFLock;
mConfigSmmCodeAccessCheckLock = mSmmCpuSemaphores.SemaphoreGlobal.CodeAccessCheckLock;
@@ -1980,10 +1830,12 @@ VOID
EFIAPI
InitializeMpSyncData (
VOID
)
{
+ RETURN_STATUS Status;
+
UINTN CpuIndex;
if (mSmmMpSyncData != NULL) {
//
// mSmmMpSyncDataSize includes one structure of SMM_DISPATCHER_MP_SYNC_DATA, one
@@ -2009,32 +1861,36 @@ InitializeMpSyncData (
}
}
mSmmMpSyncData->EffectiveSyncMode = mCpuSmmSyncMode;
- mSmmMpSyncData->Counter = mSmmCpuSemaphores.SemaphoreGlobal.Counter;
+ Status = SmmCpuSyncContextInit (gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus, &mSmmMpSyncData->SyncContext);
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_ERROR, "InitializeMpSyncData: SmmCpuSyncContextInit return error %r!\n", Status));
+ CpuDeadLoop ();
+ return;
+ }
+
+ ASSERT (mSmmMpSyncData->SyncContext != NULL);
+
mSmmMpSyncData->InsideSmm = mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm;
mSmmMpSyncData->AllCpusInSync = mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync;
ASSERT (
- mSmmMpSyncData->Counter != NULL && mSmmMpSyncData->InsideSmm != NULL &&
+ mSmmMpSyncData->InsideSmm != NULL &&
mSmmMpSyncData->AllCpusInSync != NULL
);
- *mSmmMpSyncData->Counter = 0;
*mSmmMpSyncData->InsideSmm = FALSE;
*mSmmMpSyncData->AllCpusInSync = FALSE;
mSmmMpSyncData->AllApArrivedWithException = FALSE;
for (CpuIndex = 0; CpuIndex < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; CpuIndex++) {
mSmmMpSyncData->CpuData[CpuIndex].Busy =
(SPIN_LOCK *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Busy + mSemaphoreSize * CpuIndex);
- mSmmMpSyncData->CpuData[CpuIndex].Run =
- (UINT32 *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Run + mSemaphoreSize * CpuIndex);
mSmmMpSyncData->CpuData[CpuIndex].Present =
(BOOLEAN *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Present + mSemaphoreSize * CpuIndex);
*(mSmmMpSyncData->CpuData[CpuIndex].Busy) = 0;
- *(mSmmMpSyncData->CpuData[CpuIndex].Run) = 0;
*(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
}
}
}
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
index f18345881b..a2fa4f6734 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
@@ -52,10 +52,11 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
#include <Library/PeCoffGetEntryPointLib.h>
#include <Library/RegisterCpuFeaturesLib.h>
#include <Library/PerformanceLib.h>
#include <Library/CpuPageTableLib.h>
#include <Library/MmSaveStateLib.h>
+#include <Library/SmmCpuSyncLib.h>
#include <AcpiCpuData.h>
#include <CpuHotPlugData.h>
#include <Register/Intel/Cpuid.h>
@@ -403,11 +404,10 @@ SmmRelocationSemaphoreComplete (
///
typedef struct {
SPIN_LOCK *Busy;
volatile EFI_AP_PROCEDURE2 Procedure;
volatile VOID *Parameter;
- volatile UINT32 *Run;
volatile BOOLEAN *Present;
PROCEDURE_TOKEN *Token;
EFI_STATUS *Status;
} SMM_CPU_DATA_BLOCK;
@@ -421,29 +421,28 @@ typedef struct {
//
// Pointer to an array. The array should be located immediately after this structure
// so that UC cache-ability can be set together.
//
SMM_CPU_DATA_BLOCK *CpuData;
- volatile UINT32 *Counter;
volatile UINT32 BspIndex;
volatile BOOLEAN *InsideSmm;
volatile BOOLEAN *AllCpusInSync;
volatile SMM_CPU_SYNC_MODE EffectiveSyncMode;
volatile BOOLEAN SwitchBsp;
volatile BOOLEAN *CandidateBsp;
volatile BOOLEAN AllApArrivedWithException;
EFI_AP_PROCEDURE StartupProcedure;
VOID *StartupProcArgs;
+ SMM_CPU_SYNC_CONTEXT *SyncContext;
} SMM_DISPATCHER_MP_SYNC_DATA;
#define SMM_PSD_OFFSET 0xfb00
///
/// All global semaphores' pointer
///
typedef struct {
- volatile UINT32 *Counter;
volatile BOOLEAN *InsideSmm;
volatile BOOLEAN *AllCpusInSync;
SPIN_LOCK *PFLock;
SPIN_LOCK *CodeAccessCheckLock;
} SMM_CPU_SEMAPHORE_GLOBAL;
@@ -451,11 +450,10 @@ typedef struct {
///
/// All semaphores for each processor
///
typedef struct {
SPIN_LOCK *Busy;
- volatile UINT32 *Run;
volatile BOOLEAN *Present;
SPIN_LOCK *Token;
} SMM_CPU_SEMAPHORE_CPU;
///
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index 372596f24c..793220aba3 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -101,10 +101,11 @@
SmmCpuFeaturesLib
PeCoffGetEntryPointLib
PerformanceLib
CpuPageTableLib
MmSaveStateLib
+ SmmCpuSyncLib
[Protocols]
gEfiSmmAccess2ProtocolGuid ## CONSUMES
gEfiSmmConfigurationProtocolGuid ## PRODUCES
gEfiSmmCpuProtocolGuid ## PRODUCES
--
2.16.2.windows.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance Wu, Jiaxin
@ 2023-12-15 16:30 ` Michael D Kinney
0 siblings, 0 replies; 19+ messages in thread
From: Michael D Kinney @ 2023-12-15 16:30 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Gao, Liming, Liu, Zhiguang, Laszlo Ersek, Ni, Ray, Zeng, Star,
Kinney, Michael D
Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 1:55 AM
> To: devel@edk2.groups.io
> Cc: Kinney, Michael D <michael.d.kinney@intel.com>; Gao, Liming
> <gaoliming@byosoft.com.cn>; Liu, Zhiguang <zhiguang.liu@intel.com>;
> Laszlo Ersek <lersek@redhat.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>
> Subject: [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance
>
> This patch is to add SafeIntLib in MdeLibs.dsc.inc
>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Zhiguang Liu <zhiguang.liu@intel.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> MdePkg/MdeLibs.dsc.inc | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/MdePkg/MdeLibs.dsc.inc b/MdePkg/MdeLibs.dsc.inc
> index 4580481cb5..deb35c1a18 100644
> --- a/MdePkg/MdeLibs.dsc.inc
> +++ b/MdePkg/MdeLibs.dsc.inc
> @@ -14,5 +14,6 @@
> [LibraryClasses]
> ArmTrngLib|MdePkg/Library/BaseArmTrngLibNull/BaseArmTrngLibNull.inf
>
> RegisterFilterLib|MdePkg/Library/RegisterFilterLibNull/RegisterFilterLib
> Null.inf
> CpuLib|MdePkg/Library/BaseCpuLib/BaseCpuLib.inf
>
> SmmCpuRendezvousLib|MdePkg/Library/SmmCpuRendezvousLibNull/SmmCpuRendezv
> ousLibNull.inf
> + SafeIntLib|MdePkg/Library/BaseSafeIntLib/BaseSafeIntLib.inf
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance Wu, Jiaxin
@ 2023-12-18 9:23 ` Ni, Ray
2023-12-19 5:44 ` Wu, Jiaxin
0 siblings, 1 reply; 19+ messages in thread
From: Ni, Ray @ 2023-12-18 9:23 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
> + ///
> + /// Indicate CPUs entered SMM after lock door.
> + ///
> + UINTN LockedCpuCount;
1. It's not "LockedCpuCount". It's "ArrivedCpuCountUponLock".
Comments can be:
Before the door is locked, CpuCount stores the arrived CPU count.
After the door is locked, CpuCount is set to -1 to indicate the door is locked, and ArrivedCpuCountUponLock stores the arrived CPU count at that point.
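In other words, the field comments being asked for would read roughly
as below (a hypothetical fragment only; it assumes these members live
in the library's SMM_CPU_SYNC_CONTEXT, the CpuCount type is assumed to
be UINTN like LockedCpuCount, and the remaining members from patch 4/8
are elided):

typedef struct {
  ///
  /// Before the door is locked, CpuCount stores the arrived CPU count.
  /// After the door is locked, CpuCount is set to (UINTN)-1 to indicate
  /// that the door is locked.
  ///
  UINTN    CpuCount;
  ///
  /// Arrived CPU count captured at the moment the door was locked.
  ///
  UINTN    ArrivedCpuCountUponLock;
  //
  // ... remaining members from patch 4/8 elided ...
  //
} SMM_CPU_SYNC_CONTEXT;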
> +/**
> + Performs an atomic compare exchange operation to get semaphore.
> + The compare exchange operation must be performed using MP safe
> + mechanisms.
> +
> + @param[in,out] Sem IN: 32-bit unsigned integer
> + OUT: original integer - 1 if Sem is not locked.
> + OUT: original integer if Sem is locked
> (MAX_UINT32).
> +
> + @retval Original integer - 1 if Sem is not locked.
> + Original integer if Sem is locked (MAX_UINT32).
2. Can just say "MAX_UINT32 if Sem is locked".
> + //
> + // Calculate total semaphore size
> + //
> + CacheLineSize = GetSpinLockProperties ();
> + OneSemSize = ALIGN_VALUE (sizeof (SMM_CPU_SYNC_SEMAPHORE),
> CacheLineSize);
3. I prefer:
OneSemSize = GetSpinLockProperties ();
ASSERT (sizeof (SMM_CPU_SYNC_SEMAPHORE) <= OneSemSize);
> +
> + Status = SafeUintnAdd (1, NumberOfCpus, &NumSem);
4. ok:) you are checking if NumberOfCpus + 1 could exceed MAX_UINTN. Fine to me.
> +
> + //
> + // Assign CPU Semaphore pointer
> + //
> + CpuSem = (*Context)->CpuSem;
> + for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
> + CpuSem->Run = (SMM_CPU_SYNC_SEMAPHORE *)SemAddr;
> + *CpuSem->Run = 0;
> +
> + CpuSem++;
> + SemAddr += OneSemSize;
5. SafeIntLib was used earlier to make sure no integer overflow.
But "SemAddr += OneSemSize" is simply ignoring the danger of integer overflow.
I agree (NumberOfCpus + 1) * OneSemSize shouldn't cause integer overflow when code runs to here.
But the initial value of SemAddr is not zero. It's still possible that SemAddr + (NumberOfCpus+1)*OneSemSize causes an integer overflow.
I am ok if you don't fix it as I don't believe the integer overflow could happen in 5 years.
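If someone did want to close that gap, a minimal sketch of the extra
check (using SafeUintnMult/SafeUintnAdd from the SafeIntLib the patch
already depends on; the variable names follow the quoted snippet, and
the error handling is purely illustrative):

  UINTN          TotalSemSize;
  UINTN          SemAddrEnd;
  RETURN_STATUS  Status;

  //
  // Total size of the (NumberOfCpus + 1) semaphores, overflow-checked.
  //
  Status = SafeUintnMult (NumSem, OneSemSize, &TotalSemSize);
  if (RETURN_ERROR (Status)) {
    return RETURN_INVALID_PARAMETER;
  }

  //
  // Also make sure SemAddr + TotalSemSize does not wrap around before
  // the per-CPU loop starts advancing SemAddr.
  //
  Status = SafeUintnAdd (SemAddr, TotalSemSize, &SemAddrEnd);
  if (RETURN_ERROR (Status)) {
    return RETURN_INVALID_PARAMETER;
  }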
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib Wu, Jiaxin
@ 2023-12-19 3:54 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:54 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Dong, Eric <eric.dong@intel.com>; Ni,
> Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume
> SmmCpuSyncLib
>
> There is the SmmCpuSyncLib Library class define the SMM CPU sync
> flow, which is aligned with existing SMM CPU driver sync behavior.
> This patch is to consume SmmCpuSyncLib instance directly.
>
> With this change, SMM CPU Sync flow/logic can be customized
> with different implementation no matter for any purpose, e.g.
> performance tuning, handle specific register, etc.
>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 274
> +++++++--------------------
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h | 6 +-
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
> 3 files changed, 68 insertions(+), 213 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index 9b477b6695..4fbb0bba87 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -27,122 +27,10 @@ MM_COMPLETION
> mSmmStartupThisApToken;
> //
> // Processor specified by mPackageFirstThreadIndex[PackageIndex] will do
> the package-scope register check.
> //
> UINT32 *mPackageFirstThreadIndex = NULL;
>
> -/**
> - Performs an atomic compare exchange operation to get semaphore.
> - The compare exchange operation must be performed using
> - MP safe mechanisms.
> -
> - @param Sem IN: 32-bit unsigned integer
> - OUT: original integer - 1
> - @return Original integer - 1
> -
> -**/
> -UINT32
> -WaitForSemaphore (
> - IN OUT volatile UINT32 *Sem
> - )
> -{
> - UINT32 Value;
> -
> - for ( ; ;) {
> - Value = *Sem;
> - if ((Value != 0) &&
> - (InterlockedCompareExchange32 (
> - (UINT32 *)Sem,
> - Value,
> - Value - 1
> - ) == Value))
> - {
> - break;
> - }
> -
> - CpuPause ();
> - }
> -
> - return Value - 1;
> -}
> -
> -/**
> - Performs an atomic compare exchange operation to release semaphore.
> - The compare exchange operation must be performed using
> - MP safe mechanisms.
> -
> - @param Sem IN: 32-bit unsigned integer
> - OUT: original integer + 1
> - @return Original integer + 1
> -
> -**/
> -UINT32
> -ReleaseSemaphore (
> - IN OUT volatile UINT32 *Sem
> - )
> -{
> - UINT32 Value;
> -
> - do {
> - Value = *Sem;
> - } while (Value + 1 != 0 &&
> - InterlockedCompareExchange32 (
> - (UINT32 *)Sem,
> - Value,
> - Value + 1
> - ) != Value);
> -
> - return Value + 1;
> -}
> -
> -/**
> - Performs an atomic compare exchange operation to lock semaphore.
> - The compare exchange operation must be performed using
> - MP safe mechanisms.
> -
> - @param Sem IN: 32-bit unsigned integer
> - OUT: -1
> - @return Original integer
> -
> -**/
> -UINT32
> -LockdownSemaphore (
> - IN OUT volatile UINT32 *Sem
> - )
> -{
> - UINT32 Value;
> -
> - do {
> - Value = *Sem;
> - } while (InterlockedCompareExchange32 (
> - (UINT32 *)Sem,
> - Value,
> - (UINT32)-1
> - ) != Value);
> -
> - return Value;
> -}
> -
> -/**
> - Used for BSP to wait all APs.
> - Wait all APs to performs an atomic compare exchange operation to release
> semaphore.
> -
> - @param NumberOfAPs AP number
> -
> -**/
> -VOID
> -WaitForAllAPs (
> - IN UINTN NumberOfAPs
> - )
> -{
> - UINTN BspIndex;
> -
> - BspIndex = mSmmMpSyncData->BspIndex;
> - while (NumberOfAPs-- > 0) {
> - WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> - }
> -}
> -
> /**
> Used for BSP to release all APs.
> Performs an atomic compare exchange operation to release semaphore
> for each AP.
>
> @@ -154,57 +42,15 @@ ReleaseAllAPs (
> {
> UINTN Index;
>
> for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
> if (IsPresentAp (Index)) {
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
> + SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext,
> Index, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
> }
> }
> }
>
> -/**
> - Used for BSP to release one AP.
> -
> - @param ApSem IN: 32-bit unsigned integer
> - OUT: original integer + 1
> -**/
> -VOID
> -ReleaseOneAp (
> - IN OUT volatile UINT32 *ApSem
> - )
> -{
> - ReleaseSemaphore (ApSem);
> -}
> -
> -/**
> - Used for AP to wait BSP.
> -
> - @param ApSem IN: 32-bit unsigned integer
> - OUT: original integer - 1
> -**/
> -VOID
> -WaitForBsp (
> - IN OUT volatile UINT32 *ApSem
> - )
> -{
> - WaitForSemaphore (ApSem);
> -}
> -
> -/**
> - Used for AP to release BSP.
> -
> - @param BspSem IN: 32-bit unsigned integer
> - OUT: original integer + 1
> -**/
> -VOID
> -ReleaseBsp (
> - IN OUT volatile UINT32 *BspSem
> - )
> -{
> - ReleaseSemaphore (BspSem);
> -}
> -
> /**
> Check whether the index of CPU perform the package level register
> programming during System Management Mode initialization.
>
> The index of Processor specified by
> mPackageFirstThreadIndex[PackageIndex]
> @@ -292,35 +138,35 @@ AllCpusInSmmExceptBlockedDisabled (
>
> BlockedCount = 0;
> DisabledCount = 0;
>
> //
> - // Check to make sure mSmmMpSyncData->Counter is valid and not
> locked.
> + // Check to make sure the CPU arrival count is valid and not locked.
> //
> - ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
> + ASSERT (SmmCpuSyncGetArrivedCpuCount
> (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
>
> //
> // Check whether all CPUs in SMM.
> //
> - if (*mSmmMpSyncData->Counter == mNumberOfCpus) {
> + if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext)
> == mNumberOfCpus) {
> return TRUE;
> }
>
> //
> // Check for the Blocked & Disabled Exceptions Case.
> //
> GetSmmDelayedBlockedDisabledCount (NULL, &BlockedCount,
> &DisabledCount);
>
> //
> - // *mSmmMpSyncData->Counter might be updated by all APs concurrently.
> The value
> + // The CPU arrival count might be updated by all APs concurrently. The
> value
> // can be dynamic changed. If some Aps enter the SMI after the
> BlockedCount &
> - // DisabledCount check, then the *mSmmMpSyncData->Counter will be
> increased, thus
> - // leading the *mSmmMpSyncData->Counter + BlockedCount +
> DisabledCount > mNumberOfCpus.
> + // DisabledCount check, then the CPU arrival count will be increased, thus
> + // leading the retrieved CPU arrival count + BlockedCount +
> DisabledCount > mNumberOfCpus.
> // since the BlockedCount & DisabledCount are local variable, it's ok here
> only for
> // the checking of all CPUs In Smm.
> //
> - if (*mSmmMpSyncData->Counter + BlockedCount + DisabledCount >=
> mNumberOfCpus) {
> + if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext)
> + BlockedCount + DisabledCount >= mNumberOfCpus) {
> return TRUE;
> }
>
> return FALSE;
> }
> @@ -396,11 +242,11 @@ SmmWaitForApArrival (
> PERF_FUNCTION_BEGIN ();
>
> DelayedCount = 0;
> BlockedCount = 0;
>
> - ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
> + ASSERT (SmmCpuSyncGetArrivedCpuCount
> (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
>
> LmceEn = FALSE;
> LmceSignal = FALSE;
> if (mMachineCheckSupported) {
> LmceEn = IsLmceOsEnabled ();
> @@ -447,11 +293,11 @@ SmmWaitForApArrival (
> // d) We don't add code to check SMI disabling status to skip sending IPI to
> SMI disabled APs, because:
> // - In traditional flow, SMI disabling is discouraged.
> // - In relaxed flow, CheckApArrival() will check SMI disabling status
> before calling this function.
> // In both cases, adding SMI-disabling checking code increases
> overhead.
> //
> - if (*mSmmMpSyncData->Counter < mNumberOfCpus) {
> + if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext)
> < mNumberOfCpus) {
> //
> // Send SMI IPIs to bring outside processors in
> //
> for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
> if (!(*(mSmmMpSyncData->CpuData[Index].Present)) &&
> (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId != INVALID_APIC_ID)) {
> @@ -610,18 +456,20 @@ VOID
> BSPHandler (
> IN UINTN CpuIndex,
> IN SMM_CPU_SYNC_MODE SyncMode
> )
> {
> + UINTN CpuCount;
> UINTN Index;
> MTRR_SETTINGS Mtrrs;
> UINTN ApCount;
> BOOLEAN ClearTopLevelSmiResult;
> UINTN PresentCount;
>
> ASSERT (CpuIndex == mSmmMpSyncData->BspIndex);
> - ApCount = 0;
> + CpuCount = 0;
> + ApCount = 0;
>
> PERF_FUNCTION_BEGIN ();
>
> //
> // Flag BSP's presence
> @@ -659,28 +507,31 @@ BSPHandler (
> // Wait for APs to arrive
> //
> SmmWaitForApArrival ();
>
> //
> - // Lock the counter down and retrieve the number of APs
> + // Lock door for late coming CPU checkin and retrieve the Arrived
> number of APs
> //
> *mSmmMpSyncData->AllCpusInSync = TRUE;
> - ApCount = LockdownSemaphore
> (mSmmMpSyncData->Counter) - 1;
> +
> + SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex,
> &CpuCount);
> +
> + ApCount = CpuCount - 1;
>
> //
> // Wait for all APs to get ready for programming MTRRs
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount,
> CpuIndex);
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Signal all APs it's time for backup MTRRs
> //
> ReleaseAllAPs ();
>
> //
> - // WaitForAllAPs() may wait for ever if an AP happens to enter SMM
> at
> + // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to
> enter SMM at
> // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops
> has been set
> // to a large enough value to avoid this situation.
> // Note: For HT capable CPUs, threads within a core share the same
> set of MTRRs.
> // We do the backup first and then set MTRR to avoid race condition
> for threads
> // in the same core.
> @@ -688,28 +539,28 @@ BSPHandler (
> MtrrGetAllMtrrs (&Mtrrs);
>
> //
> // Wait for all APs to complete their MTRR saving
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext,
> ApCount, CpuIndex);
>
> //
> // Let all processors program SMM MTRRs together
> //
> ReleaseAllAPs ();
>
> //
> - // WaitForAllAPs() may wait for ever if an AP happens to enter SMM
> at
> + // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to
> enter SMM at
> // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops
> has been set
> // to a large enough value to avoid this situation.
> //
> ReplaceOSMtrrs (CpuIndex);
>
> //
> // Wait for all APs to complete their MTRR programming
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext,
> ApCount, CpuIndex);
> }
> }
>
> //
> // The BUSY lock is initialized to Acquired state
> @@ -741,14 +592,18 @@ BSPHandler (
> // make those APs to exit SMI synchronously. APs which arrive later will
> be excluded and
> // will run through freely.
> //
> if ((SyncMode != SmmCpuSyncModeTradition)
> && !SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> - // Lock the counter down and retrieve the number of APs
> + // Lock door for late coming CPU checkin and retrieve the Arrived
> number of APs
> //
> *mSmmMpSyncData->AllCpusInSync = TRUE;
> - ApCount = LockdownSemaphore
> (mSmmMpSyncData->Counter) - 1;
> +
> + SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex,
> &CpuCount);
> +
> + ApCount = CpuCount - 1;
> +
> //
> // Make sure all APs have their Present flag set
> //
> while (TRUE) {
> PresentCount = 0;
> @@ -771,11 +626,11 @@ BSPHandler (
> ReleaseAllAPs ();
>
> //
> // Wait for all APs to complete their pending tasks
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount,
> CpuIndex);
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Signal APs to restore MTRRs
> //
> @@ -788,11 +643,11 @@ BSPHandler (
> MtrrSetAllMtrrs (&Mtrrs);
>
> //
> // Wait for all APs to complete MTRR programming
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount,
> CpuIndex);
> }
>
> //
> // Stop source level debug in BSP handler, the code below will not be
> // debugged.
> @@ -816,11 +671,11 @@ BSPHandler (
>
> //
> // Gather APs to exit SMM synchronously. Note the Present flag is cleared
> by now but
> // WaitForAllAps does not depend on the Present flag.
> //
> - WaitForAllAPs (ApCount);
> + SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount,
> CpuIndex);
>
> //
> // At this point, all APs should have exited from APHandler().
> // Migrate the SMM MP performance logging to standard SMM
> performance logging.
> // Any SMM MP performance logging after this point will be migrated in
> next SMI.
> @@ -842,11 +697,11 @@ BSPHandler (
> }
>
> //
> // Allow APs to check in from this point on
> //
> - *mSmmMpSyncData->Counter = 0;
> + SmmCpuSyncContextReset (mSmmMpSyncData->SyncContext);
> *mSmmMpSyncData->AllCpusInSync = FALSE;
> mSmmMpSyncData->AllApArrivedWithException = FALSE;
>
> PERF_FUNCTION_END ();
> }
> @@ -912,21 +767,21 @@ APHandler (
>
> if (!(*mSmmMpSyncData->InsideSmm)) {
> //
> // Give up since BSP is unable to enter SMM
> // and signal the completion of this AP
> - // Reduce the mSmmMpSyncData->Counter!
> + // Reduce the CPU arrival count!
> //
> - WaitForSemaphore (mSmmMpSyncData->Counter);
> + SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext,
> CpuIndex);
> return;
> }
> } else {
> //
> // Don't know BSP index. Give up without sending IPI to BSP.
> - // Reduce the mSmmMpSyncData->Counter!
> + // Reduce the CPU arrival count!
> //
> - WaitForSemaphore (mSmmMpSyncData->Counter);
> + SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext,
> CpuIndex);
> return;
> }
> }
>
> //
> @@ -942,50 +797,50 @@ APHandler (
>
> if ((SyncMode == SmmCpuSyncModeTradition) ||
> SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Notify BSP of arrival at this point
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
> }
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Wait for the signal from BSP to backup MTRRs
> //
> - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Backup OS MTRRs
> //
> MtrrGetAllMtrrs (&Mtrrs);
>
> //
> // Signal BSP the completion of this AP
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Wait for BSP's signal to program MTRRs
> //
> - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Replace OS MTRRs with SMI MTRRs
> //
> ReplaceOSMtrrs (CpuIndex);
>
> //
> // Signal BSP the completion of this AP
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
> }
>
> while (TRUE) {
> //
> // Wait for something to happen
> //
> - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Check if BSP wants to exit SMM
> //
> if (!(*mSmmMpSyncData->InsideSmm)) {
> @@ -1021,16 +876,16 @@ APHandler (
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Notify BSP the readiness of this AP to program MTRRs
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Wait for the signal from BSP to program MTRRs
> //
> - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Restore OS MTRRs
> //
> SmmCpuFeaturesReenableSmrr ();
> @@ -1038,26 +893,26 @@ APHandler (
> }
>
> //
> // Notify BSP the readiness of this AP to Reset states/semaphore for this
> processor
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Wait for the signal from BSP to Reset states/semaphore for this
> processor
> //
> - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
>
> //
> // Reset states/semaphore for this processor
> //
> *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
>
> //
> // Notify BSP the readiness of this AP to exit SMM
> //
> - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex,
> BspIndex);
> }
>
> /**
> Checks whether the input token is the current used token.
>
> @@ -1321,11 +1176,11 @@ InternalSmmStartupThisAp (
> mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus;
> if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) {
> *mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY;
> }
>
> - ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, CpuIndex,
> gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
>
> if (Token == NULL) {
> AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
> ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
> }
> @@ -1725,14 +1580,15 @@ SmiRendezvous (
> //
> goto Exit;
> } else {
> //
> // Signal presence of this processor
> - // mSmmMpSyncData->Counter is increased here!
> - // "ReleaseSemaphore (mSmmMpSyncData->Counter) == 0" means BSP
> has already ended the synchronization.
> + // CPU check in here!
> + // "SmmCpuSyncCheckInCpu (mSmmMpSyncData->SyncContext,
> CpuIndex)" return error means failed
> + // to check in CPU. BSP has already ended the synchronization.
> //
> - if (ReleaseSemaphore (mSmmMpSyncData->Counter) == 0) {
> + if (RETURN_ERROR (SmmCpuSyncCheckInCpu
> (mSmmMpSyncData->SyncContext, CpuIndex))) {
> //
> // BSP has already ended the synchronization, so QUIT!!!
> // Existing AP is too late now to enter SMI since BSP has already
> ended the synchronization!!!
> //
>
> @@ -1824,12 +1680,10 @@ SmiRendezvous (
> } else {
> APHandler (CpuIndex, ValidSmi,
> mSmmMpSyncData->EffectiveSyncMode);
> }
> }
>
> - ASSERT (*mSmmMpSyncData->CpuData[CpuIndex].Run == 0);
> -
> //
> // Wait for BSP's signal to exit SMI
> //
> while (*mSmmMpSyncData->AllCpusInSync) {
> CpuPause ();
> @@ -1945,12 +1799,10 @@ InitializeSmmCpuSemaphores (
> SemaphoreBlock = AllocatePages (Pages);
> ASSERT (SemaphoreBlock != NULL);
> ZeroMem (SemaphoreBlock, TotalSize);
>
> SemaphoreAddr =
> (UINTN)SemaphoreBlock;
> - mSmmCpuSemaphores.SemaphoreGlobal.Counter = (UINT32
> *)SemaphoreAddr;
> - SemaphoreAddr +=
> SemaphoreSize;
> mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm = (BOOLEAN
> *)SemaphoreAddr;
> SemaphoreAddr +=
> SemaphoreSize;
> mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync = (BOOLEAN
> *)SemaphoreAddr;
> SemaphoreAddr +=
> SemaphoreSize;
> mSmmCpuSemaphores.SemaphoreGlobal.PFLock = (SPIN_LOCK
> *)SemaphoreAddr;
> @@ -1960,12 +1812,10 @@ InitializeSmmCpuSemaphores (
> SemaphoreAddr += SemaphoreSize;
>
> SemaphoreAddr =
> (UINTN)SemaphoreBlock + GlobalSemaphoresSize;
> mSmmCpuSemaphores.SemaphoreCpu.Busy = (SPIN_LOCK
> *)SemaphoreAddr;
> SemaphoreAddr += ProcessorCount *
> SemaphoreSize;
> - mSmmCpuSemaphores.SemaphoreCpu.Run = (UINT32
> *)SemaphoreAddr;
> - SemaphoreAddr += ProcessorCount *
> SemaphoreSize;
> mSmmCpuSemaphores.SemaphoreCpu.Present = (BOOLEAN
> *)SemaphoreAddr;
>
> mPFLock =
> mSmmCpuSemaphores.SemaphoreGlobal.PFLock;
> mConfigSmmCodeAccessCheckLock =
> mSmmCpuSemaphores.SemaphoreGlobal.CodeAccessCheckLock;
>
> @@ -1980,10 +1830,12 @@ VOID
> EFIAPI
> InitializeMpSyncData (
> VOID
> )
> {
> + RETURN_STATUS Status;
> +
> UINTN CpuIndex;
>
> if (mSmmMpSyncData != NULL) {
> //
> // mSmmMpSyncDataSize includes one structure of
> SMM_DISPATCHER_MP_SYNC_DATA, one
> @@ -2009,32 +1861,36 @@ InitializeMpSyncData (
> }
> }
>
> mSmmMpSyncData->EffectiveSyncMode = mCpuSmmSyncMode;
>
> - mSmmMpSyncData->Counter =
> mSmmCpuSemaphores.SemaphoreGlobal.Counter;
> + Status = SmmCpuSyncContextInit
> (gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus,
> &mSmmMpSyncData->SyncContext);
> + if (EFI_ERROR (Status)) {
> + DEBUG ((DEBUG_ERROR, "InitializeMpSyncData:
> SmmCpuSyncContextInit return error %r!\n", Status));
> + CpuDeadLoop ();
> + return;
> + }
> +
> + ASSERT (mSmmMpSyncData->SyncContext != NULL);
> +
> mSmmMpSyncData->InsideSmm =
> mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm;
> mSmmMpSyncData->AllCpusInSync =
> mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync;
> ASSERT (
> - mSmmMpSyncData->Counter != NULL &&
> mSmmMpSyncData->InsideSmm != NULL &&
> + mSmmMpSyncData->InsideSmm != NULL &&
> mSmmMpSyncData->AllCpusInSync != NULL
> );
> - *mSmmMpSyncData->Counter = 0;
> *mSmmMpSyncData->InsideSmm = FALSE;
> *mSmmMpSyncData->AllCpusInSync = FALSE;
>
> mSmmMpSyncData->AllApArrivedWithException = FALSE;
>
> for (CpuIndex = 0; CpuIndex <
> gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; CpuIndex++) {
> mSmmMpSyncData->CpuData[CpuIndex].Busy =
> (SPIN_LOCK
> *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Busy + mSemaphoreSize *
> CpuIndex);
> - mSmmMpSyncData->CpuData[CpuIndex].Run =
> - (UINT32 *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Run +
> mSemaphoreSize * CpuIndex);
> mSmmMpSyncData->CpuData[CpuIndex].Present =
> (BOOLEAN
> *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Present +
> mSemaphoreSize * CpuIndex);
> *(mSmmMpSyncData->CpuData[CpuIndex].Busy) = 0;
> - *(mSmmMpSyncData->CpuData[CpuIndex].Run) = 0;
> *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
> }
> }
> }
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index f18345881b..a2fa4f6734 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -52,10 +52,11 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
> #include <Library/PeCoffGetEntryPointLib.h>
> #include <Library/RegisterCpuFeaturesLib.h>
> #include <Library/PerformanceLib.h>
> #include <Library/CpuPageTableLib.h>
> #include <Library/MmSaveStateLib.h>
> +#include <Library/SmmCpuSyncLib.h>
>
> #include <AcpiCpuData.h>
> #include <CpuHotPlugData.h>
>
> #include <Register/Intel/Cpuid.h>
> @@ -403,11 +404,10 @@ SmmRelocationSemaphoreComplete (
> ///
> typedef struct {
> SPIN_LOCK *Busy;
> volatile EFI_AP_PROCEDURE2 Procedure;
> volatile VOID *Parameter;
> - volatile UINT32 *Run;
> volatile BOOLEAN *Present;
> PROCEDURE_TOKEN *Token;
> EFI_STATUS *Status;
> } SMM_CPU_DATA_BLOCK;
>
> @@ -421,29 +421,28 @@ typedef struct {
> //
> // Pointer to an array. The array should be located immediately after this
> structure
> // so that UC cache-ability can be set together.
> //
> SMM_CPU_DATA_BLOCK *CpuData;
> - volatile UINT32 *Counter;
> volatile UINT32 BspIndex;
> volatile BOOLEAN *InsideSmm;
> volatile BOOLEAN *AllCpusInSync;
> volatile SMM_CPU_SYNC_MODE EffectiveSyncMode;
> volatile BOOLEAN SwitchBsp;
> volatile BOOLEAN *CandidateBsp;
> volatile BOOLEAN AllApArrivedWithException;
> EFI_AP_PROCEDURE StartupProcedure;
> VOID *StartupProcArgs;
> + SMM_CPU_SYNC_CONTEXT *SyncContext;
> } SMM_DISPATCHER_MP_SYNC_DATA;
>
> #define SMM_PSD_OFFSET 0xfb00
>
> ///
> /// All global semaphores' pointer
> ///
> typedef struct {
> - volatile UINT32 *Counter;
> volatile BOOLEAN *InsideSmm;
> volatile BOOLEAN *AllCpusInSync;
> SPIN_LOCK *PFLock;
> SPIN_LOCK *CodeAccessCheckLock;
> } SMM_CPU_SEMAPHORE_GLOBAL;
> @@ -451,11 +450,10 @@ typedef struct {
> ///
> /// All semaphores for each processor
> ///
> typedef struct {
> SPIN_LOCK *Busy;
> - volatile UINT32 *Run;
> volatile BOOLEAN *Present;
> SPIN_LOCK *Token;
> } SMM_CPU_SEMAPHORE_CPU;
>
> ///
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> index 372596f24c..793220aba3 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> @@ -101,10 +101,11 @@
> SmmCpuFeaturesLib
> PeCoffGetEntryPointLib
> PerformanceLib
> CpuPageTableLib
> MmSaveStateLib
> + SmmCpuSyncLib
>
> [Protocols]
> gEfiSmmAccess2ProtocolGuid ## CONSUMES
> gEfiSmmConfigurationProtocolGuid ## PRODUCES
> gEfiSmmCpuProtocolGuid ## PRODUCES
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement Wu, Jiaxin
@ 2023-12-19 3:55 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:55 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Dong, Eric <eric.dong@intel.com>; Ni,
> Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify
> RunningApCount decrement
>
> To decrease the count of RunningApCount, InterlockedDecrement is
> enough to achieve that.
>
> This patch is to simplify RunningApCount decrement.
>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index 54542262a2..9b477b6695 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -1450,11 +1450,11 @@ InternalSmmStartupAllAPs (
>
> //
> // Decrease the count to mark this processor(AP or BSP) as finished.
> //
> if (ProcToken != NULL) {
> - WaitForSemaphore (&ProcToken->RunningApCount);
> + InterlockedDecrement (&ProcToken->RunningApCount);
> }
> }
> }
>
> ReleaseAllAPs ();
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: Specifies SmmCpuSyncLib instance
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: " Wu, Jiaxin
@ 2023-12-19 3:55 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:55 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Guo, Rhodes, Sean, Lu, James, Guo, Gua,
Zeng, Star
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Dong, Guo <guo.dong@intel.com>;
> Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>;
> Guo, Gua <gua.guo@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>
> Subject: [PATCH v4 6/8] UefiPayloadPkg: Specifies SmmCpuSyncLib instance
>
> This patch is to specify SmmCpuSyncLib instance for UefiPayloadPkg.
>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Guo Dong <guo.dong@intel.com>
> Cc: Sean Rhodes <sean@starlabs.systems>
> Cc: James Lu <james.lu@intel.com>
> Cc: Gua Guo <gua.guo@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> Reviewed-by: Gua Guo <gua.guo@intel.com>
> ---
> UefiPayloadPkg/UefiPayloadPkg.dsc | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc
> b/UefiPayloadPkg/UefiPayloadPkg.dsc
> index a65f9d5b83..b8b13ad201 100644
> --- a/UefiPayloadPkg/UefiPayloadPkg.dsc
> +++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
> @@ -253,10 +253,11 @@
> #
>    MtrrLib|UefiCpuPkg/Library/MtrrLib/MtrrLib.inf
>    LocalApicLib|UefiCpuPkg/Library/BaseXApicX2ApicLib/BaseXApicX2ApicLib.inf
>    MicrocodeLib|UefiCpuPkg/Library/MicrocodeLib/MicrocodeLib.inf
>    CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
> +  SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>
> #
> # Platform
> #
> !if $(CPU_TIMER_LIB_ENABLE) == TRUE && $(UNIVERSAL_PAYLOAD) ==
> TRUE
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance Wu, Jiaxin
@ 2023-12-19 3:55 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:55 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Ard Biesheuvel, Yao, Jiewen, Justen, Jordan L,
Dong, Eric, Zeng, Star, Kumar, Rahul R, Gerd Hoffmann
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Ard Biesheuvel
> <ardb+tianocore@kernel.org>; Yao, Jiewen <jiewen.yao@intel.com>; Justen,
> Jordan L <jordan.l.justen@intel.com>; Dong, Eric <eric.dong@intel.com>; Ni,
> Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance
>
> This patch is to specify SmmCpuSyncLib instance for OvmfPkg.
>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> OvmfPkg/CloudHv/CloudHvX64.dsc | 1 +
> OvmfPkg/OvmfPkgIa32.dsc | 1 +
> OvmfPkg/OvmfPkgIa32X64.dsc | 1 +
> OvmfPkg/OvmfPkgX64.dsc | 1 +
> 4 files changed, 4 insertions(+)
>
> diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc
> b/OvmfPkg/CloudHv/CloudHvX64.dsc
> index 1660548e07..af594959a9 100644
> --- a/OvmfPkg/CloudHv/CloudHvX64.dsc
> +++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
> @@ -907,10 +907,11 @@
> }
>    UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
>      <LibraryClasses>
>        SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
>        SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
> +      SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>    }
>
> #
> # Variable driver stack (SMM)
> #
> diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
> index 6e8488007c..28379961a7 100644
> --- a/OvmfPkg/OvmfPkgIa32.dsc
> +++ b/OvmfPkg/OvmfPkgIa32.dsc
> @@ -952,10 +952,11 @@
>    UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
>      <LibraryClasses>
>        SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
>        SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
>        MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
> +      SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>    }
>
> #
> # Variable driver stack (SMM)
> #
> diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
> index 413ea71984..5e9eee628a 100644
> --- a/OvmfPkg/OvmfPkgIa32X64.dsc
> +++ b/OvmfPkg/OvmfPkgIa32X64.dsc
> @@ -970,10 +970,11 @@
>    UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
>      <LibraryClasses>
>        SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
>        SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
>        MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
> +      SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>    }
>
> #
> # Variable driver stack (SMM)
> #
> diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
> index f000092d70..bf4c7906c4 100644
> --- a/OvmfPkg/OvmfPkgX64.dsc
> +++ b/OvmfPkg/OvmfPkgX64.dsc
> @@ -1040,10 +1040,11 @@
>    UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf {
>      <LibraryClasses>
>        SmmCpuPlatformHookLib|OvmfPkg/Library/SmmCpuPlatformHookLibQemu/SmmCpuPlatformHookLibQemu.inf
>        SmmCpuFeaturesLib|OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf
>        MmSaveStateLib|UefiCpuPkg/Library/MmSaveStateLib/AmdMmSaveStateLib.inf
> +      SmmCpuSyncLib|UefiCpuPkg/Library/SmmCpuSyncLib/SmmCpuSyncLib.inf
>    }
>
> #
> # Variable driver stack (SMM)
> #
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class Wu, Jiaxin
@ 2023-12-19 3:55 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:55 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Dong, Eric <eric.dong@intel.com>; Ni,
> Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class
>
> Intel is planning to provide different SMM CPU Sync implementation
> along with some specific registers to improve the SMI performance,
> hence need SmmCpuSyncLib Library for Intel.
>
> This patch is to:
> 1.Adds SmmCpuSyncLib Library class in UefiCpuPkg.dec.
> 2.Adds SmmCpuSyncLib.h function declaration header file.
>
> For the new SmmCpuSyncLib, it provides 3 sets of APIs:
>
> 1. ContextInit/ContextDeinit/ContextReset:
> ContextInit() is called in driver's entrypoint to allocate and
> initialize the SMM CPU Sync context. ContextDeinit() is called in
> driver's unload function to deinitialize SMM CPU Sync context.
> ContextReset() is called before the CPU exits SMI, which allows the CPU to
> check into the next SMI from this point.
>
> 2. GetArrivedCpuCount/CheckInCpu/CheckOutCpu/LockDoor:
> When SMI happens, all processors including the BSP enter SMM mode by
> calling CheckInCpu(). The elected BSP calls LockDoor() so that
> CheckInCpu() will return the error code after that. CheckOutCpu() can
> be called in error handling flow for the CPU who calls CheckInCpu()
> earlier. GetArrivedCpuCount() returns the number of checked-in CPUs.
>
> 3. WaitForAPs/ReleaseOneAp/WaitForBsp/ReleaseBsp
> WaitForAPs() & ReleaseOneAp() are called from the BSP to wait for the
> given number of APs and to release one specific AP. WaitForBsp() &
> ReleaseBsp() are called from APs to wait for and release the BSP. The 4
> APIs are used to synchronize the running flow among BSP and APs. The BSP
> and AP sync flow can be easily understood as below (a usage sketch
> follows this summary):
> BSP: ReleaseOneAp --> AP: WaitForBsp
> BSP: WaitForAPs <-- AP: ReleaseBsp
>
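A condensed usage sketch of how an SMI handler might drive these three API groups. This is illustrative only: the real consumer is PiSmmCpuDxeSmm in patch 8/8, the Example* function names and mSyncContext are hypothetical, return-status handling is omitted, and the BSP flow assumes every AP has already checked in (the real driver waits for arrival or a timeout before locking the door).

  #include <Uefi/UefiBaseType.h>
  #include <Library/SmmCpuSyncLib.h>

  STATIC SMM_CPU_SYNC_CONTEXT  *mSyncContext = NULL;

  //
  // Driver entry: allocate and initialize the sync context once.
  //
  VOID
  ExampleEntryPoint (
    IN UINTN  NumberOfCpus
    )
  {
    SmmCpuSyncContextInit (NumberOfCpus, &mSyncContext);
  }

  //
  // Elected BSP flow inside one SMI.
  //
  VOID
  ExampleBspFlow (
    IN UINTN  BspIndex,
    IN UINTN  NumberOfCpus
    )
  {
    UINTN  ArrivedCount;
    UINTN  Index;

    SmmCpuSyncCheckInCpu (mSyncContext, BspIndex);

    //
    // Close the door: later SmmCpuSyncCheckInCpu() calls are rejected,
    // and ArrivedCount captures how many CPUs made it in.
    //
    SmmCpuSyncLockDoor (mSyncContext, BspIndex, &ArrivedCount);

    //
    // Release every checked-in AP, then wait until each has signaled back.
    //
    for (Index = 0; Index < NumberOfCpus; Index++) {
      if (Index != BspIndex) {
        SmmCpuSyncReleaseOneAp (mSyncContext, Index, BspIndex);
      }
    }

    SmmCpuSyncWaitForAPs (mSyncContext, ArrivedCount - 1, BspIndex);

    //
    // One CPU resets the context so the next SMI starts from a clean state.
    //
    SmmCpuSyncContextReset (mSyncContext);
  }

  //
  // AP flow inside the same SMI.
  //
  VOID
  ExampleApFlow (
    IN UINTN  CpuIndex,
    IN UINTN  BspIndex
    )
  {
    SmmCpuSyncCheckInCpu (mSyncContext, CpuIndex);

    //
    // Block until the BSP releases this AP, do the per-AP work, then
    // release the BSP so its SmmCpuSyncWaitForAPs() can complete.
    //
    SmmCpuSyncWaitForBsp (mSyncContext, CpuIndex, BspIndex);
    SmmCpuSyncReleaseBsp (mSyncContext, CpuIndex, BspIndex);
  }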
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> UefiCpuPkg/Include/Library/SmmCpuSyncLib.h | 290
> +++++++++++++++++++++++++++++
> UefiCpuPkg/UefiCpuPkg.dec | 3 +
> 2 files changed, 293 insertions(+)
> create mode 100644 UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
>
> diff --git a/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
> b/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
> new file mode 100644
> index 0000000000..4d273095c9
> --- /dev/null
> +++ b/UefiCpuPkg/Include/Library/SmmCpuSyncLib.h
> @@ -0,0 +1,290 @@
> +/** @file
> + Library that provides SMM CPU Sync related operations.
> +
> + The lib provides 3 sets of APIs:
> + 1. ContextInit/ContextDeinit/ContextReset:
> +
> + ContextInit() is called in driver's entrypoint to allocate and initialize the
> SMM CPU Sync context.
> + ContextDeinit() is called in driver's unload function to deinitialize the
> SMM CPU Sync context.
> + ContextReset() is called by one of the CPUs after all CPUs are ready to exit SMI, which allows the CPU to check into the next SMI from this point.
> +
> + 2. GetArrivedCpuCount/CheckInCpu/CheckOutCpu/LockDoor:
> + When SMI happens, all processors including the BSP enter SMM mode by calling CheckInCpu().
> + CheckOutCpu() can be called in error handling flow for the CPU who
> calls CheckInCpu() earlier.
> + The elected BSP calls LockDoor() so that CheckInCpu() and
> CheckOutCpu() will return the error code after that.
> + GetArrivedCpuCount() returns the number of checked-in CPUs.
> +
> + 3. WaitForAPs/ReleaseOneAp/WaitForBsp/ReleaseBsp
> + WaitForAPs() & ReleaseOneAp() are called from the BSP to wait for the given number of APs and to release one specific AP.
> + WaitForBsp() & ReleaseBsp() are called from APs to wait for and release the BSP.
> + The 4 APIs are used to synchronize the running flow among BSP and APs.
> + The BSP and AP sync flow can be easily understood as below:
> + BSP: ReleaseOneAp --> AP: WaitForBsp
> + BSP: WaitForAPs <-- AP: ReleaseBsp
> +
> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +
> +**/
> +
> +#ifndef SMM_CPU_SYNC_LIB_H_
> +#define SMM_CPU_SYNC_LIB_H_
> +
> +#include <Uefi/UefiBaseType.h>
> +
> +//
> +// Opaque structure for SMM CPU Sync context.
> +//
> +typedef struct SMM_CPU_SYNC_CONTEXT SMM_CPU_SYNC_CONTEXT;
> +
> +/**
> + Create and initialize the SMM CPU Sync context. It is to allocate and
> initialize the
> + SMM CPU Sync context.
> +
> + If Context is NULL, then ASSERT().
> +
> + @param[in] NumberOfCpus The number of Logical
> Processors in the system.
> + @param[out] Context Pointer to the new created and
> initialized SMM CPU Sync context object.
> + NULL will be returned if any
> error happen during init.
> +
> + @retval RETURN_SUCCESS The SMM CPU Sync context
> was successful created and initialized.
> + @retval RETURN_OUT_OF_RESOURCES There are not enough
> resources available to create and initialize SMM CPU Sync context.
> + @retval RETURN_BUFFER_TOO_SMALL Overflow happened while computing the size of the SMM CPU Sync context.
> +
> +**/
> +RETURN_STATUS
> +EFIAPI
> +SmmCpuSyncContextInit (
> + IN UINTN NumberOfCpus,
> + OUT SMM_CPU_SYNC_CONTEXT **Context
> + );
> +
> +/**
> + Deinit an allocated SMM CPU Sync context. The resources allocated in
> SmmCpuSyncContextInit() will
> + be freed.
> +
> + If Context is NULL, then ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync context
> object to be deinitialized.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextDeinit (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context
> + );
> +
> +/**
> + Reset SMM CPU Sync context. SMM CPU Sync context will be reset to the
> initialized state.
> +
> + This function is called by one of CPUs after all CPUs are ready to exit SMI,
> which allows CPU to
> + check into the next SMI from this point.
> +
> + If Context is NULL, then ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync context
> object to be reset.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncContextReset (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context
> + );
> +
> +/**
> + Get current number of arrived CPU in SMI.
> +
> + The BSP might need to know the current number of arrived CPUs in SMI to make sure all APs are in SMI. This API can be used for that purpose.
> +
> + If Context is NULL, then ASSERT().
> +
> + @param[in] Context Pointer to the SMM CPU Sync context
> object.
> +
> + @retval Current number of arrived CPU in SMI.
> +
> +**/
> +UINTN
> +EFIAPI
> +SmmCpuSyncGetArrivedCpuCount (
> + IN SMM_CPU_SYNC_CONTEXT *Context
> + );
> +
> +/**
> + Performs an atomic operation to check in CPU.
> +
> + When SMI happens, all processors including the BSP enter SMM mode by calling SmmCpuSyncCheckInCpu().
> +
> + If Context is NULL, then ASSERT().
> + If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Check in CPU index.
> +
> + @retval RETURN_SUCCESS Check in CPU (CpuIndex)
> successfully.
> + @retval RETURN_ABORTED Check in CPU failed due to
> SmmCpuSyncLockDoor() has been called by one elected CPU.
> +
> +**/
> +RETURN_STATUS
> +EFIAPI
> +SmmCpuSyncCheckInCpu (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex
> + );
> +
> +/**
> + Performs an atomic operation to check out CPU.
> +
> + This function can be called in error handling flow for the CPU who calls
> CheckInCpu() earlier.
> + The caller shall make sure the CPU specified by CpuIndex has already
> checked-in.
> +
> + If Context is NULL, then ASSERT().
> + If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Check out CPU index.
> +
> + @retval RETURN_SUCCESS Check out CPU (CpuIndex)
> successfully.
> + @retval RETURN_ABORTED Check out CPU failed due to
> SmmCpuSyncLockDoor() has been called by one elected CPU.
> +
> +**/
> +RETURN_STATUS
> +EFIAPI
> +SmmCpuSyncCheckOutCpu (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex
> + );
> +
> +/**
> + Performs an atomic operation to lock the door for CPU check-in and check-out. After this function:
> + CPU can not check in via SmmCpuSyncCheckInCpu().
> + CPU can not check out via SmmCpuSyncCheckOutCpu().
> +
> + The CPU specified by CpuIndex is elected to lock door. The caller shall
> make sure the CpuIndex
> + is the actual CPU calling this function to avoid the undefined behavior.
> +
> + If Context is NULL, then ASSERT().
> + If CpuCount is NULL, then ASSERT().
> + If CpuIndex exceeds the range of all CPUs in the system, then ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Indicate which CPU to lock door.
> + @param[out] CpuCount Number of arrived CPUs in SMI after the door is locked.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncLockDoor (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex,
> + OUT UINTN *CpuCount
> + );
> +
> +/**
> + Used by the BSP to wait for APs.
> +
> + The number of APs to be waited for is specified by NumberOfAPs. The BSP is specified by BspIndex.
> + The caller shall make sure the BspIndex is the actual CPU calling this
> function to avoid the undefined behavior.
> + The caller shall make sure the NumberOfAPs have already checked-in to
> avoid the undefined behavior.
> +
> + If Context is NULL, then ASSERT().
> + If NumberOfAPs >= All CPUs in system, then ASSERT().
> + If BspIndex exceeds the range of all CPUs in the system, then ASSERT().
> +
> + Note:
> + This function works in blocking mode; it returns only after the specified number of APs have released the BSP by calling SmmCpuSyncReleaseBsp():
> + BSP: WaitForAPs <-- AP: ReleaseBsp
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] NumberOfAPs Number of APs the BSP needs to wait for.
> + @param[in] BspIndex The BSP Index to wait for APs.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForAPs (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN NumberOfAPs,
> + IN UINTN BspIndex
> + );
> +
> +/**
> + Used by the BSP to release one AP.
> +
> + The AP is specified by CpuIndex. The BSP is specified by BspIndex.
> + The caller shall make sure the BspIndex is the actual CPU calling this
> function to avoid the undefined behavior.
> + The caller shall make sure the CpuIndex has already checked-in to avoid
> the undefined behavior.
> +
> + If Context is NULL, then ASSERT().
> + If CpuIndex == BspIndex, then ASSERT().
> + If BspIndex or CpuIndex exceed the range of all CPUs in the system, then
> ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Indicate which AP needs to be released.
> + @param[in] BspIndex The BSP Index to release AP.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseOneAp (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + );
> +
> +/**
> + Used by the AP to wait for the BSP.
> +
> + The AP is specified by CpuIndex.
> + The caller shall make sure the CpuIndex is the actual CPU calling this
> function to avoid the undefined behavior.
> + The BSP is specified by BspIndex.
> +
> + If Context is NULL, then ASSERT().
> + If CpuIndex == BspIndex, then ASSERT().
> + If BspIndex or CpuIndex exceed the range of all CPUs in the system, then
> ASSERT().
> +
> + Note:
> + This function works in blocking mode; it returns only after the AP is released by the BSP calling SmmCpuSyncReleaseOneAp():
> + BSP: ReleaseOneAp --> AP: WaitForBsp
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Indicate which AP waits for the BSP.
> + @param[in] BspIndex The BSP Index to be waited.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncWaitForBsp (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + );
> +
> +/**
> + Used by the AP to release BSP.
> +
> + The AP is specified by CpuIndex.
> + The caller shall make sure the CpuIndex is the actual CPU calling this
> function to avoid the undefined behavior.
> + The BSP is specified by BspIndex.
> +
> + If Context is NULL, then ASSERT().
> + If CpuIndex == BspIndex, then ASSERT().
> + If BspIndex or CpuIndex exceed the range of all CPUs in the system, then
> ASSERT().
> +
> + @param[in,out] Context Pointer to the SMM CPU Sync
> context object.
> + @param[in] CpuIndex Indicate which AP releases the BSP.
> + @param[in] BspIndex The BSP Index to be released.
> +
> +**/
> +VOID
> +EFIAPI
> +SmmCpuSyncReleaseBsp (
> + IN OUT SMM_CPU_SYNC_CONTEXT *Context,
> + IN UINTN CpuIndex,
> + IN UINTN BspIndex
> + );
> +
> +#endif
> diff --git a/UefiCpuPkg/UefiCpuPkg.dec b/UefiCpuPkg/UefiCpuPkg.dec
> index 61bd34ef17..cc785a3222 100644
> --- a/UefiCpuPkg/UefiCpuPkg.dec
> +++ b/UefiCpuPkg/UefiCpuPkg.dec
> @@ -62,10 +62,13 @@
>    CpuPageTableLib|Include/Library/CpuPageTableLib.h
>
>    ## @libraryclass  Provides functions for manipulating smram savestate registers.
>    MmSaveStateLib|Include/Library/MmSaveStateLib.h
>
> +  ## @libraryclass  Provides functions for SMM CPU Sync Operation.
> +  SmmCpuSyncLib|Include/Library/SmmCpuSyncLib.h
> +
>  [LibraryClasses.RISCV64]
>    ## @libraryclass  Provides functions to manage MMU features on RISCV64 CPUs.
>    ##
>    RiscVMmuLib|Include/Library/BaseRiscVMmuLib.h
>
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP Wu, Jiaxin
@ 2023-12-19 3:55 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 3:55 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Kumar, Rahul R,
Gerd Hoffmann
Reviewed-by: Ray Ni <ray.ni@intel.com>
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <lersek@redhat.com>; Dong, Eric <eric.dong@intel.com>; Ni,
> Ray <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize
> Semaphore Sync between BSP and AP
>
> This patch is to define 3 new functions (WaitForBsp & ReleaseBsp &
> ReleaseOneAp) used for the semaphore sync between BSP & AP. With the
> change, the BSP and AP Sync flow will be easy to understand as below:
> BSP: ReleaseAllAPs or ReleaseOneAp --> AP: WaitForBsp
> BSP: WaitForAllAPs <-- AP: ReleaseBsp
>
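The three wrappers introduced in the diff below are thin, intention-revealing names over the driver's existing WaitForSemaphore()/ReleaseSemaphore() primitives. For readers unfamiliar with those primitives, here is a simplified reconstruction of how such interlocked semaphores are typically implemented; the Example* names are illustrative, and this is not the literal PiSmmCpuDxeSmm code (which also handles a locked MAX_UINT32 state):

  #include <Base.h>
  #include <Library/BaseLib.h>
  #include <Library/SynchronizationLib.h>

  //
  // Illustrative wait: spin until the semaphore is non-zero, then
  // decrement it atomically with a compare-exchange.
  //
  VOID
  ExampleWaitForSemaphore (
    IN OUT volatile UINT32  *Sem
    )
  {
    UINT32  Value;

    for ( ; ;) {
      Value = *Sem;
      if ((Value != 0) &&
          (InterlockedCompareExchange32 (Sem, Value, Value - 1) == Value))
      {
        return;
      }

      CpuPause ();
    }
  }

  //
  // Illustrative release: atomically increment the semaphore so one
  // waiter can proceed.
  //
  VOID
  ExampleReleaseSemaphore (
    IN OUT volatile UINT32  *Sem
    )
  {
    InterlockedIncrement (Sem);
  }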
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c | 72
> ++++++++++++++++++++++++++++-------
> 1 file changed, 58 insertions(+), 14 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index b279f5dfcc..54542262a2 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -120,10 +120,11 @@ LockdownSemaphore (
>
> return Value;
> }
>
> /**
> + Used for the BSP to wait for all APs.
> Wait all APs to performs an atomic compare exchange operation to
> release semaphore.
>
> @param NumberOfAPs AP number
>
> **/
> @@ -139,10 +140,11 @@ WaitForAllAPs (
> WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> }
> }
>
> /**
> + Used for BSP to release all APs.
> Performs an atomic compare exchange operation to release semaphore
> for each AP.
>
> **/
> VOID
> @@ -157,10 +159,52 @@ ReleaseAllAPs (
> ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
> }
> }
> }
>
> +/**
> + Used for BSP to release one AP.
> +
> + @param ApSem IN: 32-bit unsigned integer
> + OUT: original integer + 1
> +**/
> +VOID
> +ReleaseOneAp (
> + IN OUT volatile UINT32 *ApSem
> + )
> +{
> + ReleaseSemaphore (ApSem);
> +}
> +
> +/**
> + Used for the AP to wait for the BSP.
> +
> + @param ApSem IN: 32-bit unsigned integer
> + OUT: original integer - 1
> +**/
> +VOID
> +WaitForBsp (
> + IN OUT volatile UINT32 *ApSem
> + )
> +{
> + WaitForSemaphore (ApSem);
> +}
> +
> +/**
> + Used for AP to release BSP.
> +
> + @param BspSem IN: 32-bit unsigned integer
> + OUT: original integer + 1
> +**/
> +VOID
> +ReleaseBsp (
> + IN OUT volatile UINT32 *BspSem
> + )
> +{
> + ReleaseSemaphore (BspSem);
> +}
> +
> /**
> Check whether the index of CPU perform the package level register
> programming during System Management Mode initialization.
>
> The index of Processor specified by
> mPackageFirstThreadIndex[PackageIndex]
> @@ -632,11 +676,11 @@ BSPHandler (
> // Signal all APs it's time for backup MTRRs
> //
> ReleaseAllAPs ();
>
> //
> - // WaitForSemaphore() may wait for ever if an AP happens to enter
> SMM at
> + // WaitForAllAPs() may wait for ever if an AP happens to enter SMM
> at
> // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops
> has been set
> // to a large enough value to avoid this situation.
> // Note: For HT capable CPUs, threads within a core share the same
> set of MTRRs.
> // We do the backup first and then set MTRR to avoid race condition
> for threads
> // in the same core.
> @@ -652,11 +696,11 @@ BSPHandler (
> // Let all processors program SMM MTRRs together
> //
> ReleaseAllAPs ();
>
> //
> - // WaitForSemaphore() may wait for ever if an AP happens to enter
> SMM at
> + // WaitForAllAPs() may wait for ever if an AP happens to enter SMM
> at
> // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops
> has been set
> // to a large enough value to avoid this situation.
> //
> ReplaceOSMtrrs (CpuIndex);
>
> @@ -898,50 +942,50 @@ APHandler (
>
> if ((SyncMode == SmmCpuSyncModeTradition) ||
> SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Notify BSP of arrival at this point
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> }
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Wait for the signal from BSP to backup MTRRs
> //
> - WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> //
> // Backup OS MTRRs
> //
> MtrrGetAllMtrrs (&Mtrrs);
>
> //
> // Signal BSP the completion of this AP
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
>
> //
> // Wait for BSP's signal to program MTRRs
> //
> - WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> //
> // Replace OS MTRRs with SMI MTRRs
> //
> ReplaceOSMtrrs (CpuIndex);
>
> //
> // Signal BSP the completion of this AP
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> }
>
> while (TRUE) {
> //
> // Wait for something to happen
> //
> - WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> //
> // Check if BSP wants to exit SMM
> //
> if (!(*mSmmMpSyncData->InsideSmm)) {
> @@ -977,16 +1021,16 @@ APHandler (
>
> if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
> //
> // Notify BSP the readiness of this AP to program MTRRs
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
>
> //
> // Wait for the signal from BSP to program MTRRs
> //
> - WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> //
> // Restore OS MTRRs
> //
> SmmCpuFeaturesReenableSmrr ();
> @@ -994,26 +1038,26 @@ APHandler (
> }
>
> //
> // Notify BSP the readiness of this AP to Reset states/semaphore for this
> processor
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
>
> //
> // Wait for the signal from BSP to Reset states/semaphore for this
> processor
> //
> - WaitForSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> //
> // Reset states/semaphore for this processor
> //
> *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
>
> //
> // Notify BSP the readiness of this AP to exit SMM
> //
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> + ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run);
> }
>
> /**
> Checks whether the input token is the current used token.
>
> @@ -1277,11 +1321,11 @@ InternalSmmStartupThisAp (
> mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus;
> if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) {
> *mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY;
> }
>
> - ReleaseSemaphore (mSmmMpSyncData->CpuData[CpuIndex].Run);
> + ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run);
>
> if (Token == NULL) {
> AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
> ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy);
> }
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance
2023-12-18 9:23 ` Ni, Ray
@ 2023-12-19 5:44 ` Wu, Jiaxin
2023-12-19 6:21 ` Ni, Ray
0 siblings, 1 reply; 19+ messages in thread
From: Wu, Jiaxin @ 2023-12-19 5:44 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
Thanks Ray.
>
> 1. It's not "LockedCpuCount". It's "ArrivedCpuCountUponLock".
> Comments can be:
> Before the door is locked, CpuCount stores the arrived CPU count.
> After the door is locked, CpuCount is set to -1 indicating the door is locked.
> ArrivedCpuCountUponLock stores the arrived CPU count then.
>
>
Ok, I will update.
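A sketch of how the agreed-upon comment could read on the context fields. The field names follow the discussion above; the surrounding structure name, types, and layout are illustrative, not the final patch:

  #include <Base.h>

  typedef struct {
    //
    // Before the door is locked, CpuCount stores the arrived CPU count.
    // After the door is locked, CpuCount is set to MAX_UINT32 (-1) to
    // indicate that the door is locked; ArrivedCpuCountUponLock then
    // stores the arrived CPU count captured at the moment of locking.
    //
    volatile UINT32    CpuCount;
    UINT32             ArrivedCpuCountUponLock;
  } EXAMPLE_SMM_CPU_SYNC_HEADER;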
> > +/**
> > + Performs an atomic compare exchange operation to get semaphore.
> > + The compare exchange operation must be performed using MP safe
> > + mechanisms.
> > +
> > + @param[in,out] Sem IN: 32-bit unsigned integer
> > + OUT: original integer - 1 if Sem is not locked.
> > + OUT: original integer if Sem is locked
> > (MAX_UINT32).
> > +
> > + @retval Original integer - 1 if Sem is not locked.
> > + Original integer if Sem is locked (MAX_UINT32).
>
> 2. Can just say "MAX_UINT32 if Sem is locked".
>
>
Agree.
> > +
> > + //
> > + // Assign CPU Semaphore pointer
> > + //
> > + CpuSem = (*Context)->CpuSem;
> > + for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
> > + CpuSem->Run = (SMM_CPU_SYNC_SEMAPHORE *)SemAddr;
> > + *CpuSem->Run = 0;
> > +
> > + CpuSem++;
> > + SemAddr += OneSemSize;
>
> 5. SafeIntLib was used earlier to make sure no integer overflow.
> But "SemAddr += OneSemSize" is simply ignoring the danger of integer
> overflow.
> I agree (NumberOfCpus + 1) * OneSemSize shouldn't cause integer overflow
> when code runs to here.
> But initial value of SemAddr is not zero. It's still possible the SemAddr +
> (NumberOfCpus+1)*OneSemSize causes integer overflow.
> I am ok if you don't fix it as I don't believe the integer overflow could happen
> in 5 years.
SemAddr is the address of the allocated buffer (SemBuffer); "SemAddr += OneSemSize" only makes SemAddr point into that allocated buffer. Is integer overflow possible there? If it were, the page allocation itself should already have failed.
Thanks,
Jiaxin
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance
2023-12-19 5:44 ` Wu, Jiaxin
@ 2023-12-19 6:21 ` Ni, Ray
0 siblings, 0 replies; 19+ messages in thread
From: Ni, Ray @ 2023-12-19 6:21 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Laszlo Ersek, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
>
> SemAddr is the address of the allocated buffer (SemBuffer); "SemAddr +=
> OneSemSize" only makes SemAddr point into that allocated buffer. Is
> integer overflow possible there? If it were, the page allocation itself
> should already have failed.
Good reason! I agree.
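To make the conclusion concrete, here is a simplified sketch of the allocation pattern under discussion: the total semaphore-buffer size is computed with SafeIntLib before allocating, so any overflow is caught up front, and stepping SemAddr by OneSemSize afterwards stays inside the successfully allocated buffer. The function and variable names are illustrative, not the exact SmmCpuSyncLib code:

  #include <Uefi/UefiBaseType.h>
  #include <Library/SafeIntLib.h>
  #include <Library/MemoryAllocationLib.h>

  RETURN_STATUS
  ExampleAllocateSemaphores (
    IN  UINTN  NumberOfCpus,
    IN  UINTN  OneSemSize,      // size reserved for one semaphore
    OUT VOID   **SemBufferOut
    )
  {
    RETURN_STATUS  Status;
    UINTN          Count;
    UINTN          TotalSize;
    VOID           *SemBuffer;
    UINTN          SemAddr;
    UINTN          CpuIndex;

    //
    // Overflow-checked size for (NumberOfCpus + 1) semaphores, the count
    // referenced in the discussion above.
    //
    Status = SafeUintnAdd (NumberOfCpus, 1, &Count);
    if (!RETURN_ERROR (Status)) {
      Status = SafeUintnMult (Count, OneSemSize, &TotalSize);
    }

    if (RETURN_ERROR (Status)) {
      return RETURN_BUFFER_TOO_SMALL;
    }

    SemBuffer = AllocatePages (EFI_SIZE_TO_PAGES (TotalSize));
    if (SemBuffer == NULL) {
      return RETURN_OUT_OF_RESOURCES;
    }

    //
    // SemAddr only ever walks within [SemBuffer, SemBuffer + TotalSize),
    // which the allocator has already handed out, so the additions below
    // cannot wrap around the address space.
    //
    SemAddr = (UINTN)SemBuffer;
    for (CpuIndex = 0; CpuIndex < NumberOfCpus; CpuIndex++) {
      *(volatile UINT32 *)SemAddr = 0;   // e.g. the per-CPU "Run" semaphore
      SemAddr += OneSemSize;
    }

    *SemBufferOut = SemBuffer;
    return RETURN_SUCCESS;
  }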
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2023-12-19 6:21 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-15 9:55 [edk2-devel] [PATCH v4 0/8] Refine SMM CPU Sync flow and abstract SmmCpuSyncLib Wu, Jiaxin
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 1/8] UefiCpuPkg/PiSmmCpuDxeSmm: Optimize Semaphore Sync between BSP and AP Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 2/8] UefiCpuPkg: Adds SmmCpuSyncLib library class Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 3/8] MdePkg/MdeLibs.dsc.inc: Add SafeIntLib instance Wu, Jiaxin
2023-12-15 16:30 ` Michael D Kinney
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 4/8] UefiCpuPkg: Implements SmmCpuSyncLib library instance Wu, Jiaxin
2023-12-18 9:23 ` Ni, Ray
2023-12-19 5:44 ` Wu, Jiaxin
2023-12-19 6:21 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 5/8] OvmfPkg: Specifies SmmCpuSyncLib instance Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 6/8] UefiPayloadPkg: " Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 7/8] UefiCpuPkg/PiSmmCpuDxeSmm: Simplify RunningApCount decrement Wu, Jiaxin
2023-12-19 3:55 ` Ni, Ray
2023-12-15 9:55 ` [edk2-devel] [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib Wu, Jiaxin
2023-12-19 3:54 ` Ni, Ray