public inbox for devel@edk2.groups.io
From: "Wu, Jiaxin" <jiaxin.wu@intel.com>
To: devel@edk2.groups.io
Cc: Ray Ni <ray.ni@intel.com>, Zeng Star <star.zeng@intel.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Rahul Kumar <rahul1.kumar@intel.com>
Subject: [edk2-devel] [PATCH v1 02/13] UefiCpuPkg/SmmRelocationLib: Add SmmRelocationLib library instance
Date: Wed, 10 Apr 2024 21:57:13 +0800	[thread overview]
Message-ID: <20240410135724.15344-3-jiaxin.wu@intel.com> (raw)
In-Reply-To: <20240410135724.15344-1-jiaxin.wu@intel.com>

This patch separates the SMBASE relocation logic from the
PiSmmCpuDxeSmm driver and moves it into the SmmRelocationInit
interface.

Platforms that require SMM support shall consume this interface
to perform the SMBASE relocation.
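
As an illustrative sketch only (not part of this patch), a platform
PEIM that needs SMM support might consume the interface roughly as
follows; the entry point name and the PPI lookup are assumptions:

  #include <PiPei.h>
  #include <Library/DebugLib.h>
  #include <Library/PeiServicesLib.h>
  #include <Library/SmmRelocationLib.h>
  #include <Ppi/MpServices2.h>

  EFI_STATUS
  EFIAPI
  PlatformSmmRelocationPeimEntry (
    IN       EFI_PEI_FILE_HANDLE  FileHandle,
    IN CONST EFI_PEI_SERVICES     **PeiServices
    )
  {
    EFI_STATUS                  Status;
    EDKII_PEI_MP_SERVICES2_PPI  *MpServices2;

    //
    // Locate the MP Services2 PPI and hand it to SmmRelocationInit, which
    // relocates the SMBASE of each processor and produces the SMM_BASE_HOB
    // consumed later by the PiSmmCpuDxeSmm driver.
    //
    Status = PeiServicesLocatePpi (
               &gEdkiiPeiMpServices2PpiGuid,
               0,
               NULL,
               (VOID **)&MpServices2
               );
    ASSERT_EFI_ERROR (Status);

    return SmmRelocationInit (MpServices2);
  }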

Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
 .../Library/SmmRelocationLib/Ia32/Semaphore.c      |  43 ++
 .../Library/SmmRelocationLib/Ia32/SmmInit.nasm     | 157 +++++
 .../SmmRelocationLib/InternalSmmRelocationLib.h    | 141 +++++
 .../Library/SmmRelocationLib/SmmRelocationLib.c    | 659 +++++++++++++++++++++
 .../Library/SmmRelocationLib/SmmRelocationLib.inf  |  61 ++
 .../SmmRelocationLib/SmramSaveStateConfig.c        |  91 +++
 .../Library/SmmRelocationLib/X64/Semaphore.c       |  70 +++
 .../Library/SmmRelocationLib/X64/SmmInit.nasm      | 207 +++++++
 8 files changed, 1429 insertions(+)
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
 create mode 100644 UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm
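
For illustration only (not part of this patch), a later-phase consumer of
the produced gSmmBaseHobGuid HOB might walk it roughly as below; the field
names come from Guid/SmmBaseHob.h, while the function name and the debug
output are assumptions:

  #include <PiPei.h>
  #include <Library/HobLib.h>
  #include <Library/DebugLib.h>
  #include <Guid/SmmBaseHob.h>

  VOID
  DumpSmmBaseHob (
    VOID
    )
  {
    EFI_HOB_GUID_TYPE  *GuidHob;
    SMM_BASE_HOB_DATA  *SmmBaseHobData;
    UINTN              Index;

    //
    // Each HOB instance covers NumberOfProcessors CPUs starting at
    // ProcessorIndex; the SMI entry point of a CPU is at SmBase + 0x8000.
    //
    GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
    while (GuidHob != NULL) {
      SmmBaseHobData = GET_GUID_HOB_DATA (GuidHob);
      for (Index = 0; Index < SmmBaseHobData->NumberOfProcessors; Index++) {
        DEBUG ((
          DEBUG_INFO,
          "CPU[%d] SMBASE: 0x%lx\n",
          (UINT32)(SmmBaseHobData->ProcessorIndex + Index),
          SmmBaseHobData->SmBase[Index]
          ));
      }

      GuidHob = GetNextGuidHob (&gSmmBaseHobGuid, GET_NEXT_HOB (GuidHob));
    }
  }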

diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
new file mode 100644
index 0000000000..ace3221cfc
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/Semaphore.c
@@ -0,0 +1,43 @@
+/** @file
+  Semaphore mechanism to indicate to the BSP that an AP has exited SMM
+  after SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "InternalSmmRelocationLib.h"
+
+UINTN             mSmmRelocationOriginalAddress;
+volatile BOOLEAN  *mRebasedFlag;
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index.
+  @param[in] RebasedFlag  A pointer to a flag that is set to TRUE
+                          immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN             CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  )
+{
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+
+  mRebasedFlag = RebasedFlag;
+
+  CpuState = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+
+  mSmmRelocationOriginalAddress = (UINTN)HookReturnFromSmm (
+                                           CpuIndex,
+                                           CpuState,
+                                           (UINT64)(UINTN)&SmmRelocationSemaphoreComplete,
+                                           (UINT64)(UINTN)&SmmRelocationSemaphoreComplete
+                                           );
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
new file mode 100644
index 0000000000..cb8b030693
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/Ia32/SmmInit.nasm
@@ -0,0 +1,157 @@
+;------------------------------------------------------------------------------ ;
+; Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+; SPDX-License-Identifier: BSD-2-Clause-Patent
+;
+; Module Name:
+;
+;   SmmInit.nasm
+;
+; Abstract:
+;
+;   Functions for relocating the SMBASEs for all processors
+;
+;-------------------------------------------------------------------------------
+
+%include "StuffRsbNasm.inc"
+
+global  ASM_PFX(gcSmiIdtr)
+global  ASM_PFX(gcSmiGdtr)
+
+extern ASM_PFX(SmmInitHandler)
+extern ASM_PFX(mRebasedFlag)
+extern ASM_PFX(mSmmRelocationOriginalAddress)
+
+global ASM_PFX(gPatchSmmCr3)
+global ASM_PFX(gPatchSmmCr4)
+global ASM_PFX(gPatchSmmCr0)
+global ASM_PFX(gPatchSmmInitStack)
+global ASM_PFX(gcSmmInitSize)
+global ASM_PFX(gcSmmInitTemplate)
+
+%define PROTECT_MODE_CS 0x8
+%define PROTECT_MODE_DS 0x20
+
+    SECTION .data
+
+NullSeg: DQ 0                   ; reserved by architecture
+CodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeCodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeSsSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+DataSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+CodeSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x9b
+            DB      0x8f
+            DB      0
+DataSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x93
+            DB      0x8f
+            DB      0
+CodeSeg64:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xaf                ; LimitHigh
+            DB      0                   ; BaseHigh
+GDT_SIZE equ $ - NullSeg
+
+ASM_PFX(gcSmiGdtr):
+    DW      GDT_SIZE - 1
+    DD      NullSeg
+
+ASM_PFX(gcSmiIdtr):
+    DW      0
+    DD      0
+
+
+    SECTION .text
+
+global ASM_PFX(SmmStartup)
+
+BITS 16
+ASM_PFX(SmmStartup):
+    ;mov     eax, 0x80000001             ; read capability
+    ;cpuid
+    ;mov     ebx, edx                    ; rdmsr will change edx. keep it in ebx.
+    ;and     ebx, BIT20                  ; extract NX capability bit
+    ;shr     ebx, 9                      ; shift bit to IA32_EFER.NXE[BIT11] position
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr3):
+    mov     cr3, eax
+o32 lgdt    [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))]
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr4):
+    mov     cr4, eax
+    ;mov     ecx, 0xc0000080             ; IA32_EFER MSR
+    ;rdmsr
+    ;or      eax, ebx                    ; set NXE bit if NX is available
+    ;wrmsr
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr0):
+    mov     di, PROTECT_MODE_DS
+    mov     cr0, eax
+    jmp     PROTECT_MODE_CS : dword @32bit
+
+BITS 32
+@32bit:
+    mov     ds, edi
+    mov     es, edi
+    mov     fs, edi
+    mov     gs, edi
+    mov     ss, edi
+    mov     esp, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmInitStack):
+    call    ASM_PFX(SmmInitHandler)
+    StuffRsb32
+    rsm
+
+BITS 16
+ASM_PFX(gcSmmInitTemplate):
+    mov ebp, ASM_PFX(SmmStartup)
+    sub ebp, 0x30000
+    jmp ebp
+
+ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate)
+
+BITS 32
+global ASM_PFX(SmmRelocationSemaphoreComplete)
+ASM_PFX(SmmRelocationSemaphoreComplete):
+    push    eax
+    mov     eax, [ASM_PFX(mRebasedFlag)]
+    mov     byte [eax], 1
+    pop     eax
+    jmp     [ASM_PFX(mSmmRelocationOriginalAddress)]
+
+global ASM_PFX(SmmInitFixupAddress)
+ASM_PFX(SmmInitFixupAddress):
+    ret
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
new file mode 100644
index 0000000000..c8647fbfe7
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/InternalSmmRelocationLib.h
@@ -0,0 +1,141 @@
+/** @file
+  SMM Relocation Lib for each processor.
+
+  This Lib produces the SMM_BASE_HOB in HOB database which tells
+  the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+  SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+  SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+  Index.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#ifndef INTERNAL_SMM_RELOCATION_LIB_H_
+#define INTERNAL_SMM_RELOCATION_LIB_H_
+
+#include <PiPei.h>
+#include <Library/BaseLib.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/CpuExceptionHandlerLib.h>
+#include <Library/DebugLib.h>
+#include <Library/HobLib.h>
+#include <Library/LocalApicLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Library/PcdLib.h>
+#include <Library/PeimEntryPoint.h>
+#include <Library/PeiServicesLib.h>
+#include <Library/SmmRelocationLib.h>
+#include <Guid/SmramMemoryReserve.h>
+#include <Guid/SmmBaseHob.h>
+#include <Register/Intel/Cpuid.h>
+#include <Register/Intel/SmramSaveStateMap.h>
+#include <Protocol/MmCpu.h>
+
+extern UINT64  *mSmBaseForAllCpus;
+extern UINT8   mSmmSaveStateRegisterLma;
+
+extern IA32_DESCRIPTOR  gcSmiGdtr;
+extern IA32_DESCRIPTOR  gcSmiIdtr;
+extern CONST UINT16     gcSmmInitSize;
+extern CONST UINT8      gcSmmInitTemplate[];
+
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr0;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr3;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmCr4;
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmInitStack;
+
+//
+// The reserved size 0x20 must be larger than the size of
+// the SmmInit template code. Currently, the SmmInit
+// template code requires a buffer of at least
+// 0x16 bytes.
+//
+#define BACK_BUF_SIZE  0x20
+
+#define CR4_CET_ENABLE  BIT23
+
+//
+// EFER register LMA bit
+//
+#define LMA  BIT10
+
+/**
+  This function configures the SmBase on the currently executing CPU.
+
+  @param[in]     CpuIndex             The index of the CPU.
+  @param[in,out] CpuState             Pointer to SMRAM Save State Map for the
+                                      currently executing CPU. On out, SmBase is
+                                      updated to the new value.
+
+**/
+VOID
+EFIAPI
+ConfigureSmBase (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState
+  );
+
+/**
+  Semaphore operation performed by each processor after its SMBASE is relocated.
+**/
+VOID
+EFIAPI
+SmmRelocationSemaphoreComplete (
+  VOID
+  );
+
+/**
+  Hook the code executed immediately after an RSM instruction on the currently
+  executing CPU.  The mode of code executed immediately after RSM must be
+  detected, and the appropriate hook must be selected.  Always clear the auto
+  HALT restart flag if it is set.
+
+  @param[in]     CpuIndex                 The processor index for the currently
+                                          executing CPU.
+  @param[in,out] CpuState                 Pointer to SMRAM Save State Map for the
+                                          currently executing CPU.
+  @param[in]     NewInstructionPointer32  Instruction pointer to use if resuming to
+                                          32-bit mode from 64-bit SMM.
+  @param[in]     NewInstructionPointer    Instruction pointer to use if resuming to
+                                          same mode as SMM.
+
+  @retval The value of the original instruction pointer before it was hooked.
+
+**/
+UINT64
+EFIAPI
+HookReturnFromSmm (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState,
+  IN     UINT64                NewInstructionPointer32,
+  IN     UINT64                NewInstructionPointer
+  );
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index.
+  @param[in] RebasedFlag  A pointer to a flag that is set to TRUE
+                          immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN             CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  );
+
+/**
+  This function fixes up the addresses of the global variables and functions
+  referenced in the SmmInit assembly files to be absolute addresses.
+**/
+VOID
+EFIAPI
+SmmInitFixupAddress (
+  );
+
+#endif
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
new file mode 100644
index 0000000000..38bad24e7a
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.c
@@ -0,0 +1,659 @@
+/** @file
+  SMM Relocation Lib for each processor.
+
+  This Lib produces the SMM_BASE_HOB in HOB database which tells
+  the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+  SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+  SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+  Index.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include "InternalSmmRelocationLib.h"
+
+UINTN   mMaxNumberOfCpus   = 1;
+UINTN   mNumberOfCpus      = 1;
+UINT64  *mSmBaseForAllCpus = NULL;
+
+//
+// The mode of the CPU at the time an SMI occurs
+//
+UINT8  mSmmSaveStateRegisterLma;
+
+//
+// Record all Processors Info
+//
+EFI_PROCESSOR_INFORMATION  *mProcessorInfo = NULL;
+
+//
+// SmBase Rebased or not
+//
+volatile BOOLEAN  *mRebased;
+
+/**
+  C function for the SMI handler. It updates the SMBASE register of the executing processor.
+
+**/
+VOID
+EFIAPI
+SmmInitHandler (
+  VOID
+  )
+{
+  UINT32  ApicId;
+  UINTN   Index;
+
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+
+  //
+  // Update SMM IDT entries' code segment and load IDT
+  //
+  AsmWriteIdtr (&gcSmiIdtr);
+  ApicId = GetApicId ();
+
+  ASSERT (mNumberOfCpus <= mMaxNumberOfCpus);
+
+  for (Index = 0; Index < mNumberOfCpus; Index++) {
+    if (ApicId == (UINT32)mProcessorInfo[Index].ProcessorId) {
+      //
+      // Configure SmBase.
+      //
+      CpuState = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+      ConfigureSmBase (Index, CpuState);
+
+      //
+      // Hook return after RSM to set SMM re-based flag
+      // The SMM re-based flag can't be set before RSM, because the SMM save state context might be
+      // overridden by the next AP's flow before it takes effect.
+      //
+      SemaphoreHook (Index, &mRebased[Index]);
+      return;
+    }
+  }
+
+  ASSERT (FALSE);
+}
+
+/**
+  This routine splits the SmramReserve HOB to reserve SmmRelocationSize of memory for SMM relocation.
+
+  @param[in]       SmmRelocationSize   SmmRelocationSize for all processors.
+  @param[in,out]   SmmRelocationStart  Return the start address of Smm relocated memory in SMRAM.
+
+  @retval EFI_SUCCESS           The gEfiSmmSmramMemoryGuid is split successfully.
+  @retval EFI_DEVICE_ERROR      Failed to build new HOB for gEfiSmmSmramMemoryGuid.
+  @retval EFI_NOT_FOUND         The gEfiSmmSmramMemoryGuid is not found.
+
+**/
+EFI_STATUS
+SplitSmramHobForSmmRelocation (
+  IN     UINT64                SmmRelocationSize,
+  IN OUT EFI_PHYSICAL_ADDRESS  *SmmRelocationStart
+  )
+{
+  EFI_HOB_GUID_TYPE               *GuidHob;
+  EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *DescriptorBlock;
+  EFI_SMRAM_HOB_DESCRIPTOR_BLOCK  *NewDescriptorBlock;
+  UINTN                           BufferSize;
+  UINTN                           SmramRanges;
+
+  NewDescriptorBlock = NULL;
+
+  //
+  // Retrieve the GUID HOB data that contains the set of SMRAM descriptors
+  //
+  GuidHob = GetFirstGuidHob (&gEfiSmmSmramMemoryGuid);
+  if (GuidHob == NULL) {
+    return EFI_NOT_FOUND;
+  }
+
+  DescriptorBlock = (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)GET_GUID_HOB_DATA (GuidHob);
+
+  //
+  // Allocate one extra EFI_SMRAM_DESCRIPTOR to describe SMRAM memory that contains a pointer
+  // to the Smm relocated memory.
+  //
+  SmramRanges = DescriptorBlock->NumberOfSmmReservedRegions;
+  BufferSize  = sizeof (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK) + (SmramRanges * sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  NewDescriptorBlock = (EFI_SMRAM_HOB_DESCRIPTOR_BLOCK *)BuildGuidHob (
+                                                           &gEfiSmmSmramMemoryGuid,
+                                                           BufferSize
+                                                           );
+  ASSERT (NewDescriptorBlock != NULL);
+  if (NewDescriptorBlock == NULL) {
+    return EFI_DEVICE_ERROR;
+  }
+
+  //
+  // Copy old EFI_SMRAM_HOB_DESCRIPTOR_BLOCK to new allocated region
+  //
+  CopyMem ((VOID *)NewDescriptorBlock, DescriptorBlock, BufferSize - sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  //
+  // Increase the number of SMRAM descriptors by 1 to make room for the ALLOCATED descriptor of size EFI_PAGE_SIZE
+  //
+  NewDescriptorBlock->NumberOfSmmReservedRegions = (UINT32)(SmramRanges + 1);
+
+  ASSERT (SmramRanges >= 1);
+  //
+  // Copy last entry to the end - we assume TSEG is last entry.
+  //
+  CopyMem (&NewDescriptorBlock->Descriptor[SmramRanges], &NewDescriptorBlock->Descriptor[SmramRanges - 1], sizeof (EFI_SMRAM_DESCRIPTOR));
+
+  //
+  // Update the entry in the array with a size of SmmRelocationSize and put into the ALLOCATED state
+  //
+  NewDescriptorBlock->Descriptor[SmramRanges - 1].PhysicalSize = SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges - 1].RegionState |= EFI_ALLOCATED;
+
+  //
+  // Return the start address of Smm relocated memory in SMRAM.
+  //
+  if (SmmRelocationStart != NULL) {
+    *SmmRelocationStart = NewDescriptorBlock->Descriptor[SmramRanges - 1].CpuStart;
+  }
+
+  //
+  // Reduce the size of the last SMRAM descriptor by SmmRelocationSize
+  //
+  NewDescriptorBlock->Descriptor[SmramRanges].PhysicalStart += SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges].CpuStart      += SmmRelocationSize;
+  NewDescriptorBlock->Descriptor[SmramRanges].PhysicalSize  -= SmmRelocationSize;
+
+  //
+  // Last step, we can scrub old one
+  //
+  ZeroMem (&GuidHob->Name, sizeof (GuidHob->Name));
+
+  return EFI_SUCCESS;
+}
+
+/**
+  This function creates the gSmmBaseHobGuid HOB(s) recording the SmBase for all CPUs.
+
+  @param[in] SmBaseForAllCpus    Pointer to SmBase for all CPUs.
+
+  @retval EFI_SUCCESS           Create SmBase for all CPUs successfully.
+  @retval Others                Failed to create SmBase for all CPUs.
+
+**/
+EFI_STATUS
+CreateSmmBaseHob (
+  IN UINT64  *SmBaseForAllCpus
+  )
+{
+  UINTN              Index;
+  SMM_BASE_HOB_DATA  *SmmBaseHobData;
+  UINT32             CpuCount;
+  UINT32             NumberOfProcessorsInHob;
+  UINT32             MaxCapOfProcessorsInHob;
+  UINT32             HobCount;
+
+  SmmBaseHobData          = NULL;
+  CpuCount                = 0;
+  NumberOfProcessorsInHob = 0;
+  MaxCapOfProcessorsInHob = 0;
+  HobCount                = 0;
+
+  //
+  // Calculate the maximum number of CPUs one HOB instance can hold (MaxCapOfProcessorsInHob), since the max HobLength is 0xFFF8.
+  //
+  MaxCapOfProcessorsInHob = (0xFFF8 - sizeof (EFI_HOB_GUID_TYPE) - sizeof (SMM_BASE_HOB_DATA)) / sizeof (UINT64) + 1;
+  DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - MaxCapOfProcessorsInHob: %03x\n", MaxCapOfProcessorsInHob));
+
+  //
+  // Create Guided SMM Base HOB Instances.
+  //
+  while (CpuCount != mMaxNumberOfCpus) {
+    NumberOfProcessorsInHob = MIN ((UINT32)mMaxNumberOfCpus - CpuCount, MaxCapOfProcessorsInHob);
+
+    SmmBaseHobData = BuildGuidHob (
+                       &gSmmBaseHobGuid,
+                       sizeof (SMM_BASE_HOB_DATA) + sizeof (UINT64) * NumberOfProcessorsInHob
+                       );
+    if (SmmBaseHobData == NULL) {
+      return EFI_OUT_OF_RESOURCES;
+    }
+
+    SmmBaseHobData->ProcessorIndex     = CpuCount;
+    SmmBaseHobData->NumberOfProcessors = NumberOfProcessorsInHob;
+
+    DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->ProcessorIndex: %03x\n", HobCount, SmmBaseHobData->ProcessorIndex));
+    DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->NumberOfProcessors: %03x\n", HobCount, SmmBaseHobData->NumberOfProcessors));
+    for (Index = 0; Index < SmmBaseHobData->NumberOfProcessors; Index++) {
+      //
+      // Calculate the new SMBASE address
+      //
+      SmmBaseHobData->SmBase[Index] = SmBaseForAllCpus[Index + CpuCount];
+      DEBUG ((DEBUG_INFO, "CreateSmmBaseHob - SmmBaseHobData[%d]->SmBase[%03x]: %08x\n", HobCount, Index, SmmBaseHobData->SmBase[Index]));
+    }
+
+    CpuCount += NumberOfProcessorsInHob;
+    HobCount++;
+    SmmBaseHobData = NULL;
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Relocate SmmBases for each processor.
+  Executed on first boot and on all S3 resumes.
+
+**/
+VOID
+SmmRelocateBases (
+  VOID
+  )
+{
+  UINT8                 BakBuf[BACK_BUF_SIZE];
+  SMRAM_SAVE_STATE_MAP  BakBuf2;
+  SMRAM_SAVE_STATE_MAP  *CpuStatePtr;
+  UINT8                 *U8Ptr;
+  UINTN                 Index;
+  UINTN                 BspIndex;
+  UINT32                BspApicId;
+
+  //
+  // Make sure the reserved size is large enough for procedure SmmInitTemplate.
+  //
+  ASSERT (sizeof (BakBuf) >= gcSmmInitSize);
+
+  //
+  // Patch ASM code template with current CR0, CR3, and CR4 values
+  //
+  PatchInstructionX86 (gPatchSmmCr0, AsmReadCr0 (), 4);
+  PatchInstructionX86 (gPatchSmmCr3, AsmReadCr3 (), 4);
+  PatchInstructionX86 (gPatchSmmCr4, AsmReadCr4 () & (~CR4_CET_ENABLE), 4);
+
+  U8Ptr       = (UINT8 *)(UINTN)(SMM_DEFAULT_SMBASE + SMM_HANDLER_OFFSET);
+  CpuStatePtr = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+
+  //
+  // Backup original contents at address 0x38000
+  //
+  CopyMem (BakBuf, U8Ptr, sizeof (BakBuf));
+  CopyMem (&BakBuf2, CpuStatePtr, sizeof (BakBuf2));
+
+  //
+  // Load image for relocation
+  //
+  CopyMem (U8Ptr, gcSmmInitTemplate, gcSmmInitSize);
+
+  //
+  // Retrieve the local APIC ID of current processor
+  //
+  BspApicId = GetApicId ();
+
+  //
+  // Relocate SM bases for all APs
+  // This is APs' 1st SMI - rebase will be done here, and APs' default SMI handler will be overridden by gcSmmInitTemplate
+  //
+  BspIndex = (UINTN)-1;
+  for (Index = 0; Index < mNumberOfCpus; Index++) {
+    mRebased[Index] = FALSE;
+    if (BspApicId != (UINT32)mProcessorInfo[Index].ProcessorId) {
+      SendSmiIpi ((UINT32)mProcessorInfo[Index].ProcessorId);
+      //
+      // Wait for this AP to finish its 1st SMI
+      //
+      while (!mRebased[Index]) {
+      }
+    } else {
+      //
+      // BSP will be Relocated later
+      //
+      BspIndex = Index;
+    }
+  }
+
+  //
+  // Relocate BSP's SMM base
+  //
+  ASSERT (BspIndex != (UINTN)-1);
+  SendSmiIpi (BspApicId);
+
+  //
+  // Wait for the BSP to finish its 1st SMI
+  //
+  while (!mRebased[BspIndex]) {
+  }
+
+  //
+  // Restore contents at address 0x38000
+  //
+  CopyMem (CpuStatePtr, &BakBuf2, sizeof (BakBuf2));
+  CopyMem (U8Ptr, BakBuf, sizeof (BakBuf));
+}
+
+/**
+  This function will initialize SmBase for all CPUs.
+
+  @param[in,out] SmBaseForAllCpus    Pointer to SmBase for all CPUs.
+
+  @retval EFI_SUCCESS           Initialize SmBase for all CPUs successfully.
+  @retval Others                Failed to initialize SmBase for all CPUs.
+
+**/
+EFI_STATUS
+InitSmBaseForAllCpus (
+  IN OUT UINT64  **SmBaseForAllCpus
+  )
+{
+  EFI_STATUS            Status;
+  UINTN                 TileSize;
+  UINT64                SmmRelocationSize;
+  EFI_PHYSICAL_ADDRESS  SmmRelocationStart;
+  UINTN                 Index;
+
+  SmmRelocationStart = 0;
+
+  ASSERT (SmBaseForAllCpus != NULL);
+
+  //
+  // Calculate SmmRelocationSize for all of the tiles.
+  //
+  // The CPU save state and code for the SMI entry point are tiled within an SMRAM
+  // allocated buffer. The minimum size of this buffer for a uniprocessor system
+  // is 32 KB, because the entry point is SMBASE + 32KB, and the CPU save state
+  // area is just below SMBASE + 64KB. If more than one CPU is present in the
+  // platform, then the SMI entry point and the CPU save state areas can be tiled
+  // to minimize the total amount of SMRAM required for all the CPUs. The tile size
+  // can be computed by adding the CPU save state size, any extra CPU specific
+  // context, and the size of code that must be placed at the SMI entry point to
+  // transfer control to a C function in the native SMM execution mode. This size
+  // is rounded up to the nearest power of 2 to give the tile size for each CPU.
+  // The total amount of memory required is the maximum number of CPUs that the
+  // platform supports times the tile size.
+  //
+  TileSize          = SIZE_8KB;
+  SmmRelocationSize = EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (SIZE_32KB + TileSize * (mMaxNumberOfCpus - 1)));
+
+  //
+  // Split SmramReserve HOB to reserve SmmRelocationSize for Smm relocated memory
+  //
+  Status = SplitSmramHobForSmmRelocation (
+             SmmRelocationSize,
+             &SmmRelocationStart
+             );
+  if (EFI_ERROR (Status)) {
+    return Status;
+  }
+
+  ASSERT (SmmRelocationStart != 0);
+  DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationSize: 0x%08x\n", SmmRelocationSize));
+  DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmmRelocationStart: 0x%08x\n", SmmRelocationStart));
+
+  //
+  // Init SmBaseForAllCpus
+  //
+  *SmBaseForAllCpus = (UINT64 *)AllocatePages (EFI_SIZE_TO_PAGES (sizeof (UINT64) * mMaxNumberOfCpus));
+  if (*SmBaseForAllCpus == NULL) {
+    return EFI_OUT_OF_RESOURCES;
+  }
+
+  for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
+    //
+    // Return each SmBase in SmBaseForAllCpus
+    //
+    (*SmBaseForAllCpus)[Index] = (UINTN)SmmRelocationStart + Index * TileSize - SMM_HANDLER_OFFSET;
+    DEBUG ((DEBUG_INFO, "InitSmBaseForAllCpus - SmBase For CPU[%03x]: %08x\n", Index, (*SmBaseForAllCpus)[Index]));
+  }
+
+  return EFI_SUCCESS;
+}
+
+/**
+  Initialize IDT to setup exception handlers in SMM.
+
+**/
+VOID
+InitSmmIdt (
+  VOID
+  )
+{
+  EFI_STATUS              Status;
+  BOOLEAN                 InterruptState;
+  IA32_DESCRIPTOR         PeiIdtr;
+  CONST EFI_PEI_SERVICES  **PeiServices;
+
+  //
+  // There are 32 (not 255) entries in it since only
+  // processor-generated exceptions will be handled.
+  //
+  gcSmiIdtr.Limit = (sizeof (IA32_IDT_GATE_DESCRIPTOR) * 32) - 1;
+
+  //
+  // Allocate for IDT.
+  // sizeof (UINTN) is for the PEI Services Table pointer.
+  //
+  gcSmiIdtr.Base = (UINTN)AllocateZeroPool (gcSmiIdtr.Limit + 1 + sizeof (UINTN));
+  ASSERT (gcSmiIdtr.Base != 0);
+  gcSmiIdtr.Base += sizeof (UINTN);
+
+  //
+  // Disable Interrupt, save InterruptState and save PEI IDT table
+  //
+  InterruptState = SaveAndDisableInterrupts ();
+  AsmReadIdtr (&PeiIdtr);
+
+  //
+  // Save the PEI Services Table pointer
+  // The PEI Services Table pointer will be stored in the sizeof (UINTN) bytes
+  // immediately preceding the IDT in memory.
+  //
+  PeiServices                                   = (CONST EFI_PEI_SERVICES **)(*(UINTN *)(PeiIdtr.Base - sizeof (UINTN)));
+  (*(UINTN *)(gcSmiIdtr.Base - sizeof (UINTN))) = (UINTN)PeiServices;
+
+  //
+  // Load SMM temporary IDT table
+  //
+  AsmWriteIdtr (&gcSmiIdtr);
+
+  //
+  // Setup SMM default exception handlers, SMM IDT table
+  // will be updated and saved in gcSmiIdtr
+  //
+  Status = InitializeCpuExceptionHandlers (NULL);
+  ASSERT_EFI_ERROR (Status);
+
+  //
+  // Restore PEI IDT table and CPU InterruptState
+  //
+  AsmWriteIdtr ((IA32_DESCRIPTOR *)&PeiIdtr);
+  SetInterruptState (InterruptState);
+}
+
+/**
+  Determine the mode of the CPU at the time an SMI occurs
+
+  @retval EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT   32 bit.
+  @retval EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT   64 bit.
+
+**/
+UINT8
+CheckSmmCpuMode (
+  VOID
+  )
+{
+  UINT32  RegEax;
+  UINT32  RegEdx;
+  UINTN   FamilyId;
+  UINTN   ModelId;
+  UINT8   SmmSaveStateRegisterLma;
+
+  //
+  // Determine the mode of the CPU at the time an SMI occurs
+  //   Intel(R) 64 and IA-32 Architectures Software Developer's Manual
+  //   Volume 3C, Section 34.4.1.1
+  //
+  AsmCpuid (CPUID_VERSION_INFO, &RegEax, NULL, NULL, NULL);
+  FamilyId = (RegEax >> 8) & 0xf;
+  ModelId  = (RegEax >> 4) & 0xf;
+  if ((FamilyId == 0x06) || (FamilyId == 0x0f)) {
+    ModelId = ModelId | ((RegEax >> 12) & 0xf0);
+  }
+
+  RegEdx = 0;
+  AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
+  if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
+    AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
+  }
+
+  SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT;
+  if ((RegEdx & BIT29) != 0) {
+    SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT;
+  }
+
+  if (FamilyId == 0x06) {
+    if ((ModelId == 0x17) || (ModelId == 0x0f) || (ModelId == 0x1c)) {
+      SmmSaveStateRegisterLma = EFI_MM_SAVE_STATE_REGISTER_LMA_64BIT;
+    }
+  }
+
+  return SmmSaveStateRegisterLma;
+}
+
+/**
+  CPU SmmBase Relocation Init.
+
+  This function relocates the SMBASE for each CPU and produces the SMM Base HOB.
+
+  @param[in] MpServices2        Pointer to this instance of the MpServices.
+
+  @retval EFI_SUCCESS           CPU SmmBase Relocated successfully.
+  @retval Others                CPU SmmBase Relocation failed.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmRelocationInit (
+  IN EDKII_PEI_MP_SERVICES2_PPI  *MpServices2
+  )
+{
+  EFI_STATUS  Status;
+  UINTN       NumberOfEnabledCpus;
+  UINTN       SmmStackSize;
+  UINT8       *SmmStacks;
+  UINTN       Index;
+
+  SmmStacks = NULL;
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit Start \n"));
+  if (MpServices2 == NULL) {
+    return EFI_INVALID_PARAMETER;
+  }
+
+  //
+  // Fix up the address of the global variable or function referred in
+  // SmmInit assembly files to be the absolute address
+  //
+  SmmInitFixupAddress ();
+
+  //
+  // Check the mode of the CPU at the time an SMI occurs
+  //
+  mSmmSaveStateRegisterLma = CheckSmmCpuMode ();
+
+  //
+  // Patch SMI stack for SMM base relocation
+  // Note: There is no need to allocate a stack for every CPU since the
+  // relocation occurs serially, one CPU at a time.
+  //
+  SmmStackSize = EFI_PAGES_TO_SIZE (EFI_SIZE_TO_PAGES (PcdGet32 (PcdCpuSmmStackSize)));
+  SmmStacks    = (UINT8 *)AllocatePages (EFI_SIZE_TO_PAGES (SmmStackSize));
+  if (SmmStacks == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStacks: 0x%x\n", SmmStacks));
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit - SmmStackSize: 0x%x\n", SmmStackSize));
+
+  PatchInstructionX86 (
+    gPatchSmmInitStack,
+    (UINTN)(SmmStacks + SmmStackSize - sizeof (UINTN)),
+    sizeof (UINTN)
+    );
+
+  //
+  // Initialize the SMM IDT for SMM base relocation
+  //
+  InitSmmIdt ();
+
+  //
+  // Get the number of processors
+  //
+  Status = MpServices2->GetNumberOfProcessors (
+                          MpServices2,
+                          &mNumberOfCpus,
+                          &NumberOfEnabledCpus
+                          );
+  if (EFI_ERROR (Status)) {
+    goto ON_EXIT;
+  }
+
+  if (FeaturePcdGet (PcdCpuHotPlugSupport)) {
+    mMaxNumberOfCpus = PcdGet32 (PcdCpuMaxLogicalProcessorNumber);
+  } else {
+    mMaxNumberOfCpus = mNumberOfCpus;
+  }
+
+  //
+  // Retrieve the Processor Info for all CPUs
+  //
+  mProcessorInfo = (EFI_PROCESSOR_INFORMATION *)AllocatePool (sizeof (EFI_PROCESSOR_INFORMATION) * mMaxNumberOfCpus);
+  if (mProcessorInfo == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
+    if (Index < mNumberOfCpus) {
+      Status = MpServices2->GetProcessorInfo (MpServices2, Index | CPU_V2_EXTENDED_TOPOLOGY, &mProcessorInfo[Index]);
+      if (EFI_ERROR (Status)) {
+        goto ON_EXIT;
+      }
+    }
+  }
+
+  //
+  // Initialize the SmBase for all CPUs
+  //
+  Status = InitSmBaseForAllCpus (&mSmBaseForAllCpus);
+  if (EFI_ERROR (Status)) {
+    goto ON_EXIT;
+  }
+
+  //
+  // Relocate SmmBases for each processor.
+  // Allocate mRebased as the flag to indicate the relocation is done for each CPU.
+  //
+  mRebased = (BOOLEAN *)AllocateZeroPool (sizeof (BOOLEAN) * mMaxNumberOfCpus);
+  if (mRebased == NULL) {
+    Status = EFI_OUT_OF_RESOURCES;
+    goto ON_EXIT;
+  }
+
+  SmmRelocateBases ();
+
+  //
+  // Create the SmBase HOB for all CPUs
+  //
+  Status = CreateSmmBaseHob (mSmBaseForAllCpus);
+
+ON_EXIT:
+  if (SmmStacks != NULL) {
+    FreePages (SmmStacks, EFI_SIZE_TO_PAGES (SmmStackSize));
+  }
+
+  if (mSmBaseForAllCpus != NULL) {
+    FreePages (mSmBaseForAllCpus, EFI_SIZE_TO_PAGES (sizeof (UINT64) * mMaxNumberOfCpus));
+  }
+
+  DEBUG ((DEBUG_INFO, "SmmRelocationInit Done\n"));
+  return Status;
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
new file mode 100644
index 0000000000..2ac16ab5d1
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmmRelocationLib.inf
@@ -0,0 +1,61 @@
+## @file
+# SMM Relocation Lib for each processor.
+#
+# This Lib produces the SMM_BASE_HOB in HOB database which tells
+# the PiSmmCpuDxeSmm driver (runs at a later phase) about the new
+# SMBASE for each processor. PiSmmCpuDxeSmm driver installs the
+# SMI handler at the SMM_BASE_HOB.SmBase[Index]+0x8000 for processor
+# Index.
+#
+# Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+# SPDX-License-Identifier: BSD-2-Clause-Patent
+#
+##
+
+[Defines]
+  INF_VERSION                    = 0x00010005
+  BASE_NAME                      = SmmRelocationLib
+  FILE_GUID                      = 853E97B3-790C-4EA3-945C-8F622FC47FE8
+  MODULE_TYPE                    = PEIM
+  VERSION_STRING                 = 1.0
+  LIBRARY_CLASS                  = SmmRelocationLib
+
+[Sources]
+  InternalSmmRelocationLib.h
+  SmramSaveStateConfig.c
+  SmmRelocationLib.c
+
+[Sources.Ia32]
+  Ia32/Semaphore.c
+  Ia32/SmmInit.nasm
+
+[Sources.X64]
+  X64/Semaphore.c
+  X64/SmmInit.nasm
+
+[Packages]
+  MdePkg/MdePkg.dec
+  MdeModulePkg/MdeModulePkg.dec
+  UefiCpuPkg/UefiCpuPkg.dec
+
+[LibraryClasses]
+  BaseLib
+  BaseMemoryLib
+  CpuExceptionHandlerLib
+  DebugLib
+  HobLib
+  LocalApicLib
+  MemoryAllocationLib
+  PcdLib
+  PeiServicesLib
+
+[Guids]
+  gSmmBaseHobGuid                               ## HOB ALWAYS_PRODUCED
+  gEfiSmmSmramMemoryGuid                        ## CONSUMES
+
+[Pcd]
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuMaxLogicalProcessorNumber
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStackSize                     ## CONSUMES
+
+[FeaturePcd]
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuHotPlugSupport                        ## CONSUMES
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c b/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
new file mode 100644
index 0000000000..3982158979
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/SmramSaveStateConfig.c
@@ -0,0 +1,91 @@
+/** @file
+  Config SMRAM Save State for SmmBases Relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+#include "InternalSmmRelocationLib.h"
+
+/**
+  This function configures the SmBase on the currently executing CPU.
+
+  @param[in]     CpuIndex             The index of the CPU.
+  @param[in,out] CpuState             Pointer to SMRAM Save State Map for the
+                                      currently executing CPU. On out, SmBase is
+                                      updated to the new value.
+
+**/
+VOID
+EFIAPI
+ConfigureSmBase (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState
+  )
+{
+  if (mSmmSaveStateRegisterLma == EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT) {
+    CpuState->x86.SMBASE = (UINT32)mSmBaseForAllCpus[CpuIndex];
+  } else {
+    CpuState->x64.SMBASE = (UINT32)mSmBaseForAllCpus[CpuIndex];
+  }
+}
+
+/**
+  Hook the code executed immediately after an RSM instruction on the currently
+  executing CPU.  The mode of code executed immediately after RSM must be
+  detected, and the appropriate hook must be selected.  Always clear the auto
+  HALT restart flag if it is set.
+
+  @param[in]     CpuIndex                 The processor index for the currently
+                                          executing CPU.
+  @param[in,out] CpuState                 Pointer to SMRAM Save State Map for the
+                                          currently executing CPU.
+  @param[in]     NewInstructionPointer32  Instruction pointer to use if resuming to
+                                          32-bit mode from 64-bit SMM.
+  @param[in]     NewInstructionPointer    Instruction pointer to use if resuming to
+                                          same mode as SMM.
+
+  @retval The value of the original instruction pointer before it was hooked.
+
+**/
+UINT64
+EFIAPI
+HookReturnFromSmm (
+  IN     UINTN                 CpuIndex,
+  IN OUT SMRAM_SAVE_STATE_MAP  *CpuState,
+  IN     UINT64                NewInstructionPointer32,
+  IN     UINT64                NewInstructionPointer
+  )
+{
+  UINT64  OriginalInstructionPointer;
+
+  if (mSmmSaveStateRegisterLma == EFI_MM_SAVE_STATE_REGISTER_LMA_32BIT) {
+    OriginalInstructionPointer = (UINT64)CpuState->x86._EIP;
+    CpuState->x86._EIP         = (UINT32)NewInstructionPointer;
+
+    //
+    // Clear the auto HALT restart flag so the RSM instruction returns
+    // program control to the instruction following the HLT instruction.
+    //
+    if ((CpuState->x86.AutoHALTRestart & BIT0) != 0) {
+      CpuState->x86.AutoHALTRestart &= ~BIT0;
+    }
+  } else {
+    OriginalInstructionPointer = CpuState->x64._RIP;
+    if ((CpuState->x64.IA32_EFER & LMA) == 0) {
+      CpuState->x64._RIP = (UINT32)NewInstructionPointer32;
+    } else {
+      CpuState->x64._RIP = (UINT32)NewInstructionPointer;
+    }
+
+    //
+    // Clear the auto HALT restart flag so the RSM instruction returns
+    // program control to the instruction following the HLT instruction.
+    //
+    if ((CpuState->x64.AutoHALTRestart & BIT0) != 0) {
+      CpuState->x64.AutoHALTRestart &= ~BIT0;
+    }
+  }
+
+  return OriginalInstructionPointer;
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c b/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
new file mode 100644
index 0000000000..54d3462ef8
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/Semaphore.c
@@ -0,0 +1,70 @@
+/** @file
+  Semaphore mechanism to indicate to the BSP that an AP has exited SMM
+  after SMBASE relocation.
+
+  Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+
+**/
+
+#include "InternalSmmRelocationLib.h"
+
+X86_ASSEMBLY_PATCH_LABEL  gPatchSmmRelocationOriginalAddressPtr32;
+X86_ASSEMBLY_PATCH_LABEL  gPatchRebasedFlagAddr32;
+
+UINTN             mSmmRelocationOriginalAddress;
+volatile BOOLEAN  *mRebasedFlag;
+
+/**
+AP Semaphore operation in 32-bit mode while BSP runs in 64-bit mode.
+**/
+VOID
+SmmRelocationSemaphoreComplete32 (
+  VOID
+  );
+
+/**
+  Hook return address of SMM Save State so that semaphore code
+  can be executed immediately after AP exits SMM to indicate to
+  the BSP that an AP has exited SMM after SMBASE relocation.
+
+  @param[in] CpuIndex     The processor index.
+  @param[in] RebasedFlag  A pointer to a flag that is set to TRUE
+                          immediately after AP exits SMM.
+
+**/
+VOID
+SemaphoreHook (
+  IN UINTN             CpuIndex,
+  IN volatile BOOLEAN  *RebasedFlag
+  )
+{
+  SMRAM_SAVE_STATE_MAP  *CpuState;
+  UINTN                 TempValue;
+
+  mRebasedFlag = RebasedFlag;
+  PatchInstructionX86 (
+    gPatchRebasedFlagAddr32,
+    (UINT32)(UINTN)mRebasedFlag,
+    4
+    );
+
+  CpuState = (SMRAM_SAVE_STATE_MAP *)(UINTN)(SMM_DEFAULT_SMBASE + SMRAM_SAVE_STATE_MAP_OFFSET);
+
+  mSmmRelocationOriginalAddress = HookReturnFromSmm (
+                                    CpuIndex,
+                                    CpuState,
+                                    (UINT64)(UINTN)&SmmRelocationSemaphoreComplete32,
+                                    (UINT64)(UINTN)&SmmRelocationSemaphoreComplete
+                                    );
+
+  //
+  // Use temp value to fix ICC compiler warning
+  //
+  TempValue = (UINTN)&mSmmRelocationOriginalAddress;
+  PatchInstructionX86 (
+    gPatchSmmRelocationOriginalAddressPtr32,
+    (UINT32)TempValue,
+    4
+    );
+}
diff --git a/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm b/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm
new file mode 100644
index 0000000000..ce4311fffd
--- /dev/null
+++ b/UefiCpuPkg/Library/SmmRelocationLib/X64/SmmInit.nasm
@@ -0,0 +1,207 @@
+;------------------------------------------------------------------------------ ;
+; Copyright (c) 2024, Intel Corporation. All rights reserved.<BR>
+; SPDX-License-Identifier: BSD-2-Clause-Patent
+;
+; Module Name:
+;
+;   SmmInit.nasm
+;
+; Abstract:
+;
+;   Functions for relocating the SMBASEs for all processors
+;
+;-------------------------------------------------------------------------------
+
+%include "StuffRsbNasm.inc"
+
+global  ASM_PFX(gcSmiIdtr)
+global  ASM_PFX(gcSmiGdtr)
+
+extern ASM_PFX(SmmInitHandler)
+extern ASM_PFX(mRebasedFlag)
+extern ASM_PFX(mSmmRelocationOriginalAddress)
+
+global ASM_PFX(gPatchSmmCr3)
+global ASM_PFX(gPatchSmmCr4)
+global ASM_PFX(gPatchSmmCr0)
+global ASM_PFX(gPatchSmmInitStack)
+global ASM_PFX(gcSmmInitSize)
+global ASM_PFX(gcSmmInitTemplate)
+global ASM_PFX(gPatchRebasedFlagAddr32)
+global ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32)
+
+%define LONG_MODE_CS 0x38
+
+    SECTION .data
+
+NullSeg: DQ 0                   ; reserved by architecture
+CodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeCodeSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+ProtModeSsSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+DataSeg32:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x93
+            DB      0xcf                ; LimitHigh
+            DB      0                   ; BaseHigh
+CodeSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x9b
+            DB      0x8f
+            DB      0
+DataSeg16:
+            DW      -1
+            DW      0
+            DB      0
+            DB      0x93
+            DB      0x8f
+            DB      0
+CodeSeg64:
+            DW      -1                  ; LimitLow
+            DW      0                   ; BaseLow
+            DB      0                   ; BaseMid
+            DB      0x9b
+            DB      0xaf                ; LimitHigh
+            DB      0                   ; BaseHigh
+GDT_SIZE equ $ - NullSeg
+
+ASM_PFX(gcSmiGdtr):
+    DW      GDT_SIZE - 1
+    DQ      NullSeg
+
+ASM_PFX(gcSmiIdtr):
+    DW      0
+    DQ      0
+
+
+    DEFAULT REL
+    SECTION .text
+
+global ASM_PFX(SmmStartup)
+
+BITS 16
+ASM_PFX(SmmStartup):
+    ;mov     eax, 0x80000001             ; read capability
+    ;cpuid
+    ;mov     ebx, edx                    ; rdmsr will change edx. keep it in ebx.
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr3):
+    mov     cr3, eax
+o32 lgdt    [cs:ebp + (ASM_PFX(gcSmiGdtr) - ASM_PFX(SmmStartup))]
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr4):
+    or      ah,  2                      ; enable XMM registers access
+    mov     cr4, eax
+    mov     ecx, 0xc0000080             ; IA32_EFER MSR
+    rdmsr
+    or      ah, BIT0                    ; set LME bit
+    ;test    ebx, BIT20                  ; check NXE capability
+    ;jz      .1
+    ;or      ah, BIT3                    ; set NXE bit
+;.1:
+    wrmsr
+    mov     eax, strict dword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmCr0):
+    mov     cr0, eax                    ; enable protected mode & paging
+    jmp     LONG_MODE_CS : dword 0      ; offset will be patched to @LongMode
+@PatchLongModeOffset:
+
+BITS 64
+@LongMode:                              ; long-mode starts here
+    mov     rsp, strict qword 0         ; source operand will be patched
+ASM_PFX(gPatchSmmInitStack):
+    and     sp, 0xfff0                  ; make sure RSP is 16-byte aligned
+    ;
+    ; According to the X64 calling convention, XMM0~5 are volatile; we need to
+    ; save them before calling the C function.
+    ;
+    sub     rsp, 0x60
+    movdqa  [rsp], xmm0
+    movdqa  [rsp + 0x10], xmm1
+    movdqa  [rsp + 0x20], xmm2
+    movdqa  [rsp + 0x30], xmm3
+    movdqa  [rsp + 0x40], xmm4
+    movdqa  [rsp + 0x50], xmm5
+
+    add     rsp, -0x20
+    call    ASM_PFX(SmmInitHandler)
+    add     rsp, 0x20
+
+    ;
+    ; Restore XMM0~5 after calling C-function.
+    ;
+    movdqa  xmm0, [rsp]
+    movdqa  xmm1, [rsp + 0x10]
+    movdqa  xmm2, [rsp + 0x20]
+    movdqa  xmm3, [rsp + 0x30]
+    movdqa  xmm4, [rsp + 0x40]
+    movdqa  xmm5, [rsp + 0x50]
+
+    StuffRsb64
+    rsm
+
+BITS 16
+ASM_PFX(gcSmmInitTemplate):
+    mov ebp, [cs:@L1 - ASM_PFX(gcSmmInitTemplate) + 0x8000]
+    sub ebp, 0x30000
+    jmp ebp
+@L1:
+    DQ     0; ASM_PFX(SmmStartup)
+
+ASM_PFX(gcSmmInitSize): DW $ - ASM_PFX(gcSmmInitTemplate)
+
+BITS 64
+global ASM_PFX(SmmRelocationSemaphoreComplete)
+ASM_PFX(SmmRelocationSemaphoreComplete):
+    push    rax
+    mov     rax, [ASM_PFX(mRebasedFlag)]
+    mov     byte [rax], 1
+    pop     rax
+    jmp     [ASM_PFX(mSmmRelocationOriginalAddress)]
+
+;
+; Semaphore code running in 32-bit mode
+;
+BITS 32
+global ASM_PFX(SmmRelocationSemaphoreComplete32)
+ASM_PFX(SmmRelocationSemaphoreComplete32):
+    push    eax
+    mov     eax, strict dword 0                ; source operand will be patched
+ASM_PFX(gPatchRebasedFlagAddr32):
+    mov     byte [eax], 1
+    pop     eax
+    jmp     dword [dword 0]                    ; destination will be patched
+ASM_PFX(gPatchSmmRelocationOriginalAddressPtr32):
+
+BITS 64
+global ASM_PFX(SmmInitFixupAddress)
+ASM_PFX(SmmInitFixupAddress):
+    lea    rax, [@LongMode]
+    lea    rcx, [@PatchLongModeOffset - 6]
+    mov    dword [rcx], eax
+
+    lea    rax, [ASM_PFX(SmmStartup)]
+    lea    rcx, [@L1]
+    mov    qword [rcx], rax
+    ret
-- 
2.16.2.windows.1







Thread overview: 36+ messages
2024-04-10 13:57 [edk2-devel] [PATCH v1 00/13] Add SmmRelocationLib Wu, Jiaxin
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 01/13] UefiCpuPkg: Add SmmRelocationLib class Wu, Jiaxin
2024-04-10 13:57 ` Wu, Jiaxin [this message]
2024-04-11  3:18   ` [edk2-devel] [PATCH v1 02/13] UefiCpuPkg/SmmRelocationLib: Add SmmRelocationLib library instance Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 03/13] UefiCpuPkg/SmmRelocationLib: Add library instance for OVMF Wu, Jiaxin
2024-04-11  3:19   ` Ni, Ray
2024-04-11  7:11   ` Gerd Hoffmann
2024-04-15 13:04     ` Wu, Jiaxin
2024-04-16  7:35       ` Gerd Hoffmann
2024-04-16 10:12         ` Wu, Jiaxin
2024-04-16 10:30           ` Gerd Hoffmann
2024-04-16 11:34             ` Wu, Jiaxin
2024-04-17  7:04               ` Gerd Hoffmann
2024-04-17  7:45                 ` Wu, Jiaxin
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 04/13] UefiCpuPkg/SmmRelocationLib: Add library instance for AMD Wu, Jiaxin
2024-04-16 10:20   ` Abdul Lateef Attar via groups.io
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 05/13] UefiCpuPkg/UefiCpuPkg.dsc: Include SmmRelocationLib in UefiCpuPkg Wu, Jiaxin
2024-04-11  3:21   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 06/13] UefiPayloadPkg/UefiPayloadPkg.dsc: Include SmmRelocationLib Wu, Jiaxin
2024-04-10 13:59   ` Guo, Gua
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 07/13] OvmfPkg: Include SmmRelocationLib in OvmfPkg Wu, Jiaxin
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 08/13] OvmfPkg/PlatformInitLib: Create gEfiSmmSmramMemoryGuid Wu, Jiaxin
2024-04-11  3:27   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 09/13] OvmfPkg/SmmAccess: Consume gEfiSmmSmramMemoryGuid Wu, Jiaxin
2024-04-11  8:21   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 10/13] OvmfPkg/PlatformInitLib: Create gEfiAcpiVariableGuid Wu, Jiaxin
2024-04-11  8:20   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 11/13] OvmfPkg/SmmCpuFeaturesLib: Check Smbase Relocation is done or not Wu, Jiaxin
2024-04-11  8:22   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 12/13] OvmfPkg/PlatformPei: Relocate SmBases in PEI phase Wu, Jiaxin
2024-04-11  8:25   ` Ni, Ray
2024-04-10 13:57 ` [edk2-devel] [PATCH v1 13/13] UefiCpuPkg/PiSmmCpuDxeSmm: Remove SmBases relocation logic Wu, Jiaxin
2024-04-10 14:01 ` [edk2-devel] [PATCH v1 00/13] Add SmmRelocationLib Yao, Jiewen
2024-04-10 15:15   ` Wu, Jiaxin
2024-04-11  3:14 ` Ni, Ray
2024-04-11  4:31   ` Wu, Jiaxin
