* [PATCH 0/5] Put APs in 64 bit mode before handoff to OS
@ 2023-02-20 5:20 Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES Yuanhao Xie
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel
The purpose of this patch series is to put the APs in 64-bit mode
before handing off the boot process to the OS. To do this, duplicate
RelocateApLoop for the processors with SEV-ES enabled, allocate
contiguous memory for the AP loop code and stacks, then create page
tables and keep the APs in 64-bit mode.
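At a high level, the AP relocation path after this series looks like the
sketch below (a simplified illustration of the resulting MpInitLib logic,
not a verbatim excerpt; names are taken from the patches):
  if (CpuMpData->UseSevEsAPMethod) {
    //
    // SEV-ES: keep the original behavior and drop the AP to 32-bit mode
    //
    mReservedApLoop.AmdSevEntry (
      MwaitSupport, CpuMpData->ApTargetCState, CpuMpData->PmCodeSegment,
      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE, (UINTN)&mNumberToFinish,
      CpuMpData->Pm16CodeSegment, CpuMpData->SevEsAPBuffer, CpuMpData->WakeupBuffer
      );
  } else {
    //
    // All other processors: stay in 64-bit mode and switch to the 1:1
    // page table created for the reserved AP buffer
    //
    mReservedApLoop.GenericEntry (
      MwaitSupport, CpuMpData->ApTargetCState,
      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE, (UINTN)&mNumberToFinish,
      mApPageTable
      );
  }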
Yuanhao Xie (5):
UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES.
UefiCpuPkg: Contiguous memory allocation and code clean-up.
OvmfPkg: Add CpuPageTableLib required by MpInitLib.
UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
OvmfPkg/AmdSev/AmdSevX64.dsc | 3 ++-
OvmfPkg/CloudHv/CloudHvX64.dsc | 3 ++-
OvmfPkg/IntelTdx/IntelTdxX64.dsc | 4 +++-
OvmfPkg/Microvm/MicrovmX64.dsc | 3 ++-
OvmfPkg/OvmfPkgIa32X64.dsc | 3 ++-
OvmfPkg/OvmfPkgX64.dsc | 4 +++-
OvmfPkg/OvmfXen.dsc | 3 ++-
UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf | 6 +++++-
UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 149 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------
UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c | 23 +++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm | 11 ++++-------
UefiCpuPkg/Library/MpInitLib/MpEqu.inc | 22 ++++++++++++----------
UefiCpuPkg/Library/MpInitLib/MpLib.h | 45 ++++++++++++++++++++++++++++++++++++++++++++-
UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 178 ++++++++++++++++++++++++++++++++--------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/UefiCpuPkg.dsc | 3 ++-
UefiPayloadPkg/UefiPayloadPkg.dsc | 3 ++-
18 files changed, 474 insertions(+), 240 deletions(-)
create mode 100644 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
create mode 100644 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
--
2.36.1.windows.1
* [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES.
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
@ 2023-02-20 5:20 ` Yuanhao Xie
2023-02-21 9:22 ` Gerd Hoffmann
2023-02-20 5:20 ` [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
` (3 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel
Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo, Gerd Hoffmann,
Tom Lendacky, Laszlo Ersek
Processors with SEV-ES enabled keep following the original logic; in
all other cases the APs will be put in 64-bit mode before the boot
process is handed off to the OS. To achieve this, this patch
duplicates RelocateApLoop and the related code for the processors
with SEV-ES.
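Concretely, InitMpGlobalData then only has to pick which routine gets
copied into the reserved buffer; the logic added by this patch boils down
to the following (condensed from the hunk below):
  if (CpuMpData->UseSevEsAPMethod) {
    //
    // SEV-ES keeps the duplicated AMD SEV variant of the AP loop
    //
    CopyMem (mReservedApLoopFunc,
             CpuMpData->AddressMap.RelocateApLoopFuncAddressAmdSev,
             CpuMpData->AddressMap.RelocateApLoopFuncSizeAmdSev);
  } else {
    CopyMem (mReservedApLoopFunc,
             CpuMpData->AddressMap.RelocateApLoopFuncAddress,
             CpuMpData->AddressMap.RelocateApLoopFuncSize);
  }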
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 68 ++++++++++++++++++++++++++++++++++++++++++++------------------------
UefiCpuPkg/Library/MpInitLib/MpEqu.inc | 22 ++++++++++++----------
UefiCpuPkg/Library/MpInitLib/MpLib.h | 30 +++++++++++++++++++++++++++++-
UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 5 ++++-
5 files changed, 258 insertions(+), 36 deletions(-)
diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index a84e9e33ba..dd935a79d3 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -1,7 +1,7 @@
/** @file
MP initialize support functions for DXE phase.
- Copyright (c) 2016 - 2020, Intel Corporation. All rights reserved.<BR>
+ Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -378,32 +378,44 @@ RelocateApLoop (
IN OUT VOID *Buffer
)
{
- CPU_MP_DATA *CpuMpData;
- BOOLEAN MwaitSupport;
- ASM_RELOCATE_AP_LOOP AsmRelocateApLoopFunc;
- UINTN ProcessorNumber;
- UINTN StackStart;
+ CPU_MP_DATA *CpuMpData;
+ BOOLEAN MwaitSupport;
+ ASM_RELOCATE_AP_LOOP AsmRelocateApLoopFunc;
+ ASM_RELOCATE_AP_LOOP_AMDSEV AsmRelocateApLoopFuncAmdSev;
+ UINTN ProcessorNumber;
+ UINTN StackStart;
MpInitLibWhoAmI (&ProcessorNumber);
CpuMpData = GetCpuMpData ();
MwaitSupport = IsMwaitSupport ();
if (CpuMpData->UseSevEsAPMethod) {
- StackStart = CpuMpData->SevEsAPResetStackStart;
+ StackStart = CpuMpData->SevEsAPResetStackStart;
+ AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP_AMDSEV)(UINTN)mReservedApLoopFunc;
+ AsmRelocateApLoopFuncAmdSev (
+ MwaitSupport,
+ CpuMpData->ApTargetCState,
+ CpuMpData->PmCodeSegment,
+ StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+ (UINTN)&mNumberToFinish,
+ CpuMpData->Pm16CodeSegment,
+ CpuMpData->SevEsAPBuffer,
+ CpuMpData->WakeupBuffer
+ );
} else {
- StackStart = mReservedTopOfApStack;
+ StackStart = mReservedTopOfApStack;
+ AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
+ AsmRelocateApLoopFunc (
+ MwaitSupport,
+ CpuMpData->ApTargetCState,
+ CpuMpData->PmCodeSegment,
+ StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+ (UINTN)&mNumberToFinish,
+ CpuMpData->Pm16CodeSegment,
+ CpuMpData->SevEsAPBuffer,
+ CpuMpData->WakeupBuffer
+ );
}
- AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
- AsmRelocateApLoopFunc (
- MwaitSupport,
- CpuMpData->ApTargetCState,
- CpuMpData->PmCodeSegment,
- StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
- (UINTN)&mNumberToFinish,
- CpuMpData->Pm16CodeSegment,
- CpuMpData->SevEsAPBuffer,
- CpuMpData->WakeupBuffer
- );
//
// It should never reach here
//
@@ -582,11 +594,19 @@ InitMpGlobalData (
mReservedTopOfApStack = (UINTN)Address + ApSafeBufferSize;
ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
- CopyMem (
- mReservedApLoopFunc,
- CpuMpData->AddressMap.RelocateApLoopFuncAddress,
- CpuMpData->AddressMap.RelocateApLoopFuncSize
- );
+ if (CpuMpData->UseSevEsAPMethod) {
+ CopyMem (
+ mReservedApLoopFunc,
+ CpuMpData->AddressMap.RelocateApLoopFuncAddressAmdSev,
+ CpuMpData->AddressMap.RelocateApLoopFuncSizeAmdSev
+ );
+ } else {
+ CopyMem (
+ mReservedApLoopFunc,
+ CpuMpData->AddressMap.RelocateApLoopFuncAddress,
+ CpuMpData->AddressMap.RelocateApLoopFuncSize
+ );
+ }
Status = gBS->CreateEvent (
EVT_TIMER | EVT_NOTIFY_SIGNAL,
diff --git a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
index ebadcc6fb3..1472ef2024 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
+++ b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
@@ -1,5 +1,5 @@
;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
; SPDX-License-Identifier: BSD-2-Clause-Patent
;
; Module Name:
@@ -21,15 +21,17 @@ CPU_SWITCH_STATE_LOADED equ 2
; Equivalent NASM structure of MP_ASSEMBLY_ADDRESS_MAP
;
struc MP_ASSEMBLY_ADDRESS_MAP
- .RendezvousFunnelAddress CTYPE_UINTN 1
- .ModeEntryOffset CTYPE_UINTN 1
- .RendezvousFunnelSize CTYPE_UINTN 1
- .RelocateApLoopFuncAddress CTYPE_UINTN 1
- .RelocateApLoopFuncSize CTYPE_UINTN 1
- .ModeTransitionOffset CTYPE_UINTN 1
- .SwitchToRealNoNxOffset CTYPE_UINTN 1
- .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
- .SwitchToRealPM16ModeSize CTYPE_UINTN 1
+ .RendezvousFunnelAddress CTYPE_UINTN 1
+ .ModeEntryOffset CTYPE_UINTN 1
+ .RendezvousFunnelSize CTYPE_UINTN 1
+ .RelocateApLoopFuncAddress CTYPE_UINTN 1
+ .RelocateApLoopFuncSize CTYPE_UINTN 1
+ .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
+ .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
+ .ModeTransitionOffset CTYPE_UINTN 1
+ .SwitchToRealNoNxOffset CTYPE_UINTN 1
+ .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
+ .SwitchToRealPM16ModeSize CTYPE_UINTN 1
endstruc
;
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index f5086e497e..fe60084333 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -1,7 +1,7 @@
/** @file
Common header file for MP Initialize Library.
- Copyright (c) 2016 - 2022, Intel Corporation. All rights reserved.<BR>
+ Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2020, AMD Inc. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -179,6 +179,8 @@ typedef struct {
UINTN RendezvousFunnelSize;
UINT8 *RelocateApLoopFuncAddress;
UINTN RelocateApLoopFuncSize;
+ UINT8 *RelocateApLoopFuncAddressAmdSev;
+ UINTN RelocateApLoopFuncSizeAmdSev;
UINTN ModeTransitionOffset;
UINTN SwitchToRealNoNxOffset;
UINTN SwitchToRealPM16ModeOffset;
@@ -311,6 +313,7 @@ typedef struct {
#define AP_SAFE_STACK_SIZE 128
#define AP_RESET_STACK_SIZE AP_SAFE_STACK_SIZE
+STATIC_ASSERT ((AP_SAFE_STACK_SIZE & (CPU_STACK_ALIGNMENT - 1)) == 0, "AP_SAFE_STACK_SIZE is not aligned with CPU_STACK_ALIGNMENT");
#pragma pack(1)
@@ -373,6 +376,31 @@ typedef
IN UINTN WakeupBuffer
);
+/**
+ Assembly code to place the AP into a safe loop mode for AMD processors with SEV-ES enabled.
+ Place the AP into the targeted C-State if MONITOR is supported, otherwise
+ place the AP into the hlt state.
+ Place the AP in protected mode if it is currently in long mode, since the AP may be
+ woken up by a hardware event and must avoid accessing a page table that
+ may not be available while booting to the OS.
+ @param[in] MwaitSupport TRUE indicates MONITOR is supported.
+ FALSE indicates MONITOR is not supported.
+ @param[in] ApTargetCState Target C-State value.
+ @param[in] PmCodeSegment Protected mode code segment value.
+**/
+typedef
+ VOID
+(EFIAPI *ASM_RELOCATE_AP_LOOP_AMDSEV)(
+ IN BOOLEAN MwaitSupport,
+ IN UINTN ApTargetCState,
+ IN UINTN PmCodeSegment,
+ IN UINTN TopOfApStack,
+ IN UINTN NumberToFinish,
+ IN UINTN Pm16CodeSegment,
+ IN UINTN SevEsAPJumpTable,
+ IN UINTN WakeupBuffer
+ );
+
/**
Assembly code to get starting address and size of the rendezvous entry for APs.
Information for fixing a jump instruction in the code is also returned.
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
index 7c2469f9c5..6b48913306 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
@@ -346,3 +346,172 @@ PM16Mode:
iret
SwitchToRealProcEnd:
+;-------------------------------------------------------------------------------------
+; AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+;-------------------------------------------------------------------------------------
+
+AsmRelocateApLoopStartAmdSev:
+BITS 64
+ cmp qword [rsp + 56], 0 ; SevEsAPJumpTable
+ je NoSevEsAmdSev
+
+ ;
+ ; Perform some SEV-ES related setup before leaving 64-bit mode
+ ;
+ push rcx
+ push rdx
+
+ ;
+ ; Get the RDX reset value using CPUID
+ ;
+ mov rax, 1
+ cpuid
+ mov rsi, rax ; Save off the reset value for RDX
+
+ ;
+ ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
+ ; - Must be done while in 64-bit long mode so that writes to
+ ; the GHCB memory will be unencrypted.
+ ; - No NAE events can be generated once this is set otherwise
+ ; the AP_RESET_HOLD SW_EXITCODE will be overwritten.
+ ;
+ mov rcx, 0xc0010130
+ rdmsr ; Retrieve current GHCB address
+ shl rdx, 32
+ or rdx, rax
+
+ mov rdi, rdx
+ xor rax, rax
+ mov rcx, 0x800
+ shr rcx, 3
+ rep stosq ; Clear the GHCB
+
+ mov rax, 0x80000004 ; VMGEXIT AP_RESET_HOLD
+ mov [rdx + 0x390], rax
+ mov rax, 114 ; Set SwExitCode valid bit
+ bts [rdx + 0x3f0], rax
+ inc rax ; Set SwExitInfo1 valid bit
+ bts [rdx + 0x3f0], rax
+ inc rax ; Set SwExitInfo2 valid bit
+ bts [rdx + 0x3f0], rax
+
+ pop rdx
+ pop rcx
+
+NoSevEsAmdSev:
+ cli ; Disable interrupt before switching to 32-bit mode
+ mov rax, [rsp + 40] ; CountTofinish
+ lock dec dword [rax] ; (*CountTofinish)--
+
+ mov r10, [rsp + 48] ; Pm16CodeSegment
+ mov rax, [rsp + 56] ; SevEsAPJumpTable
+ mov rbx, [rsp + 64] ; WakeupBuffer
+ mov rsp, r9 ; TopOfApStack
+
+ push rax ; Save SevEsAPJumpTable
+ push rbx ; Save WakeupBuffer
+ push r10 ; Save Pm16CodeSegment
+ push rcx ; Save MwaitSupport
+ push rdx ; Save ApTargetCState
+
+ lea rax, [PmEntryAmdSev] ; rax <- The start address of transition code
+
+ push r8
+ push rax
+
+ ;
+ ; Clear R8 - R15, for reset, before going into 32-bit mode
+ ;
+ xor r8, r8
+ xor r9, r9
+ xor r10, r10
+ xor r11, r11
+ xor r12, r12
+ xor r13, r13
+ xor r14, r14
+ xor r15, r15
+
+ ;
+ ; Far return into 32-bit mode
+ ;
+o64 retf
+
+BITS 32
+PmEntryAmdSev:
+ mov eax, cr0
+ btr eax, 31 ; Clear CR0.PG
+ mov cr0, eax ; Disable paging and caches
+
+ mov ecx, 0xc0000080
+ rdmsr
+ and ah, ~ 1 ; Clear LME
+ wrmsr
+ mov eax, cr4
+ and al, ~ (1 << 5) ; Clear PAE
+ mov cr4, eax
+
+ pop edx
+ add esp, 4
+ pop ecx,
+ add esp, 4
+
+MwaitCheckAmdSev:
+ cmp cl, 1 ; Check mwait-monitor support
+ jnz HltLoopAmdSev
+ mov ebx, edx ; Save C-State to ebx
+MwaitLoopAmdSev:
+ cli
+ mov eax, esp ; Set Monitor Address
+ xor ecx, ecx ; ecx = 0
+ xor edx, edx ; edx = 0
+ monitor
+ mov eax, ebx ; Mwait Cx, Target C-State per eax[7:4]
+ shl eax, 4
+ mwait
+ jmp MwaitLoopAmdSev
+
+HltLoopAmdSev:
+ pop edx ; PM16CodeSegment
+ add esp, 4
+ pop ebx ; WakeupBuffer
+ add esp, 4
+ pop eax ; SevEsAPJumpTable
+ add esp, 4
+ cmp eax, 0 ; Check for SEV-ES
+ je DoHltAmdSev
+
+ cli
+ ;
+ ; SEV-ES is enabled, use VMGEXIT (GHCB information already
+ ; set by caller)
+ ;
+BITS 64
+ rep vmmcall
+BITS 32
+
+ ;
+ ; Back from VMGEXIT AP_HLT_LOOP
+ ; Push the FLAGS/CS/IP values to use
+ ;
+ push word 0x0002 ; EFLAGS
+ xor ecx, ecx
+ mov cx, [eax + 2] ; CS
+ push cx
+ mov cx, [eax] ; IP
+ push cx
+ push word 0x0000 ; For alignment, will be discarded
+
+ push edx
+ push ebx
+
+ mov edx, esi ; Restore RDX reset value
+
+ retf
+
+DoHltAmdSev:
+ cli
+ hlt
+ jmp DoHltAmdSev
+
+BITS 64
+AsmRelocateApLoopEndAmdSev:
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
index 5d71995bf8..d36f8ba06d 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
@@ -1,5 +1,5 @@
;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
; SPDX-License-Identifier: BSD-2-Clause-Patent
;
; Module Name:
@@ -459,6 +459,9 @@ ASM_PFX(AsmGetAddressMap):
lea rax, [AsmRelocateApLoopStart]
mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncAddress], rax
mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncSize], AsmRelocateApLoopEnd - AsmRelocateApLoopStart
+ lea rax, [AsmRelocateApLoopStartAmdSev]
+ mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncAddressAmdSev], rax
+ mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncSizeAmdSev], AsmRelocateApLoopEndAmdSev - AsmRelocateApLoopStartAmdSev
mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.ModeTransitionOffset], Flat32Start - RendezvousFunnelProcStart
mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.SwitchToRealNoNxOffset], SwitchToRealProcStart - Flat32Start
mov qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.SwitchToRealPM16ModeOffset], PM16Mode - RendezvousFunnelProcStart
--
2.36.1.windows.1
* [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES Yuanhao Xie
@ 2023-02-20 5:20 ` Yuanhao Xie
2023-02-21 9:26 ` [edk2-devel] " Gerd Hoffmann
2023-02-20 5:20 ` [Patch V2 3/5] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo
Allocate the memory for the AP stacks and the AP loop code as a
single contiguous region. The size is calculated taking into account
the difference in size of RelocateApLoopFunc between the two cases.
This patch also includes code refactoring to eliminate duplication,
non-descriptive variable names, etc.
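The resulting layout and sizing can be summarized as follows (condensed
from the hunks below: one AP_SAFE_STACK_SIZE stack per CPU plus the
relocated loop code, all in one reserved allocation below 4GB):
  StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
  FuncPages  = EFI_SIZE_TO_PAGES (ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE));
  Address    = BASE_4GB - 1;
  Status     = gBS->AllocatePages (
                      AllocateMaxAddress,
                      EfiReservedMemoryType,
                      StackPages + FuncPages,
                      &Address
                      );
  mReservedTopOfApStack = (UINTN)Address + EFI_PAGES_TO_SIZE (StackPages + FuncPages);
  mReservedApLoop.Data  = (VOID *)(UINTN)Address;   // loop code sits at the low end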
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------
UefiCpuPkg/Library/MpInitLib/MpLib.h | 6 ++++++
2 files changed, 79 insertions(+), 84 deletions(-)
diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index dd935a79d3..75743faf5f 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -20,14 +20,15 @@
#define AP_SAFE_STACK_SIZE 128
-CPU_MP_DATA *mCpuMpData = NULL;
-EFI_EVENT mCheckAllApsEvent = NULL;
-EFI_EVENT mMpInitExitBootServicesEvent = NULL;
-EFI_EVENT mLegacyBootEvent = NULL;
-volatile BOOLEAN mStopCheckAllApsStatus = TRUE;
-VOID *mReservedApLoopFunc = NULL;
-UINTN mReservedTopOfApStack;
-volatile UINT32 mNumberToFinish = 0;
+CPU_MP_DATA *mCpuMpData = NULL;
+EFI_EVENT mCheckAllApsEvent = NULL;
+EFI_EVENT mMpInitExitBootServicesEvent = NULL;
+EFI_EVENT mLegacyBootEvent = NULL;
+volatile BOOLEAN mStopCheckAllApsStatus = TRUE;
+UINTN mReservedTopOfApStack;
+volatile UINT32 mNumberToFinish = 0;
+RELOCATE_AP_LOOP_ENTRY mReservedApLoop;
+
//
// Begin wakeup buffer allocation below 0x88000
@@ -380,8 +381,6 @@ RelocateApLoop (
{
CPU_MP_DATA *CpuMpData;
BOOLEAN MwaitSupport;
- ASM_RELOCATE_AP_LOOP AsmRelocateApLoopFunc;
- ASM_RELOCATE_AP_LOOP_AMDSEV AsmRelocateApLoopFuncAmdSev;
UINTN ProcessorNumber;
UINTN StackStart;
@@ -389,31 +388,29 @@ RelocateApLoop (
CpuMpData = GetCpuMpData ();
MwaitSupport = IsMwaitSupport ();
if (CpuMpData->UseSevEsAPMethod) {
- StackStart = CpuMpData->SevEsAPResetStackStart;
- AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP_AMDSEV)(UINTN)mReservedApLoopFunc;
- AsmRelocateApLoopFuncAmdSev (
- MwaitSupport,
- CpuMpData->ApTargetCState,
- CpuMpData->PmCodeSegment,
- StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
- (UINTN)&mNumberToFinish,
- CpuMpData->Pm16CodeSegment,
- CpuMpData->SevEsAPBuffer,
- CpuMpData->WakeupBuffer
- );
+ StackStart = CpuMpData->SevEsAPResetStackStart;
+ mReservedApLoop.AmdSevEntry (
+ MwaitSupport,
+ CpuMpData->ApTargetCState,
+ CpuMpData->PmCodeSegment,
+ StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+ (UINTN)&mNumberToFinish,
+ CpuMpData->Pm16CodeSegment,
+ CpuMpData->SevEsAPBuffer,
+ CpuMpData->WakeupBuffer
+ );
} else {
- StackStart = mReservedTopOfApStack;
- AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
- AsmRelocateApLoopFunc (
- MwaitSupport,
- CpuMpData->ApTargetCState,
- CpuMpData->PmCodeSegment,
- StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
- (UINTN)&mNumberToFinish,
- CpuMpData->Pm16CodeSegment,
- CpuMpData->SevEsAPBuffer,
- CpuMpData->WakeupBuffer
- );
+ StackStart = mReservedTopOfApStack;
+ mReservedApLoop.GenericEntry (
+ MwaitSupport,
+ CpuMpData->ApTargetCState,
+ CpuMpData->PmCodeSegment,
+ StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+ (UINTN)&mNumberToFinish,
+ CpuMpData->Pm16CodeSegment,
+ CpuMpData->SevEsAPBuffer,
+ CpuMpData->WakeupBuffer
+ );
}
//
@@ -477,12 +474,16 @@ InitMpGlobalData (
)
{
EFI_STATUS Status;
- EFI_PHYSICAL_ADDRESS Address;
- UINTN ApSafeBufferSize;
+ MP_ASSEMBLY_ADDRESS_MAP *AddressMap;
+ UINTN StackPages;
+ UINTN FuncPages;
UINTN Index;
EFI_GCD_MEMORY_SPACE_DESCRIPTOR MemDesc;
UINTN StackBase;
CPU_INFO_IN_HOB *CpuInfoInHob;
+ EFI_PHYSICAL_ADDRESS Address;
+ UINT8 *ApLoopFunc;
+ UINTN ApLoopFuncSize;
SaveCpuMpData (CpuMpData);
@@ -537,6 +538,15 @@ InitMpGlobalData (
}
}
+ AddressMap = &CpuMpData->AddressMap;
+ if (CpuMpData->UseSevEsAPMethod) {
+ ApLoopFunc = AddressMap->RelocateApLoopFuncAddressAmdSev;
+ ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
+ } else {
+ ApLoopFunc = AddressMap->RelocateApLoopFuncAddress;
+ ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
+ }
+
//
// Avoid APs access invalid buffer data which allocated by BootServices,
// so we will allocate reserved data for AP loop code. We also need to
@@ -545,26 +555,31 @@ InitMpGlobalData (
// Allocating it in advance since memory services are not available in
// Exit Boot Services callback function.
//
- ApSafeBufferSize = EFI_PAGES_TO_SIZE (
- EFI_SIZE_TO_PAGES (
- CpuMpData->AddressMap.RelocateApLoopFuncSize
- )
- );
- Address = BASE_4GB - 1;
- Status = gBS->AllocatePages (
- AllocateMaxAddress,
- EfiReservedMemoryType,
- EFI_SIZE_TO_PAGES (ApSafeBufferSize),
- &Address
- );
- ASSERT_EFI_ERROR (Status);
+ // +------------+ (TopOfApStack)
+ // | Stack * N |
+ // +------------+
+ // | Padding |
+ // +------------+
+ // | Ap Loop |
+ // +------------+ (low address )
+ //
- mReservedApLoopFunc = (VOID *)(UINTN)Address;
- ASSERT (mReservedApLoopFunc != NULL);
+ Address = BASE_4GB - 1;
+ StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
+ FuncPages = EFI_SIZE_TO_PAGES (ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE));
- //
- // Make sure that the buffer memory is executable if NX protection is enabled
- // for EfiReservedMemoryType.
+ Status = gBS->AllocatePages (
+ AllocateMaxAddress,
+ EfiReservedMemoryType,
+ StackPages+FuncPages,
+ &Address
+ );
+ ASSERT_EFI_ERROR (Status);
+ // If a memory range has the EFI_MEMORY_XP attribute, OS loader
+ // may set the IA32_EFER.NXE (No-eXecution Enable) bit in IA32_EFER MSR,
+ // then set the XD (eXecution Disable) bit in the CPU PAE page table.
+ // Here is to make sure that the memory is executable if NX protection is
+ // enabled for EfiReservedMemoryType.
//
// TODO: Check EFI_MEMORY_XP bit set or not once it's available in DXE GCD
// service.
@@ -573,41 +588,15 @@ InitMpGlobalData (
if (!EFI_ERROR (Status)) {
gDS->SetMemorySpaceAttributes (
Address,
- ApSafeBufferSize,
+ ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE),
MemDesc.Attributes & (~EFI_MEMORY_XP)
);
}
- ApSafeBufferSize = EFI_PAGES_TO_SIZE (
- EFI_SIZE_TO_PAGES (
- CpuMpData->CpuCount * AP_SAFE_STACK_SIZE
- )
- );
- Address = BASE_4GB - 1;
- Status = gBS->AllocatePages (
- AllocateMaxAddress,
- EfiReservedMemoryType,
- EFI_SIZE_TO_PAGES (ApSafeBufferSize),
- &Address
- );
- ASSERT_EFI_ERROR (Status);
-
- mReservedTopOfApStack = (UINTN)Address + ApSafeBufferSize;
+ mReservedTopOfApStack = (UINTN)Address + EFI_PAGES_TO_SIZE (StackPages+FuncPages);
ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
- if (CpuMpData->UseSevEsAPMethod) {
- CopyMem (
- mReservedApLoopFunc,
- CpuMpData->AddressMap.RelocateApLoopFuncAddressAmdSev,
- CpuMpData->AddressMap.RelocateApLoopFuncSizeAmdSev
- );
- } else {
- CopyMem (
- mReservedApLoopFunc,
- CpuMpData->AddressMap.RelocateApLoopFuncAddress,
- CpuMpData->AddressMap.RelocateApLoopFuncSize
- );
- }
-
+ mReservedApLoop.Data = (VOID *)(UINTN)Address;
+ CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
Status = gBS->CreateEvent (
EVT_TIMER | EVT_NOTIFY_SIGNAL,
TPL_NOTIFY,
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index fe60084333..772a828045 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -401,6 +401,12 @@ typedef
IN UINTN WakeupBuffer
);
+typedef union {
+ VOID *Data;
+ ASM_RELOCATE_AP_LOOP_AMDSEV AmdSevEntry; // 64-bit AMD Sev processors
+ ASM_RELOCATE_AP_LOOP GenericEntry; // Intel processors (32-bit or 64-bit), 32-bit AMD processors, or AMD non-Sev processors
+} RELOCATE_AP_LOOP_ENTRY;
+
/**
Assembly code to get starting address and size of the rendezvous entry for APs.
Information for fixing a jump instruction in the code is also returned.
--
2.36.1.windows.1
* [Patch V2 3/5] OvmfPkg: Add CpuPageTableLib required by MpInitLib.
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
@ 2023-02-20 5:20 ` Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 4/5] UefiPayloadPkg: " Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
4 siblings, 0 replies; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel; +Cc: Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann
Add CpuPageTableLib required by MpInitLib in OvmfPkg.
Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
OvmfPkg/AmdSev/AmdSevX64.dsc | 3 ++-
OvmfPkg/CloudHv/CloudHvX64.dsc | 3 ++-
OvmfPkg/IntelTdx/IntelTdxX64.dsc | 4 +++-
OvmfPkg/Microvm/MicrovmX64.dsc | 3 ++-
OvmfPkg/OvmfPkgIa32X64.dsc | 3 ++-
OvmfPkg/OvmfPkgX64.dsc | 4 +++-
OvmfPkg/OvmfXen.dsc | 3 ++-
7 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/OvmfPkg/AmdSev/AmdSevX64.dsc b/OvmfPkg/AmdSev/AmdSevX64.dsc
index 1cebd6b4bc..f0c4dc2310 100644
--- a/OvmfPkg/AmdSev/AmdSevX64.dsc
+++ b/OvmfPkg/AmdSev/AmdSevX64.dsc
@@ -3,7 +3,7 @@
# virtual machine remote attestation and secret injection
#
# Copyright (c) 2020 James Bottomley, IBM Corporation.
-# Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
#
# SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -353,6 +353,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc b/OvmfPkg/CloudHv/CloudHvX64.dsc
index fda7d2b9e5..327f53ff3c 100644
--- a/OvmfPkg/CloudHv/CloudHvX64.dsc
+++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -404,6 +404,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/IntelTdx/IntelTdxX64.dsc b/OvmfPkg/IntelTdx/IntelTdxX64.dsc
index 95b9594ddc..d093660283 100644
--- a/OvmfPkg/IntelTdx/IntelTdxX64.dsc
+++ b/OvmfPkg/IntelTdx/IntelTdxX64.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -320,6 +320,7 @@
CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxDxeLib.inf
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
@@ -590,6 +591,7 @@
# Directly use DxeMpInitLib. It depends on DxeMpInitLibMpDepLib which
# checks the Protocol of gEfiMpInitLibMpDepProtocolGuid.
#
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NULL|OvmfPkg/Library/MpInitLibDepLib/DxeMpInitLibMpDepLib.inf
}
diff --git a/OvmfPkg/Microvm/MicrovmX64.dsc b/OvmfPkg/Microvm/MicrovmX64.dsc
index 0d65d21e65..76fc548650 100644
--- a/OvmfPkg/Microvm/MicrovmX64.dsc
+++ b/OvmfPkg/Microvm/MicrovmX64.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -403,6 +403,7 @@
PciLib|MdePkg/Library/BasePciLibPciExpress/BasePciLibPciExpress.inf
PciPcdProducerLib|OvmfPkg/Fdt/FdtPciPcdProducerLib/FdtPciPcdProducerLib.inf
PciExpressLib|OvmfPkg/Library/BaseCachingPciExpressLib/BaseCachingPciExpressLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 6b539814bd..51db692b10 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -414,6 +414,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index e3c64456df..04d50704c7 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) Microsoft Corporation.
#
@@ -435,6 +435,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
@@ -826,6 +827,7 @@
# Directly use DxeMpInitLib. It depends on DxeMpInitLibMpDepLib which
# checks the Protocol of gEfiMpInitLibMpDepProtocolGuid.
#
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NULL|OvmfPkg/Library/MpInitLibDepLib/DxeMpInitLibMpDepLib.inf
}
diff --git a/OvmfPkg/OvmfXen.dsc b/OvmfPkg/OvmfXen.dsc
index c328987e84..f1f02d969f 100644
--- a/OvmfPkg/OvmfXen.dsc
+++ b/OvmfPkg/OvmfXen.dsc
@@ -1,7 +1,7 @@
## @file
# EFI/Framework Open Virtual Machine Firmware (OVMF) platform
#
-# Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
# Copyright (c) 2019, Citrix Systems, Inc.
# Copyright (c) Microsoft Corporation.
@@ -339,6 +339,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
--
2.36.1.windows.1
* [Patch V2 4/5] UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
` (2 preceding siblings ...)
2023-02-20 5:20 ` [Patch V2 3/5] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
@ 2023-02-20 5:20 ` Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
4 siblings, 0 replies; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo
Add CpuPageTableLib required by MpInitLib in UefiPayloadPkg.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
UefiPayloadPkg/UefiPayloadPkg.dsc | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc b/UefiPayloadPkg/UefiPayloadPkg.dsc
index 2dbd875f37..8cbbbe9a05 100644
--- a/UefiPayloadPkg/UefiPayloadPkg.dsc
+++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
@@ -3,7 +3,7 @@
#
# Provides drivers and definitions to create uefi payload for bootloaders.
#
-# Copyright (c) 2014 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2014 - 2023, Intel Corporation. All rights reserved.<BR>
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: BSD-2-Clause-Patent
#
@@ -340,6 +340,7 @@
DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
!endif
CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
!if $(PERFORMANCE_MEASUREMENT_ENABLE)
PerformanceLib|MdeModulePkg/Library/DxePerformanceLib/DxePerformanceLib.inf
--
2.36.1.windows.1
* [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
` (3 preceding siblings ...)
2023-02-20 5:20 ` [Patch V2 4/5] UefiPayloadPkg: " Yuanhao Xie
@ 2023-02-20 5:20 ` Yuanhao Xie
2023-02-20 14:11 ` Lendacky, Thomas
4 siblings, 1 reply; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 5:20 UTC (permalink / raw)
To: devel
Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo, Gerd Hoffmann,
Tom Lendacky, Laszlo Ersek
Update the address of the allocated memory: keep the below-4GB
limitation only for the case where the APs still need to be switched
to 32-bit mode before handoff to the OS.
Remove the now-unused arguments of AsmRelocateApLoopStart and update
the stack offsets. For processors other than those with SEV-ES
enabled, keep the APs in 64-bit mode before handoff to the OS.
Create a page table for the allocated reserved memory.
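For the non-SEV-ES case the reserved region is identity-mapped with
CpuPageTableLib, using the usual two-call PageTableMap() pattern where the
first call only reports the required buffer size. A condensed sketch of
the new CreatePageTable() added below (PagingMode, MapAttribute and
MapMask are set up as in X64/CreatePageTable.c):
  PageTable           = 0;
  PageTableBufferSize = 0;
  Status = PageTableMap (
             &PageTable, PagingMode, NULL, &PageTableBufferSize,
             Address, Length, &MapAttribute, &MapMask
             );
  ASSERT (Status == EFI_BUFFER_TOO_SMALL);
  PageTableBuffer = AllocateReservedPages (EFI_SIZE_TO_PAGES (PageTableBufferSize));
  Status = PageTableMap (
             &PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize,
             Address, Length, &MapAttribute, &MapMask
             );
  //
  // The returned PageTable is passed as the Cr3 argument of
  // AsmRelocateApLoop and loaded into CR3 there.
  //
  return PageTable;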
Tested with the OVMF package to exercise the support on AMD processors.
Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf | 6 +++++-
UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 28 ++++++++++++++++++----------
UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c | 23 +++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm | 11 ++++-------
UefiCpuPkg/Library/MpInitLib/MpEqu.inc | 22 +++++++++++-----------
UefiCpuPkg/Library/MpInitLib/MpLib.h | 17 +++++++++++++----
UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 173 ++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------
UefiCpuPkg/UefiCpuPkg.dsc | 3 ++-
9 files changed, 186 insertions(+), 179 deletions(-)
diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
index cd07de3a3c..4285dd06b4 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
@@ -1,7 +1,7 @@
## @file
# MP Initialize Library instance for DXE driver.
#
-# Copyright (c) 2016 - 2021, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
# SPDX-License-Identifier: BSD-2-Clause-Patent
#
##
@@ -24,10 +24,12 @@
[Sources.IA32]
Ia32/AmdSev.c
Ia32/MpFuncs.nasm
+ Ia32/CreatePageTable.c
[Sources.X64]
X64/AmdSev.c
X64/MpFuncs.nasm
+ X64/CreatePageTable.c
[Sources.common]
AmdSev.c
@@ -56,6 +58,8 @@
PcdLib
CcExitLib
MicrocodeLib
+[LibraryClasses.X64]
+ CpuPageTableLib
[Protocols]
gEfiTimerArchProtocolGuid ## SOMETIMES_CONSUMES
diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index 75743faf5f..76953206ca 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -28,7 +28,7 @@ volatile BOOLEAN mStopCheckAllApsStatus = TRUE;
UINTN mReservedTopOfApStack;
volatile UINT32 mNumberToFinish = 0;
RELOCATE_AP_LOOP_ENTRY mReservedApLoop;
-
+UINTN mApPageTable;
//
// Begin wakeup buffer allocation below 0x88000
@@ -379,10 +379,10 @@ RelocateApLoop (
IN OUT VOID *Buffer
)
{
- CPU_MP_DATA *CpuMpData;
- BOOLEAN MwaitSupport;
- UINTN ProcessorNumber;
- UINTN StackStart;
+ CPU_MP_DATA *CpuMpData;
+ BOOLEAN MwaitSupport;
+ UINTN ProcessorNumber;
+ UINTN StackStart;
MpInitLibWhoAmI (&ProcessorNumber);
CpuMpData = GetCpuMpData ();
@@ -404,12 +404,9 @@ RelocateApLoop (
mReservedApLoop.GenericEntry (
MwaitSupport,
CpuMpData->ApTargetCState,
- CpuMpData->PmCodeSegment,
StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
(UINTN)&mNumberToFinish,
- CpuMpData->Pm16CodeSegment,
- CpuMpData->SevEsAPBuffer,
- CpuMpData->WakeupBuffer
+ mApPageTable
);
}
@@ -540,9 +537,11 @@ InitMpGlobalData (
AddressMap = &CpuMpData->AddressMap;
if (CpuMpData->UseSevEsAPMethod) {
+ Address = BASE_4GB - 1;
ApLoopFunc = AddressMap->RelocateApLoopFuncAddressAmdSev;
ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
} else {
+ Address = MAX_ADDRESS;
ApLoopFunc = AddressMap->RelocateApLoopFuncAddress;
ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
}
@@ -564,7 +563,6 @@ InitMpGlobalData (
// +------------+ (low address )
//
- Address = BASE_4GB - 1;
StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
FuncPages = EFI_SIZE_TO_PAGES (ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE));
@@ -597,6 +595,16 @@ InitMpGlobalData (
ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
mReservedApLoop.Data = (VOID *)(UINTN)Address;
CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
+ if (!CpuMpData->UseSevEsAPMethod) {
+ //
+ // non-Sev Processor
+ //
+ mApPageTable = CreatePageTable (
+ (UINTN)Address,
+ EFI_PAGES_TO_SIZE (StackPages+FuncPages)
+ );
+ }
+
Status = gBS->CreateEvent (
EVT_TIMER | EVT_NOTIFY_SIGNAL,
TPL_NOTIFY,
diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
new file mode 100644
index 0000000000..bec9b247c0
--- /dev/null
+++ b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
@@ -0,0 +1,23 @@
+/** @file
+ Function to create page table.
+ Only create page table for x64, and leave the CreatePageTable empty for Ia32.
+ Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include <Base.h>
+
+/**
+ Only create page table for x64, and leave the CreatePageTable empty for Ia32.
+ @param[in] Address The start of the linear address range.
+ @param[in] Length The length of the linear address range.
+ @return The page table to be created.
+**/
+UINTN
+CreatePageTable (
+ IN UINTN Address,
+ IN UINTN Length
+ )
+{
+ return 0;
+}
diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
index bfcdbd31c1..c65a825a23 100644
--- a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
@@ -1,5 +1,5 @@
;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
; SPDX-License-Identifier: BSD-2-Clause-Patent
;
; Module Name:
@@ -219,20 +219,17 @@ SwitchToRealProcEnd:
RendezvousFunnelProcEnd:
;-------------------------------------------------------------------------------------
-; AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
-;
-; The last three parameters (Pm16CodeSegment, SevEsAPJumpTable and WakeupBuffer) are
-; specific to SEV-ES support and are not applicable on IA32.
+; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
;-------------------------------------------------------------------------------------
AsmRelocateApLoopStart:
mov eax, esp
- mov esp, [eax + 16] ; TopOfApStack
+ mov esp, [eax + 12] ; TopOfApStack
push dword [eax] ; push return address for stack trace
push ebp
mov ebp, esp
mov ebx, [eax + 8] ; ApTargetCState
mov ecx, [eax + 4] ; MwaitSupport
- mov eax, [eax + 20] ; CountTofinish
+ mov eax, [eax + 16] ; CountTofinish
lock dec dword [eax] ; (*CountTofinish)--
cmp cl, 1 ; Check mwait-monitor support
jnz HltLoop
diff --git a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
index 1472ef2024..6730f2f411 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
+++ b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
@@ -21,17 +21,17 @@ CPU_SWITCH_STATE_LOADED equ 2
; Equivalent NASM structure of MP_ASSEMBLY_ADDRESS_MAP
;
struc MP_ASSEMBLY_ADDRESS_MAP
- .RendezvousFunnelAddress CTYPE_UINTN 1
- .ModeEntryOffset CTYPE_UINTN 1
- .RendezvousFunnelSize CTYPE_UINTN 1
- .RelocateApLoopFuncAddress CTYPE_UINTN 1
- .RelocateApLoopFuncSize CTYPE_UINTN 1
- .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
- .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
- .ModeTransitionOffset CTYPE_UINTN 1
- .SwitchToRealNoNxOffset CTYPE_UINTN 1
- .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
- .SwitchToRealPM16ModeSize CTYPE_UINTN 1
+ .RendezvousFunnelAddress CTYPE_UINTN 1
+ .ModeEntryOffset CTYPE_UINTN 1
+ .RendezvousFunnelSize CTYPE_UINTN 1
+ .RelocateApLoopFuncAddress CTYPE_UINTN 1
+ .RelocateApLoopFuncSize CTYPE_UINTN 1
+ .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
+ .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
+ .ModeTransitionOffset CTYPE_UINTN 1
+ .SwitchToRealNoNxOffset CTYPE_UINTN 1
+ .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
+ .SwitchToRealPM16ModeSize CTYPE_UINTN 1
endstruc
;
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index 772a828045..cc77843bea 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -368,12 +368,9 @@ typedef
(EFIAPI *ASM_RELOCATE_AP_LOOP)(
IN BOOLEAN MwaitSupport,
IN UINTN ApTargetCState,
- IN UINTN PmCodeSegment,
IN UINTN TopOfApStack,
IN UINTN NumberToFinish,
- IN UINTN Pm16CodeSegment,
- IN UINTN SevEsAPJumpTable,
- IN UINTN WakeupBuffer
+ IN UINTN Cr3
);
/**
@@ -498,6 +495,18 @@ GetSevEsAPMemory (
VOID
);
+/**
+ Create 1:1 mapping page table in reserved memory to map the specified address range.
+ @param[in] Address The start of the linear address range.
+ @param[in] Length The length of the linear address range.
+ @return The page table to be created.
+**/
+UINTN
+CreatePageTable (
+ IN UINTN Address,
+ IN UINTN Length
+ );
+
/**
This function will be called by BSP to wakeup AP.
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
new file mode 100644
index 0000000000..7cf91ed9c4
--- /dev/null
+++ b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
@@ -0,0 +1,82 @@
+/** @file
+ Function to create page table.
+ Only create page table for x64, and leave the CreatePageTable empty for Ia32.
+ Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+ SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+#include <Library/CpuPageTableLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Base.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/DebugLib.h>
+#include <Library/BaseLib.h>
+
+/**
+ Create 1:1 mapping page table in reserved memory to map the specified address range.
+ @param[in] Address The start of the linear address range.
+ @param[in] Length The length of the linear address range.
+ @return The page table to be created.
+**/
+UINTN
+CreatePageTable (
+ IN UINTN Address,
+ IN UINTN Length
+ )
+{
+ EFI_STATUS Status;
+ VOID *PageTableBuffer;
+ UINTN PageTableBufferSize;
+ UINTN PageTable;
+ PAGING_MODE PagingMode;
+ IA32_CR4 Cr4;
+
+ IA32_MAP_ATTRIBUTE MapAttribute;
+ IA32_MAP_ATTRIBUTE MapMask;
+
+ MapAttribute.Uint64 = Address;
+ MapAttribute.Bits.Present = 1;
+ MapAttribute.Bits.ReadWrite = 1;
+
+ MapMask.Bits.PageTableBaseAddress = 1;
+ MapMask.Bits.Present = 1;
+ MapMask.Bits.ReadWrite = 1;
+
+ PageTable = 0;
+ PageTableBufferSize = 0;
+
+ Cr4.UintN = AsmReadCr4 ();
+
+ if (Cr4.Bits.LA57 == 1) {
+ PagingMode = Paging5Level;
+ } else {
+ PagingMode = Paging4Level;
+ }
+
+ Status = PageTableMap (
+ &PageTable,
+ PagingMode,
+ NULL,
+ &PageTableBufferSize,
+ Address,
+ Length,
+ &MapAttribute,
+ &MapMask
+ );
+ ASSERT (Status == EFI_BUFFER_TOO_SMALL);
+ DEBUG ((DEBUG_INFO, "AP Page Table Buffer Size = %x\n", PageTableBufferSize));
+
+ PageTableBuffer = AllocateReservedPages (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+ ASSERT (PageTableBuffer != NULL);
+ Status = PageTableMap (
+ &PageTable,
+ PagingMode,
+ PageTableBuffer,
+ &PageTableBufferSize,
+ Address,
+ Length,
+ &MapAttribute,
+ &MapMask
+ );
+ ASSERT_EFI_ERROR (Status);
+ return PageTable;
+}
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
index d36f8ba06d..2bce04d99c 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
@@ -279,172 +279,55 @@ CProcedureInvoke:
RendezvousFunnelProcEnd:
;-------------------------------------------------------------------------------------
-; AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
+; This function is called during the finalization of MP initialization, before booting
+; to the OS, and aims to put the APs into either an MWAIT loop or a HLT loop.
;-------------------------------------------------------------------------------------
-AsmRelocateApLoopStart:
-BITS 64
- cmp qword [rsp + 56], 0 ; SevEsAPJumpTable
- je NoSevEs
-
- ;
- ; Perform some SEV-ES related setup before leaving 64-bit mode
- ;
- push rcx
- push rdx
-
- ;
- ; Get the RDX reset value using CPUID
- ;
- mov rax, 1
- cpuid
- mov rsi, rax ; Save off the reset value for RDX
-
- ;
- ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
- ; - Must be done while in 64-bit long mode so that writes to
- ; the GHCB memory will be unencrypted.
- ; - No NAE events can be generated once this is set otherwise
- ; the AP_RESET_HOLD SW_EXITCODE will be overwritten.
- ;
- mov rcx, 0xc0010130
- rdmsr ; Retrieve current GHCB address
- shl rdx, 32
- or rdx, rax
-
- mov rdi, rdx
- xor rax, rax
- mov rcx, 0x800
- shr rcx, 3
- rep stosq ; Clear the GHCB
-
- mov rax, 0x80000004 ; VMGEXIT AP_RESET_HOLD
- mov [rdx + 0x390], rax
- mov rax, 114 ; Set SwExitCode valid bit
- bts [rdx + 0x3f0], rax
- inc rax ; Set SwExitInfo1 valid bit
- bts [rdx + 0x3f0], rax
- inc rax ; Set SwExitInfo2 valid bit
- bts [rdx + 0x3f0], rax
+; +----------------+
+; | Cr3 | rsp+40
+; +----------------+
+; | CountTofinish | r9
+; +----------------+
+; | TopOfApStack | r8
+; +----------------+
+; | ApTargetCState | rdx
+; +----------------+
+; | MwaitSupport | rcx
+; +----------------+
+; | the return |
+; +----------------+ low address
- pop rdx
- pop rcx
-
-NoSevEs:
- cli ; Disable interrupt before switching to 32-bit mode
- mov rax, [rsp + 40] ; CountTofinish
+AsmRelocateApLoopStart:
+ mov rax, r9 ; CountTofinish
lock dec dword [rax] ; (*CountTofinish)--
- mov r10, [rsp + 48] ; Pm16CodeSegment
- mov rax, [rsp + 56] ; SevEsAPJumpTable
- mov rbx, [rsp + 64] ; WakeupBuffer
- mov rsp, r9 ; TopOfApStack
-
- push rax ; Save SevEsAPJumpTable
- push rbx ; Save WakeupBuffer
- push r10 ; Save Pm16CodeSegment
- push rcx ; Save MwaitSupport
- push rdx ; Save ApTargetCState
-
- lea rax, [PmEntry] ; rax <- The start address of transition code
-
- push r8
- push rax
-
- ;
- ; Clear R8 - R15, for reset, before going into 32-bit mode
- ;
- xor r8, r8
- xor r9, r9
- xor r10, r10
- xor r11, r11
- xor r12, r12
- xor r13, r13
- xor r14, r14
- xor r15, r15
-
- ;
- ; Far return into 32-bit mode
- ;
- retfq
-
-BITS 32
-PmEntry:
- mov eax, cr0
- btr eax, 31 ; Clear CR0.PG
- mov cr0, eax ; Disable paging and caches
-
- mov ecx, 0xc0000080
- rdmsr
- and ah, ~ 1 ; Clear LME
- wrmsr
- mov eax, cr4
- and al, ~ (1 << 5) ; Clear PAE
- mov cr4, eax
-
- pop edx
- add esp, 4
- pop ecx,
- add esp, 4
+ mov rax, [rsp + 40] ; Cr3
+ ; Do not push on old stack, since old stack is not mapped
+ ; in the page table pointed by cr3
+ mov cr3, rax
+ mov rsp, r8 ; TopOfApStack
MwaitCheck:
cmp cl, 1 ; Check mwait-monitor support
jnz HltLoop
- mov ebx, edx ; Save C-State to ebx
+ mov rbx, rdx ; Save C-State to ebx
+
MwaitLoop:
cli
- mov eax, esp ; Set Monitor Address
+ mov rax, rsp ; Set Monitor Address
xor ecx, ecx ; ecx = 0
xor edx, edx ; edx = 0
monitor
- mov eax, ebx ; Mwait Cx, Target C-State per eax[7:4]
+ mov rax, rbx ; Mwait Cx, Target C-State per eax[7:4]
shl eax, 4
mwait
jmp MwaitLoop
HltLoop:
- pop edx ; PM16CodeSegment
- add esp, 4
- pop ebx ; WakeupBuffer
- add esp, 4
- pop eax ; SevEsAPJumpTable
- add esp, 4
- cmp eax, 0 ; Check for SEV-ES
- je DoHlt
-
- cli
- ;
- ; SEV-ES is enabled, use VMGEXIT (GHCB information already
- ; set by caller)
- ;
-BITS 64
- rep vmmcall
-BITS 32
-
- ;
- ; Back from VMGEXIT AP_HLT_LOOP
- ; Push the FLAGS/CS/IP values to use
- ;
- push word 0x0002 ; EFLAGS
- xor ecx, ecx
- mov cx, [eax + 2] ; CS
- push cx
- mov cx, [eax] ; IP
- push cx
- push word 0x0000 ; For alignment, will be discarded
-
- push edx
- push ebx
-
- mov edx, esi ; Restore RDX reset value
-
- retf
-
-DoHlt:
cli
hlt
- jmp DoHlt
+ jmp HltLoop
-BITS 64
AsmRelocateApLoopEnd:
;-------------------------------------------------------------------------------------
diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
index f9a46089d2..062a43d7a7 100644
--- a/UefiCpuPkg/UefiCpuPkg.dsc
+++ b/UefiCpuPkg/UefiCpuPkg.dsc
@@ -1,7 +1,7 @@
## @file
# UefiCpuPkg Package
#
-# Copyright (c) 2007 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2007 - 2023, Intel Corporation. All rights reserved.<BR>
#
# SPDX-License-Identifier: BSD-2-Clause-Patent
#
@@ -94,6 +94,7 @@
MemoryAllocationLib|MdePkg/Library/UefiMemoryAllocationLib/UefiMemoryAllocationLib.inf
HobLib|MdePkg/Library/DxeHobLib/DxeHobLib.inf
CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
+ CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
RegisterCpuFeaturesLib|UefiCpuPkg/Library/RegisterCpuFeaturesLib/DxeRegisterCpuFeaturesLib.inf
CpuCacheInfoLib|UefiCpuPkg/Library/CpuCacheInfoLib/DxeCpuCacheInfoLib.inf
--
2.36.1.windows.1
* Re: [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
2023-02-20 5:20 ` [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
@ 2023-02-20 14:11 ` Lendacky, Thomas
2023-02-20 17:43 ` [edk2-devel] " Yuanhao Xie
0 siblings, 1 reply; 14+ messages in thread
From: Lendacky, Thomas @ 2023-02-20 14:11 UTC (permalink / raw)
To: Yuanhao Xie, devel
Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo, Gerd Hoffmann,
Laszlo Ersek
On 2/19/23 23:20, Yuanhao Xie wrote:
> Update the address of the allocated memory: keep the below-4GB
> limitation only for the case where the APs still need to be switched
> to 32-bit mode before handoff to the OS.
>
> Remove the now-unused arguments of AsmRelocateApLoopStart and update
> the stack offsets. For processors other than those with SEV-ES
> enabled, keep the APs in 64-bit mode before handoff to the OS.
>
> Create a page table for the allocated reserved memory.
>
> Tested with the OVMF package to exercise the support on AMD processors.
>
> Cc: Guo Dong <guo.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Sean Rhodes <sean@starlabs.systems>
> Cc: James Lu <james.lu@intel.com>
> Cc: Gua Guo <gua.guo@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
I only tested the pull request that you pointed me at which didn't have
all these patches (basically it was only patch #1), so you can't really
put my Tested-by: on this.
You didn't send an updated link, so I didn't test the full series. But I
will test this full series. Do you have a link to a tree?
Thanks,
Tom
> Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
> Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
> ---
> UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf | 6 +++++-
> UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 28 ++++++++++++++++++----------
> UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c | 23 +++++++++++++++++++++++
> UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm | 11 ++++-------
> UefiCpuPkg/Library/MpInitLib/MpEqu.inc | 22 +++++++++++-----------
> UefiCpuPkg/Library/MpInitLib/MpLib.h | 17 +++++++++++++----
> UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 173 ++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------
> UefiCpuPkg/UefiCpuPkg.dsc | 3 ++-
> 9 files changed, 186 insertions(+), 179 deletions(-)
>
> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
> index cd07de3a3c..4285dd06b4 100644
> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
> @@ -1,7 +1,7 @@
> ## @file
> # MP Initialize Library instance for DXE driver.
> #
> -# Copyright (c) 2016 - 2021, Intel Corporation. All rights reserved.<BR>
> +# Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
> # SPDX-License-Identifier: BSD-2-Clause-Patent
> #
> ##
> @@ -24,10 +24,12 @@
> [Sources.IA32]
> Ia32/AmdSev.c
> Ia32/MpFuncs.nasm
> + Ia32/CreatePageTable.c
>
> [Sources.X64]
> X64/AmdSev.c
> X64/MpFuncs.nasm
> + X64/CreatePageTable.c
>
> [Sources.common]
> AmdSev.c
> @@ -56,6 +58,8 @@
> PcdLib
> CcExitLib
> MicrocodeLib
> +[LibraryClasses.X64]
> + CpuPageTableLib
>
> [Protocols]
> gEfiTimerArchProtocolGuid ## SOMETIMES_CONSUMES
> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> index 75743faf5f..76953206ca 100644
> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> @@ -28,7 +28,7 @@ volatile BOOLEAN mStopCheckAllApsStatus = TRUE;
> UINTN mReservedTopOfApStack;
> volatile UINT32 mNumberToFinish = 0;
> RELOCATE_AP_LOOP_ENTRY mReservedApLoop;
> -
> +UINTN mApPageTable;
>
> //
> // Begin wakeup buffer allocation below 0x88000
> @@ -379,10 +379,10 @@ RelocateApLoop (
> IN OUT VOID *Buffer
> )
> {
> - CPU_MP_DATA *CpuMpData;
> - BOOLEAN MwaitSupport;
> - UINTN ProcessorNumber;
> - UINTN StackStart;
> + CPU_MP_DATA *CpuMpData;
> + BOOLEAN MwaitSupport;
> + UINTN ProcessorNumber;
> + UINTN StackStart;
>
> MpInitLibWhoAmI (&ProcessorNumber);
> CpuMpData = GetCpuMpData ();
> @@ -404,12 +404,9 @@ RelocateApLoop (
> mReservedApLoop.GenericEntry (
> MwaitSupport,
> CpuMpData->ApTargetCState,
> - CpuMpData->PmCodeSegment,
> StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
> (UINTN)&mNumberToFinish,
> - CpuMpData->Pm16CodeSegment,
> - CpuMpData->SevEsAPBuffer,
> - CpuMpData->WakeupBuffer
> + mApPageTable
> );
> }
>
> @@ -540,9 +537,11 @@ InitMpGlobalData (
>
> AddressMap = &CpuMpData->AddressMap;
> if (CpuMpData->UseSevEsAPMethod) {
> + Address = BASE_4GB - 1;
> ApLoopFunc = AddressMap->RelocateApLoopFuncAddressAmdSev;
> ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
> } else {
> + Address = MAX_ADDRESS;
> ApLoopFunc = AddressMap->RelocateApLoopFuncAddress;
> ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
> }
> @@ -564,7 +563,6 @@ InitMpGlobalData (
> // +------------+ (low address )
> //
>
> - Address = BASE_4GB - 1;
> StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
> FuncPages = EFI_SIZE_TO_PAGES (ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE));
>
> @@ -597,6 +595,16 @@ InitMpGlobalData (
> ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
> mReservedApLoop.Data = (VOID *)(UINTN)Address;
> CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
> + if (!CpuMpData->UseSevEsAPMethod) {
> + //
> + // non-Sev Processor
> + //
> + mApPageTable = CreatePageTable (
> + (UINTN)Address,
> + EFI_PAGES_TO_SIZE (StackPages+FuncPages)
> + );
> + }
> +
> Status = gBS->CreateEvent (
> EVT_TIMER | EVT_NOTIFY_SIGNAL,
> TPL_NOTIFY,
> diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
> new file mode 100644
> index 0000000000..bec9b247c0
> --- /dev/null
> +++ b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
> @@ -0,0 +1,23 @@
> +/** @file
> + Function to create the page table.
> + Only create the page table for X64; leave CreatePageTable empty for IA32.
> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +**/
> +
> +#include <Base.h>
> +
> +/**
> + Only create the page table for X64; for IA32 this function is an empty stub.
> + @param[in] Address The start of the linear address range.
> + @param[in] Length The length of the linear address range.
> + @return 0 for IA32, since no page table is created.
> +**/
> +UINTN
> +CreatePageTable (
> + IN UINTN Address,
> + IN UINTN Length
> + )
> +{
> + return 0;
> +}
> diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
> index bfcdbd31c1..c65a825a23 100644
> --- a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
> @@ -1,5 +1,5 @@
> ;------------------------------------------------------------------------------ ;
> -; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
> +; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
> ; SPDX-License-Identifier: BSD-2-Clause-Patent
> ;
> ; Module Name:
> @@ -219,20 +219,17 @@ SwitchToRealProcEnd:
> RendezvousFunnelProcEnd:
>
> ;-------------------------------------------------------------------------------------
> -; AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> -;
> -; The last three parameters (Pm16CodeSegment, SevEsAPJumpTable and WakeupBuffer) are
> -; specific to SEV-ES support and are not applicable on IA32.
> +; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
> ;-------------------------------------------------------------------------------------
> AsmRelocateApLoopStart:
> mov eax, esp
> - mov esp, [eax + 16] ; TopOfApStack
> + mov esp, [eax + 12] ; TopOfApStack
> push dword [eax] ; push return address for stack trace
> push ebp
> mov ebp, esp
> mov ebx, [eax + 8] ; ApTargetCState
> mov ecx, [eax + 4] ; MwaitSupport
> - mov eax, [eax + 20] ; CountTofinish
> + mov eax, [eax + 16] ; CountTofinish
> lock dec dword [eax] ; (*CountTofinish)--
> cmp cl, 1 ; Check mwait-monitor support
> jnz HltLoop
> diff --git a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
> index 1472ef2024..6730f2f411 100644
> --- a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
> +++ b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
> @@ -21,17 +21,17 @@ CPU_SWITCH_STATE_LOADED equ 2
> ; Equivalent NASM structure of MP_ASSEMBLY_ADDRESS_MAP
> ;
> struc MP_ASSEMBLY_ADDRESS_MAP
> - .RendezvousFunnelAddress CTYPE_UINTN 1
> - .ModeEntryOffset CTYPE_UINTN 1
> - .RendezvousFunnelSize CTYPE_UINTN 1
> - .RelocateApLoopFuncAddress CTYPE_UINTN 1
> - .RelocateApLoopFuncSize CTYPE_UINTN 1
> - .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
> - .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
> - .ModeTransitionOffset CTYPE_UINTN 1
> - .SwitchToRealNoNxOffset CTYPE_UINTN 1
> - .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
> - .SwitchToRealPM16ModeSize CTYPE_UINTN 1
> + .RendezvousFunnelAddress CTYPE_UINTN 1
> + .ModeEntryOffset CTYPE_UINTN 1
> + .RendezvousFunnelSize CTYPE_UINTN 1
> + .RelocateApLoopFuncAddress CTYPE_UINTN 1
> + .RelocateApLoopFuncSize CTYPE_UINTN 1
> + .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
> + .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
> + .ModeTransitionOffset CTYPE_UINTN 1
> + .SwitchToRealNoNxOffset CTYPE_UINTN 1
> + .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
> + .SwitchToRealPM16ModeSize CTYPE_UINTN 1
> endstruc
>
> ;
> diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
> index 772a828045..cc77843bea 100644
> --- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
> +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
> @@ -368,12 +368,9 @@ typedef
> (EFIAPI *ASM_RELOCATE_AP_LOOP)(
> IN BOOLEAN MwaitSupport,
> IN UINTN ApTargetCState,
> - IN UINTN PmCodeSegment,
> IN UINTN TopOfApStack,
> IN UINTN NumberToFinish,
> - IN UINTN Pm16CodeSegment,
> - IN UINTN SevEsAPJumpTable,
> - IN UINTN WakeupBuffer
> + IN UINTN Cr3
> );
>
> /**
> @@ -498,6 +495,18 @@ GetSevEsAPMemory (
> VOID
> );
>
> +/**
> + Create 1:1 mapping page table in reserved memory to map the specified address range.
> + @param[in] Address The start of the linear address range.
> + @param[in] Length The length of the linear address range.
> + @return The page table to be created.
> +**/
> +UINTN
> +CreatePageTable (
> + IN UINTN Address,
> + IN UINTN Length
> + );
> +
> /**
> This function will be called by BSP to wakeup AP.
>
> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
> new file mode 100644
> index 0000000000..7cf91ed9c4
> --- /dev/null
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
> @@ -0,0 +1,82 @@
> +/** @file
> + Function to create the page table.
> + Only create the page table for X64; leave CreatePageTable empty for IA32.
> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
> + SPDX-License-Identifier: BSD-2-Clause-Patent
> +**/
> +#include <Library/CpuPageTableLib.h>
> +#include <Library/MemoryAllocationLib.h>
> +#include <Base.h>
> +#include <Library/BaseMemoryLib.h>
> +#include <Library/DebugLib.h>
> +#include <Library/BaseLib.h>
> +
> +/**
> + Create 1:1 mapping page table in reserved memory to map the specified address range.
> + @param[in] Address The start of the linear address range.
> + @param[in] Length The length of the linear address range.
> + @return The page table to be created.
> +**/
> +UINTN
> +CreatePageTable (
> + IN UINTN Address,
> + IN UINTN Length
> + )
> +{
> + EFI_STATUS Status;
> + VOID *PageTableBuffer;
> + UINTN PageTableBufferSize;
> + UINTN PageTable;
> + PAGING_MODE PagingMode;
> + IA32_CR4 Cr4;
> +
> + IA32_MAP_ATTRIBUTE MapAttribute;
> + IA32_MAP_ATTRIBUTE MapMask;
> +
> + MapAttribute.Uint64 = Address;
> + MapAttribute.Bits.Present = 1;
> + MapAttribute.Bits.ReadWrite = 1;
> +
> + MapMask.Bits.PageTableBaseAddress = 1;
> + MapMask.Bits.Present = 1;
> + MapMask.Bits.ReadWrite = 1;
> +
> + PageTable = 0;
> + PageTableBufferSize = 0;
> +
> + Cr4.UintN = AsmReadCr4 ();
> +
> + if (Cr4.Bits.LA57 == 1) {
> + PagingMode = Paging5Level;
> + } else {
> + PagingMode = Paging4Level;
> + }
> +
> + Status = PageTableMap (
> + &PageTable,
> + PagingMode,
> + NULL,
> + &PageTableBufferSize,
> + Address,
> + Length,
> + &MapAttribute,
> + &MapMask
> + );
> + ASSERT (Status == EFI_BUFFER_TOO_SMALL);
> + DEBUG ((DEBUG_INFO, "AP Page Table Buffer Size = %x\n", PageTableBufferSize));
> +
> + PageTableBuffer = AllocateReservedPages (EFI_SIZE_TO_PAGES (PageTableBufferSize));
> + ASSERT (PageTableBuffer != NULL);
> + Status = PageTableMap (
> + &PageTable,
> + PagingMode,
> + PageTableBuffer,
> + &PageTableBufferSize,
> + Address,
> + Length,
> + &MapAttribute,
> + &MapMask
> + );
> + ASSERT_EFI_ERROR (Status);
> + return PageTable;
> +}
> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
> index d36f8ba06d..2bce04d99c 100644
> --- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
> @@ -279,172 +279,55 @@ CProcedureInvoke:
> RendezvousFunnelProcEnd:
>
> ;-------------------------------------------------------------------------------------
> -; AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> +; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
> +; This function is called during the finalization of MP initialization, before
> +; booting to the OS, and aims to put the APs into either MWAIT or HLT.
> ;-------------------------------------------------------------------------------------
> -AsmRelocateApLoopStart:
> -BITS 64
> - cmp qword [rsp + 56], 0 ; SevEsAPJumpTable
> - je NoSevEs
> -
> - ;
> - ; Perform some SEV-ES related setup before leaving 64-bit mode
> - ;
> - push rcx
> - push rdx
> -
> - ;
> - ; Get the RDX reset value using CPUID
> - ;
> - mov rax, 1
> - cpuid
> - mov rsi, rax ; Save off the reset value for RDX
> -
> - ;
> - ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
> - ; - Must be done while in 64-bit long mode so that writes to
> - ; the GHCB memory will be unencrypted.
> - ; - No NAE events can be generated once this is set otherwise
> - ; the AP_RESET_HOLD SW_EXITCODE will be overwritten.
> - ;
> - mov rcx, 0xc0010130
> - rdmsr ; Retrieve current GHCB address
> - shl rdx, 32
> - or rdx, rax
> -
> - mov rdi, rdx
> - xor rax, rax
> - mov rcx, 0x800
> - shr rcx, 3
> - rep stosq ; Clear the GHCB
> -
> - mov rax, 0x80000004 ; VMGEXIT AP_RESET_HOLD
> - mov [rdx + 0x390], rax
> - mov rax, 114 ; Set SwExitCode valid bit
> - bts [rdx + 0x3f0], rax
> - inc rax ; Set SwExitInfo1 valid bit
> - bts [rdx + 0x3f0], rax
> - inc rax ; Set SwExitInfo2 valid bit
> - bts [rdx + 0x3f0], rax
> +; +----------------+
> +; | Cr3 | rsp+40
> +; +----------------+
> +; | CountTofinish | r9
> +; +----------------+
> +; | TopOfApStack | r8
> +; +----------------+
> +; | ApTargetCState | rdx
> +; +----------------+
> +; | MwaitSupport | rcx
> +; +----------------+
> +; | the return |
> +; +----------------+ low address
>
> - pop rdx
> - pop rcx
> -
> -NoSevEs:
> - cli ; Disable interrupt before switching to 32-bit mode
> - mov rax, [rsp + 40] ; CountTofinish
> +AsmRelocateApLoopStart:
> + mov rax, r9 ; CountTofinish
> lock dec dword [rax] ; (*CountTofinish)--
>
> - mov r10, [rsp + 48] ; Pm16CodeSegment
> - mov rax, [rsp + 56] ; SevEsAPJumpTable
> - mov rbx, [rsp + 64] ; WakeupBuffer
> - mov rsp, r9 ; TopOfApStack
> -
> - push rax ; Save SevEsAPJumpTable
> - push rbx ; Save WakeupBuffer
> - push r10 ; Save Pm16CodeSegment
> - push rcx ; Save MwaitSupport
> - push rdx ; Save ApTargetCState
> -
> - lea rax, [PmEntry] ; rax <- The start address of transition code
> -
> - push r8
> - push rax
> -
> - ;
> - ; Clear R8 - R15, for reset, before going into 32-bit mode
> - ;
> - xor r8, r8
> - xor r9, r9
> - xor r10, r10
> - xor r11, r11
> - xor r12, r12
> - xor r13, r13
> - xor r14, r14
> - xor r15, r15
> -
> - ;
> - ; Far return into 32-bit mode
> - ;
> - retfq
> -
> -BITS 32
> -PmEntry:
> - mov eax, cr0
> - btr eax, 31 ; Clear CR0.PG
> - mov cr0, eax ; Disable paging and caches
> -
> - mov ecx, 0xc0000080
> - rdmsr
> - and ah, ~ 1 ; Clear LME
> - wrmsr
> - mov eax, cr4
> - and al, ~ (1 << 5) ; Clear PAE
> - mov cr4, eax
> -
> - pop edx
> - add esp, 4
> - pop ecx,
> - add esp, 4
> + mov rax, [rsp + 40] ; Cr3
> + ; Do not push on the old stack, since the old stack is not mapped
> + ; in the page table pointed to by Cr3
> + mov cr3, rax
> + mov rsp, r8 ; TopOfApStack
>
> MwaitCheck:
> cmp cl, 1 ; Check mwait-monitor support
> jnz HltLoop
> - mov ebx, edx ; Save C-State to ebx
> + mov rbx, rdx ; Save C-State to rbx
> +
> MwaitLoop:
> cli
> - mov eax, esp ; Set Monitor Address
> + mov rax, rsp ; Set Monitor Address
> xor ecx, ecx ; ecx = 0
> xor edx, edx ; edx = 0
> monitor
> - mov eax, ebx ; Mwait Cx, Target C-State per eax[7:4]
> + mov rax, rbx ; Mwait Cx, Target C-State per eax[7:4]
> shl eax, 4
> mwait
> jmp MwaitLoop
>
> HltLoop:
> - pop edx ; PM16CodeSegment
> - add esp, 4
> - pop ebx ; WakeupBuffer
> - add esp, 4
> - pop eax ; SevEsAPJumpTable
> - add esp, 4
> - cmp eax, 0 ; Check for SEV-ES
> - je DoHlt
> -
> - cli
> - ;
> - ; SEV-ES is enabled, use VMGEXIT (GHCB information already
> - ; set by caller)
> - ;
> -BITS 64
> - rep vmmcall
> -BITS 32
> -
> - ;
> - ; Back from VMGEXIT AP_HLT_LOOP
> - ; Push the FLAGS/CS/IP values to use
> - ;
> - push word 0x0002 ; EFLAGS
> - xor ecx, ecx
> - mov cx, [eax + 2] ; CS
> - push cx
> - mov cx, [eax] ; IP
> - push cx
> - push word 0x0000 ; For alignment, will be discarded
> -
> - push edx
> - push ebx
> -
> - mov edx, esi ; Restore RDX reset value
> -
> - retf
> -
> -DoHlt:
> cli
> hlt
> - jmp DoHlt
> + jmp HltLoop
>
> -BITS 64
> AsmRelocateApLoopEnd:
>
> ;-------------------------------------------------------------------------------------
> diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
> index f9a46089d2..062a43d7a7 100644
> --- a/UefiCpuPkg/UefiCpuPkg.dsc
> +++ b/UefiCpuPkg/UefiCpuPkg.dsc
> @@ -1,7 +1,7 @@
> ## @file
> # UefiCpuPkg Package
> #
> -# Copyright (c) 2007 - 2022, Intel Corporation. All rights reserved.<BR>
> +# Copyright (c) 2007 - 2023, Intel Corporation. All rights reserved.<BR>
> #
> # SPDX-License-Identifier: BSD-2-Clause-Patent
> #
> @@ -94,6 +94,7 @@
> MemoryAllocationLib|MdePkg/Library/UefiMemoryAllocationLib/UefiMemoryAllocationLib.inf
> HobLib|MdePkg/Library/DxeHobLib/DxeHobLib.inf
> CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
> + CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
> MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
> RegisterCpuFeaturesLib|UefiCpuPkg/Library/RegisterCpuFeaturesLib/DxeRegisterCpuFeaturesLib.inf
> CpuCacheInfoLib|UefiCpuPkg/Library/CpuCacheInfoLib/DxeCpuCacheInfoLib.inf
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [edk2-devel] [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
2023-02-20 14:11 ` Lendacky, Thomas
@ 2023-02-20 17:43 ` Yuanhao Xie
2023-02-20 18:05 ` Lendacky, Thomas
0 siblings, 1 reply; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-20 17:43 UTC (permalink / raw)
To: thomas.lendacky@amd.com
Cc: Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James, Guo, Gua,
Gerd Hoffmann, Laszlo Ersek, devel@edk2.groups.io
Hi Tom,
This series is the same as the one in the link I sent you last Wednesday; the latest update was pushed 5 days ago: https://github.com/tianocore/edk2/pull/4012.
My last request email was sent on the 17th and you responded quickly on the 18th, so I expected the testing had been done against the updated version.
Let's test it again. Please allow me one more day for the update, since I would like to make a few more changes based on Marvin Häuser's feedback.
Thanks in advance.
Yuanhao
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [edk2-devel] [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
2023-02-20 17:43 ` [edk2-devel] " Yuanhao Xie
@ 2023-02-20 18:05 ` Lendacky, Thomas
0 siblings, 0 replies; 14+ messages in thread
From: Lendacky, Thomas @ 2023-02-20 18:05 UTC (permalink / raw)
To: Xie, Yuanhao
Cc: Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James, Guo, Gua,
Gerd Hoffmann, Laszlo Ersek, devel@edk2.groups.io
On 2/20/23 11:43, Xie, Yuanhao wrote:
> Hi Tom,
>
> This series is the same as the one in the link I sent you last Wednesday; the latest update was pushed 5 days ago: https://github.com/tianocore/edk2/pull/4012.
> My last request email was sent on the 17th and you responded quickly on the 18th, so I expected the testing had been done against the updated version.
That was strange; when I pulled the tree, there was only one other commit
above the first patch in the pull request. Not sure what happened there...
I pulled the tree again and now see all the commits.
>
>
> Let's test it again. Please allow me one more day for the update, since I would like to make a few more changes based on Marvin Häuser's feedback.
Ok, I'll wait another day and re-test everything.
Thanks,
Tom
>
> Thanks in advance.
> Yuanhao
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Lendacky, Thomas via groups.io
> Sent: Monday, February 20, 2023 10:11 PM
> To: Xie, Yuanhao <yuanhao.xie@intel.com>; devel@edk2.groups.io
> Cc: Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>; Gerd Hoffmann <kraxel@redhat.com>; Laszlo Ersek <lersek@redhat.com>
> Subject: Re: [edk2-devel] [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
>
> On 2/19/23 23:20, Yuanhao Xie wrote:
>> Update the address of the allocated memory, only keep 4GB limitation for
>> the case that APs still need to be transferred to 32-bit mode before
>> OS.
>>
>> Remove the unused arguments of AsmRelocateApLoopStart, update the
>> stack offset. For the processors other than with SEV-ES enabled, keep
>> APs in 64 bit mode before handoff to OS.
>>
>> Create PageTable for the allocated reserved memory.
>>
>> Tested on the OVMF package use of the support on AMD processors.
>>
>> Cc: Guo Dong <guo.dong@intel.com>
>> Cc: Ray Ni <ray.ni@intel.com>
>> Cc: Sean Rhodes <sean@starlabs.systems>
>> Cc: James Lu <james.lu@intel.com>
>> Cc: Gua Guo <gua.guo@intel.com>
>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>> Cc: Tom Lendacky <thomas.lendacky@amd.com>
>> Cc: Laszlo Ersek <lersek@redhat.com>
>> Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
>
> I only tested the pull request that you pointed me at which didn't have all these patches (basically it was only patch #1), so you can't really put my Tested-by: on this.
>
> You didn't send an updated link, so I didn't test the full series. But I will test this full series. Do you have a link to a tree?
>
> Thanks,
> Tom
>
>> Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
>> Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
>> ---
>> UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf | 6 +++++-
>> UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 28 ++++++++++++++++++----------
>> UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c | 23 +++++++++++++++++++++++
>> UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm | 11 ++++-------
>> UefiCpuPkg/Library/MpInitLib/MpEqu.inc | 22 +++++++++++-----------
>> UefiCpuPkg/Library/MpInitLib/MpLib.h | 17 +++++++++++++----
>> UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 173 ++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------
>> UefiCpuPkg/UefiCpuPkg.dsc | 3 ++-
>> 9 files changed, 186 insertions(+), 179 deletions(-)
>>
>> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
>> b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
>> index cd07de3a3c..4285dd06b4 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
>> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
>> @@ -1,7 +1,7 @@
>> ## @file
>> # MP Initialize Library instance for DXE driver.
>> #
>> -# Copyright (c) 2016 - 2021, Intel Corporation. All rights
>> reserved.<BR>
>> +# Copyright (c) 2016 - 2023, Intel Corporation. All rights
>> +reserved.<BR>
>> # SPDX-License-Identifier: BSD-2-Clause-Patent
>> #
>> ##
>> @@ -24,10 +24,12 @@
>> [Sources.IA32]
>> Ia32/AmdSev.c
>> Ia32/MpFuncs.nasm
>> + Ia32/CreatePageTable.c
>>
>> [Sources.X64]
>> X64/AmdSev.c
>> X64/MpFuncs.nasm
>> + X64/CreatePageTable.c
>>
>> [Sources.common]
>> AmdSev.c
>> @@ -56,6 +58,8 @@
>> PcdLib
>> CcExitLib
>> MicrocodeLib
>> +[LibraryClasses.X64]
>> + CpuPageTableLib
>>
>> [Protocols]
>> gEfiTimerArchProtocolGuid ## SOMETIMES_CONSUMES
>> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
>> b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
>> index 75743faf5f..76953206ca 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
>> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
>> @@ -28,7 +28,7 @@ volatile BOOLEAN mStopCheckAllApsStatus = TRUE;
>> UINTN mReservedTopOfApStack;
>> volatile UINT32 mNumberToFinish = 0;
>> RELOCATE_AP_LOOP_ENTRY mReservedApLoop;
>> -
>> +UINTN mApPageTable;
>>
>> //
>> // Begin wakeup buffer allocation below 0x88000 @@ -379,10 +379,10
>> @@ RelocateApLoop (
>> IN OUT VOID *Buffer
>> )
>> {
>> - CPU_MP_DATA *CpuMpData;
>> - BOOLEAN MwaitSupport;
>> - UINTN ProcessorNumber;
>> - UINTN StackStart;
>> + CPU_MP_DATA *CpuMpData;
>> + BOOLEAN MwaitSupport;
>> + UINTN ProcessorNumber;
>> + UINTN StackStart;
>>
>> MpInitLibWhoAmI (&ProcessorNumber);
>> CpuMpData = GetCpuMpData ();
>> @@ -404,12 +404,9 @@ RelocateApLoop (
>> mReservedApLoop.GenericEntry (
>> MwaitSupport,
>> CpuMpData->ApTargetCState,
>> - CpuMpData->PmCodeSegment,
>> StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
>> (UINTN)&mNumberToFinish,
>> - CpuMpData->Pm16CodeSegment,
>> - CpuMpData->SevEsAPBuffer,
>> - CpuMpData->WakeupBuffer
>> + mApPageTable
>> );
>> }
>>
>> @@ -540,9 +537,11 @@ InitMpGlobalData (
>>
>> AddressMap = &CpuMpData->AddressMap;
>> if (CpuMpData->UseSevEsAPMethod) {
>> + Address = BASE_4GB - 1;
>> ApLoopFunc = AddressMap->RelocateApLoopFuncAddressAmdSev;
>> ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
>> } else {
>> + Address = MAX_ADDRESS;
>> ApLoopFunc = AddressMap->RelocateApLoopFuncAddress;
>> ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
>> }
>> @@ -564,7 +563,6 @@ InitMpGlobalData (
>> // +------------+ (low address )
>> //
>>
>> - Address = BASE_4GB - 1;
>> StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
>> FuncPages = EFI_SIZE_TO_PAGES (ALIGN_VALUE (ApLoopFuncSize,
>> EFI_PAGE_SIZE));
>>
>> @@ -597,6 +595,16 @@ InitMpGlobalData (
>> ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
>> mReservedApLoop.Data = (VOID *)(UINTN)Address;
>> CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
>> + if (!CpuMpData->UseSevEsAPMethod) {
>> + //
>> + // non-Sev Processor
>> + //
>> + mApPageTable = CreatePageTable (
>> + (UINTN)Address,
>> + EFI_PAGES_TO_SIZE (StackPages+FuncPages)
>> + );
>> + }
>> +
>> Status = gBS->CreateEvent (
>> EVT_TIMER | EVT_NOTIFY_SIGNAL,
>> TPL_NOTIFY,
>> diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
>> b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
>> new file mode 100644
>> index 0000000000..bec9b247c0
>> --- /dev/null
>> +++ b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
>> @@ -0,0 +1,23 @@
>> +/** @file
>> + Function to create page talbe.
>> + Only create page table for x64, and leave the CreatePageTable empty for Ia32.
>> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
>> + SPDX-License-Identifier: BSD-2-Clause-Patent **/
>> +
>> +#include <Base.h>
>> +
>> +/**
>> + Only create page table for x64, and leave the CreatePageTable empty for Ia32.
>> + @param[in] LinearAddress The start of the linear address range.
>> + @param[in] Length The length of the linear address range.
>> + @return The page table to be created.
>> +**/
>> +UINTN
>> +CreatePageTable (
>> + IN UINTN Address,
>> + IN UINTN Length
>> + )
>> +{
>> + return 0;
>> +}
>> diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
>> b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
>> index bfcdbd31c1..c65a825a23 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
>> +++ b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
>> @@ -1,5 +1,5 @@
>>
>> ;---------------------------------------------------------------------
>> --------- ; -; Copyright (c) 2015 - 2022, Intel Corporation. All
>> rights reserved.<BR>
>> +; Copyright (c) 2015 - 2023, Intel Corporation. All rights
>> +reserved.<BR>
>> ; SPDX-License-Identifier: BSD-2-Clause-Patent
>> ;
>> ; Module Name:
>> @@ -219,20 +219,17 @@ SwitchToRealProcEnd:
>> RendezvousFunnelProcEnd:
>>
>>
>> ;---------------------------------------------------------------------
>> ---------------- -; AsmRelocateApLoop (MwaitSupport, ApTargetCState,
>> PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment,
>> SevEsAPJumpTable, WakeupBuffer); -; -; The last three parameters
>> (Pm16CodeSegment, SevEsAPJumpTable and WakeupBuffer) are -; specific
>> to SEV-ES support and are not applicable on IA32.
>> +; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack,
>> +CountTofinish, Cr3);
>> ;-------------------------------------------------------------------------------------
>> AsmRelocateApLoopStart:
>> mov eax, esp
>> - mov esp, [eax + 16] ; TopOfApStack
>> + mov esp, [eax + 12] ; TopOfApStack
>> push dword [eax] ; push return address for stack trace
>> push ebp
>> mov ebp, esp
>> mov ebx, [eax + 8] ; ApTargetCState
>> mov ecx, [eax + 4] ; MwaitSupport
>> - mov eax, [eax + 20] ; CountTofinish
>> + mov eax, [eax + 16] ; CountTofinish
>> lock dec dword [eax] ; (*CountTofinish)--
>> cmp cl, 1 ; Check mwait-monitor support
>> jnz HltLoop
>> diff --git a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
>> b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
>> index 1472ef2024..6730f2f411 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
>> +++ b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
>> @@ -21,17 +21,17 @@ CPU_SWITCH_STATE_LOADED equ 2
>> ; Equivalent NASM structure of MP_ASSEMBLY_ADDRESS_MAP
>> ;
>> struc MP_ASSEMBLY_ADDRESS_MAP
>> - .RendezvousFunnelAddress CTYPE_UINTN 1
>> - .ModeEntryOffset CTYPE_UINTN 1
>> - .RendezvousFunnelSize CTYPE_UINTN 1
>> - .RelocateApLoopFuncAddress CTYPE_UINTN 1
>> - .RelocateApLoopFuncSize CTYPE_UINTN 1
>> - .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
>> - .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
>> - .ModeTransitionOffset CTYPE_UINTN 1
>> - .SwitchToRealNoNxOffset CTYPE_UINTN 1
>> - .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
>> - .SwitchToRealPM16ModeSize CTYPE_UINTN 1
>> + .RendezvousFunnelAddress CTYPE_UINTN 1
>> + .ModeEntryOffset CTYPE_UINTN 1
>> + .RendezvousFunnelSize CTYPE_UINTN 1
>> + .RelocateApLoopFuncAddress CTYPE_UINTN 1
>> + .RelocateApLoopFuncSize CTYPE_UINTN 1
>> + .RelocateApLoopFuncAddressAmdSev CTYPE_UINTN 1
>> + .RelocateApLoopFuncSizeAmdSev CTYPE_UINTN 1
>> + .ModeTransitionOffset CTYPE_UINTN 1
>> + .SwitchToRealNoNxOffset CTYPE_UINTN 1
>> + .SwitchToRealPM16ModeOffset CTYPE_UINTN 1
>> + .SwitchToRealPM16ModeSize CTYPE_UINTN 1
>> endstruc
>>
>> ;
>> diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h
>> b/UefiCpuPkg/Library/MpInitLib/MpLib.h
>> index 772a828045..cc77843bea 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
>> +++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
>> @@ -368,12 +368,9 @@ typedef
>> (EFIAPI *ASM_RELOCATE_AP_LOOP)(
>> IN BOOLEAN MwaitSupport,
>> IN UINTN ApTargetCState,
>> - IN UINTN PmCodeSegment,
>> IN UINTN TopOfApStack,
>> IN UINTN NumberToFinish,
>> - IN UINTN Pm16CodeSegment,
>> - IN UINTN SevEsAPJumpTable,
>> - IN UINTN WakeupBuffer
>> + IN UINTN Cr3
>> );
>>
>> /**
>> @@ -498,6 +495,18 @@ GetSevEsAPMemory (
>> VOID
>> );
>>
>> +/**
>> + Create 1:1 mapping page table in reserved memory to map the specified address range.
>> + @param[in] LinearAddress The start of the linear address range.
>> + @param[in] Length The length of the linear address range.
>> + @return The page table to be created.
>> +**/
>> +UINTN
>> +CreatePageTable (
>> + IN UINTN Address,
>> + IN UINTN Length
>> + );
>> +
>> /**
>> This function will be called by BSP to wakeup AP.
>>
>> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
>> b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
>> new file mode 100644
>> index 0000000000..7cf91ed9c4
>> --- /dev/null
>> +++ b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
>> @@ -0,0 +1,82 @@
>> +/** @file
>> + Function to create page talbe.
>> + Only create page table for x64, and leave the CreatePageTable empty for Ia32.
>> + Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
>> + SPDX-License-Identifier: BSD-2-Clause-Patent **/ #include
>> +<Library/CpuPageTableLib.h> #include <Library/MemoryAllocationLib.h>
>> +#include <Base.h> #include <Library/BaseMemoryLib.h> #include
>> +<Library/DebugLib.h> #include <Library/BaseLib.h>
>> +
>> +/**
>> + Create 1:1 mapping page table in reserved memory to map the specified address range.
>> + @param[in] LinearAddress The start of the linear address range.
>> + @param[in] Length The length of the linear address range.
>> + @return The page table to be created.
>> +**/
>> +UINTN
>> +CreatePageTable (
>> + IN UINTN Address,
>> + IN UINTN Length
>> + )
>> +{
>> + EFI_STATUS Status;
>> + VOID *PageTableBuffer;
>> + UINTN PageTableBufferSize;
>> + UINTN PageTable;
>> + PAGING_MODE PagingMode;
>> + IA32_CR4 Cr4;
>> +
>> + IA32_MAP_ATTRIBUTE MapAttribute;
>> + IA32_MAP_ATTRIBUTE MapMask;
>> +
>> + MapAttribute.Uint64 = Address;
>> + MapAttribute.Bits.Present = 1;
>> + MapAttribute.Bits.ReadWrite = 1;
>> +
>> + MapMask.Bits.PageTableBaseAddress = 1;
>> + MapMask.Bits.Present = 1;
>> + MapMask.Bits.ReadWrite = 1;
>> +
>> + PageTable = 0;
>> + PageTableBufferSize = 0;
>> +
>> + Cr4.UintN = AsmReadCr4 ();
>> +
>> + if (Cr4.Bits.LA57 == 1) {
>> + PagingMode = Paging5Level;
>> + } else {
>> + PagingMode = Paging4Level;
>> + }
>> +
>> + Status = PageTableMap (
>> + &PageTable,
>> + PagingMode,
>> + NULL,
>> + &PageTableBufferSize,
>> + Address,
>> + Length,
>> + &MapAttribute,
>> + &MapMask
>> + );
>> + ASSERT (Status == EFI_BUFFER_TOO_SMALL); DEBUG ((DEBUG_INFO, "AP
>> + Page Table Buffer Size = %x\n", PageTableBufferSize));
>> +
>> + PageTableBuffer = AllocateReservedPages (EFI_SIZE_TO_PAGES
>> +(PageTableBufferSize));
>> + ASSERT (PageTableBuffer != NULL);
>> + Status = PageTableMap (
>> + &PageTable,
>> + PagingMode,
>> + PageTableBuffer,
>> + &PageTableBufferSize,
>> + Address,
>> + Length,
>> + &MapAttribute,
>> + &MapMask
>> + );
>> + ASSERT_EFI_ERROR (Status);
>> + return PageTable;
>> +}
>> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
>> b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
>> index d36f8ba06d..2bce04d99c 100644
>> --- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
>> +++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
>> @@ -279,172 +279,55 @@ CProcedureInvoke:
>> RendezvousFunnelProcEnd:
>>
>>
>> ;---------------------------------------------------------------------
>> ---------------- -; AsmRelocateApLoop (MwaitSupport, ApTargetCState,
>> PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment,
>> SevEsAPJumpTable, WakeupBuffer);
>> +; AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack,
>> +CountTofinish, Cr3); ; This function is called during the
>> +finalizaiton of Mp initialization before booting ; to OS, and aim to put Aps either in Mwait or HLT.
>>
>> ;---------------------------------------------------------------------
>> ----------------
>> -AsmRelocateApLoopStart:
>> -BITS 64
>> - cmp qword [rsp + 56], 0 ; SevEsAPJumpTable
>> - je NoSevEs
>> -
>> - ;
>> - ; Perform some SEV-ES related setup before leaving 64-bit mode
>> - ;
>> - push rcx
>> - push rdx
>> -
>> - ;
>> - ; Get the RDX reset value using CPUID
>> - ;
>> - mov rax, 1
>> - cpuid
>> - mov rsi, rax ; Save off the reset value for RDX
>> -
>> - ;
>> - ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
>> - ; - Must be done while in 64-bit long mode so that writes to
>> - ; the GHCB memory will be unencrypted.
>> - ; - No NAE events can be generated once this is set otherwise
>> - ; the AP_RESET_HOLD SW_EXITCODE will be overwritten.
>> - ;
>> - mov rcx, 0xc0010130
>> - rdmsr ; Retrieve current GHCB address
>> - shl rdx, 32
>> - or rdx, rax
>> -
>> - mov rdi, rdx
>> - xor rax, rax
>> - mov rcx, 0x800
>> - shr rcx, 3
>> - rep stosq ; Clear the GHCB
>> -
>> - mov rax, 0x80000004 ; VMGEXIT AP_RESET_HOLD
>> - mov [rdx + 0x390], rax
>> - mov rax, 114 ; Set SwExitCode valid bit
>> - bts [rdx + 0x3f0], rax
>> - inc rax ; Set SwExitInfo1 valid bit
>> - bts [rdx + 0x3f0], rax
>> - inc rax ; Set SwExitInfo2 valid bit
>> - bts [rdx + 0x3f0], rax
>> +; +----------------+
>> +; | Cr3 | rsp+40
>> +; +----------------+
>> +; | CountTofinish | r9
>> +; +----------------+
>> +; | TopOfApStack | r8
>> +; +----------------+
>> +; | ApTargetCState | rdx
>> +; +----------------+
>> +; | MwaitSupport | rcx
>> +; +----------------+
>> +; | the return |
>> +; +----------------+ low address
>>
>> - pop rdx
>> - pop rcx
>> -
>> -NoSevEs:
>> - cli ; Disable interrupt before switching to 32-bit mode
>> - mov rax, [rsp + 40] ; CountTofinish
>> +AsmRelocateApLoopStart:
>> + mov rax, r9 ; CountTofinish
>> lock dec dword [rax] ; (*CountTofinish)--
>>
>> - mov r10, [rsp + 48] ; Pm16CodeSegment
>> - mov rax, [rsp + 56] ; SevEsAPJumpTable
>> - mov rbx, [rsp + 64] ; WakeupBuffer
>> - mov rsp, r9 ; TopOfApStack
>> -
>> - push rax ; Save SevEsAPJumpTable
>> - push rbx ; Save WakeupBuffer
>> - push r10 ; Save Pm16CodeSegment
>> - push rcx ; Save MwaitSupport
>> - push rdx ; Save ApTargetCState
>> -
>> - lea rax, [PmEntry] ; rax <- The start address of transition code
>> -
>> - push r8
>> - push rax
>> -
>> - ;
>> - ; Clear R8 - R15, for reset, before going into 32-bit mode
>> - ;
>> - xor r8, r8
>> - xor r9, r9
>> - xor r10, r10
>> - xor r11, r11
>> - xor r12, r12
>> - xor r13, r13
>> - xor r14, r14
>> - xor r15, r15
>> -
>> - ;
>> - ; Far return into 32-bit mode
>> - ;
>> - retfq
>> -
>> -BITS 32
>> -PmEntry:
>> - mov eax, cr0
>> - btr eax, 31 ; Clear CR0.PG
>> - mov cr0, eax ; Disable paging and caches
>> -
>> - mov ecx, 0xc0000080
>> - rdmsr
>> - and ah, ~ 1 ; Clear LME
>> - wrmsr
>> - mov eax, cr4
>> - and al, ~ (1 << 5) ; Clear PAE
>> - mov cr4, eax
>> -
>> - pop edx
>> - add esp, 4
>> - pop ecx,
>> - add esp, 4
>> + mov rax, [rsp + 40] ; Cr3
>> + ; Do not push on old stack, since old stack is not mapped
>> + ; in the page table pointed by cr3
>> + mov cr3, rax
>> + mov rsp, r8 ; TopOfApStack
>>
>> MwaitCheck:
>> cmp cl, 1 ; Check mwait-monitor support
>> jnz HltLoop
>> - mov ebx, edx ; Save C-State to ebx
>> + mov rbx, rdx ; Save C-State to ebx
>> +
>> MwaitLoop:
>> cli
>> - mov eax, esp ; Set Monitor Address
>> + mov rax, rsp ; Set Monitor Address
>> xor ecx, ecx ; ecx = 0
>> xor edx, edx ; edx = 0
>> monitor
>> - mov eax, ebx ; Mwait Cx, Target C-State per eax[7:4]
>> + mov rax, rbx ; Mwait Cx, Target C-State per eax[7:4]
>> shl eax, 4
>> mwait
>> jmp MwaitLoop
>>
>> HltLoop:
>> - pop edx ; PM16CodeSegment
>> - add esp, 4
>> - pop ebx ; WakeupBuffer
>> - add esp, 4
>> - pop eax ; SevEsAPJumpTable
>> - add esp, 4
>> - cmp eax, 0 ; Check for SEV-ES
>> - je DoHlt
>> -
>> - cli
>> - ;
>> - ; SEV-ES is enabled, use VMGEXIT (GHCB information already
>> - ; set by caller)
>> - ;
>> -BITS 64
>> - rep vmmcall
>> -BITS 32
>> -
>> - ;
>> - ; Back from VMGEXIT AP_HLT_LOOP
>> - ; Push the FLAGS/CS/IP values to use
>> - ;
>> - push word 0x0002 ; EFLAGS
>> - xor ecx, ecx
>> - mov cx, [eax + 2] ; CS
>> - push cx
>> - mov cx, [eax] ; IP
>> - push cx
>> - push word 0x0000 ; For alignment, will be discarded
>> -
>> - push edx
>> - push ebx
>> -
>> - mov edx, esi ; Restore RDX reset value
>> -
>> - retf
>> -
>> -DoHlt:
>> cli
>> hlt
>> - jmp DoHlt
>> + jmp HltLoop
>>
>> -BITS 64
>> AsmRelocateApLoopEnd:
>>
>>
>> ;-------------------------------------------------------------------------------------
>>
>> diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
>> index f9a46089d2..062a43d7a7 100644
>> --- a/UefiCpuPkg/UefiCpuPkg.dsc
>> +++ b/UefiCpuPkg/UefiCpuPkg.dsc
>> @@ -1,7 +1,7 @@
>> ## @file
>> # UefiCpuPkg Package
>> #
>> -# Copyright (c) 2007 - 2022, Intel Corporation. All rights reserved.<BR>
>> +# Copyright (c) 2007 - 2023, Intel Corporation. All rights reserved.<BR>
>> #
>> # SPDX-License-Identifier: BSD-2-Clause-Patent
>> #
>> @@ -94,6 +94,7 @@
>> MemoryAllocationLib|MdePkg/Library/UefiMemoryAllocationLib/UefiMemoryAllocationLib.inf
>> HobLib|MdePkg/Library/DxeHobLib/DxeHobLib.inf
>>
>> CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
>> + CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
>> MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
>> RegisterCpuFeaturesLib|UefiCpuPkg/Library/RegisterCpuFeaturesLib/DxeRegisterCpuFeaturesLib.inf
>>
>> CpuCacheInfoLib|UefiCpuPkg/Library/CpuCacheInfoLib/DxeCpuCacheInfoLib.inf
>
>
>
>
>
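A side note on the new layout quoted above: reading the stack diagram together with the MS x64 calling convention (first four arguments in rcx/rdx/r8/r9, fifth argument at [rsp + 40]), the generic AsmRelocateApLoop appears to be invoked with roughly the prototype sketched below. The typedef name and exact types are only my reading of the diagram, not copied from the patch:

    //
    // Hypothetical prototype inferred from the stack layout comment above;
    // see MpLib.h in the actual patch for the real declaration.
    //
    typedef
    VOID
    (EFIAPI *ASM_RELOCATE_AP_LOOP_GENERIC)(
      IN BOOLEAN  MwaitSupport,     // rcx
      IN UINTN    ApTargetCState,   // rdx
      IN UINTN    TopOfApStack,     // r8
      IN UINTN    CountTofinish,    // r9, address of the finish counter
      IN UINTN    Cr3               // [rsp + 40], page table for the APs
      );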
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES.
2023-02-20 5:20 ` [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES Yuanhao Xie
@ 2023-02-21 9:22 ` Gerd Hoffmann
2023-02-23 5:54 ` Yuanhao Xie
0 siblings, 1 reply; 14+ messages in thread
From: Gerd Hoffmann @ 2023-02-21 9:22 UTC (permalink / raw)
To: Yuanhao Xie
Cc: devel, Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo,
Tom Lendacky, Laszlo Ersek
Hi,
> if (CpuMpData->UseSevEsAPMethod) {
> - StackStart = CpuMpData->SevEsAPResetStackStart;
> + StackStart = CpuMpData->SevEsAPResetStackStart;
> + AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
> + AsmRelocateApLoopFuncAmdSev (
> + MwaitSupport,
> + CpuMpData->ApTargetCState,
> + CpuMpData->PmCodeSegment,
> + StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
> + (UINTN)&mNumberToFinish,
> + CpuMpData->Pm16CodeSegment,
> + CpuMpData->SevEsAPBuffer,
> + CpuMpData->WakeupBuffer
> + );
Good. Thanks for updating it.
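For reference, those eight positional arguments follow the order shown in the AmdSev.nasm comment quoted further down ("AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer)"). A matching function-pointer typedef would look roughly like the sketch below; I am quoting the shape from memory rather than from MpLib.h, so take the exact types as illustrative:

    typedef
    VOID
    (EFIAPI *ASM_RELOCATE_AP_LOOP)(
      IN BOOLEAN  MwaitSupport,
      IN UINTN    ApTargetCState,
      IN UINTN    PmCodeSegment,
      IN UINTN    TopOfApStack,
      IN UINTN    NumberToFinish,     // address of the counter to decrement
      IN UINTN    Pm16CodeSegment,
      IN UINTN    SevEsAPJumpTable,
      IN UINTN    WakeupBuffer
      );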
> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> index 7c2469f9c5..6b48913306 100644
> --- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> @@ -346,3 +346,172 @@ PM16Mode:
> iret
>
> SwitchToRealProcEnd:
> +;-------------------------------------------------------------------------------------
> +; AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> +;-------------------------------------------------------------------------------------
> +
> +AsmRelocateApLoopStartAmdSev:
> +BITS 64
Hmm, so here you are adding a renamed copy of the AP loop.
Then, in patch #5, you are rewriting the code at the old location.
I'd suggest to move the code instead of copying it, i.e. have one patch
moving (and renaming) the AmdSev AP loop. Have another patch adding the
new code for the generic AP loop. That should result in patches which
are easier to read, especially the new generic AP loop code is not mixed
with AmdSev code removal.
take care,
Gerd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [edk2-devel] [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
2023-02-20 5:20 ` [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
@ 2023-02-21 9:26 ` Gerd Hoffmann
2023-02-23 5:53 ` Yuanhao Xie
0 siblings, 1 reply; 14+ messages in thread
From: Gerd Hoffmann @ 2023-02-21 9:26 UTC (permalink / raw)
To: devel, yuanhao.xie; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo
Hi,
> + Address = BASE_4GB - 1;
Why do you set Address here ...
> + Status = gBS->AllocatePages (
> + AllocateMaxAddress,
> + EfiReservedMemoryType,
> + StackPages+FuncPages,
> + &Address
> + );
... when it is overridden here anyway?
Otherwise the patch looks good to me.
take care,
Gerd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [edk2-devel] [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
2023-02-21 9:26 ` [edk2-devel] " Gerd Hoffmann
@ 2023-02-23 5:53 ` Yuanhao Xie
2023-02-23 13:32 ` Ard Biesheuvel
0 siblings, 1 reply; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-23 5:53 UTC (permalink / raw)
To: Gerd Hoffmann, devel@edk2.groups.io
Cc: Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James, Guo, Gua
Hi,
Set "Address = BASE_4GB - 1" first is to ensure the maximum address value is 4GB, since APs will be switched to 32 bit before OS, I mean in the original implementation.
By the way, in patch 5, only SEV-ES processor keep this limitation, since for the generic processors they keep in 64 bit before handing off the process to OS, and for those 32-bit processors, they don't need this limitation.
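Roughly, the intent in patch 5 is something like the sketch below (illustrative only, not the exact code of the patch):

    if (CpuMpData->UseSevEsAPMethod) {
      //
      // SEV-ES APs still drop to 32-bit mode before OS handoff,
      // so keep the reserved buffer below 4GB.
      //
      Address = BASE_4GB - 1;
    } else {
      //
      // Generic APs stay in 64-bit mode, so no 4GB upper bound is needed.
      //
      Address = MAX_ADDRESS;
    }
    Status = gBS->AllocatePages (
               AllocateMaxAddress,
               EfiReservedMemoryType,
               StackPages + FuncPages,
               &Address
               );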
Regards,
Yuanhao
-----Original Message-----
From: Gerd Hoffmann <kraxel@redhat.com>
Sent: Tuesday, February 21, 2023 5:26 PM
To: devel@edk2.groups.io; Xie, Yuanhao <yuanhao.xie@intel.com>
Cc: Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>
Subject: Re: [edk2-devel] [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
Hi,
> + Address = BASE_4GB - 1;
Why do you set Address here ...
> + Status = gBS->AllocatePages (
> + AllocateMaxAddress,
> + EfiReservedMemoryType,
> + StackPages+FuncPages,
> + &Address
> + );
... when it is overridden here anyway?
Otherwise the patch looks good to me.
take care,
Gerd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES.
2023-02-21 9:22 ` Gerd Hoffmann
@ 2023-02-23 5:54 ` Yuanhao Xie
0 siblings, 0 replies; 14+ messages in thread
From: Yuanhao Xie @ 2023-02-23 5:54 UTC (permalink / raw)
To: Gerd Hoffmann
Cc: devel@edk2.groups.io, Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James,
Guo, Gua, Tom Lendacky, Laszlo Ersek
> I'd suggest to move the code instead of copying it, i.e. have one patch moving (and renaming) the AmdSev AP loop. Have another patch adding the new code for the generic AP loop. That should result in patches which are easier to read, especially the new generic AP loop code is not mixed with AmdSev code removal.
Agree. I will do so.
Thanks
Yuanhao
-----Original Message-----
From: Gerd Hoffmann <kraxel@redhat.com>
Sent: Tuesday, February 21, 2023 5:23 PM
To: Xie, Yuanhao <yuanhao.xie@intel.com>
Cc: devel@edk2.groups.io; Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>; Tom Lendacky <thomas.lendacky@amd.com>; Laszlo Ersek <lersek@redhat.com>
Subject: Re: [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES.
Hi,
> if (CpuMpData->UseSevEsAPMethod) {
> - StackStart = CpuMpData->SevEsAPResetStackStart;
> + StackStart = CpuMpData->SevEsAPResetStackStart;
> + AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
> + AsmRelocateApLoopFuncAmdSev (
> + MwaitSupport,
> + CpuMpData->ApTargetCState,
> + CpuMpData->PmCodeSegment,
> + StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
> + (UINTN)&mNumberToFinish,
> + CpuMpData->Pm16CodeSegment,
> + CpuMpData->SevEsAPBuffer,
> + CpuMpData->WakeupBuffer
> + );
Good. Thanks for updating it.
> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> index 7c2469f9c5..6b48913306 100644
> --- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> @@ -346,3 +346,172 @@ PM16Mode:
> iret
>
> SwitchToRealProcEnd:
> +;-------------------------------------------------------------------------------------
> +; AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> +;-------------------------------------------------------------------------------------
> +
> +AsmRelocateApLoopStartAmdSev:
> +BITS 64
Hmm, so here you are adding a renamed copy of the AP loop.
Then, in patch #5, you are rewriting the code at the old location.
I'd suggest to move the code instead of copying it, i.e. have one patch moving (and renaming) the AmdSev AP loop. Have another patch adding the new code for the generic AP loop. That should result in patches which are easier to read, especially the new generic AP loop code is not mixed with AmdSev code removal.
take care,
Gerd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [edk2-devel] [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
2023-02-23 5:53 ` Yuanhao Xie
@ 2023-02-23 13:32 ` Ard Biesheuvel
0 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2023-02-23 13:32 UTC (permalink / raw)
To: devel, yuanhao.xie
Cc: Gerd Hoffmann, Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James,
Guo, Gua
On Thu, 23 Feb 2023 at 06:53, Yuanhao Xie <yuanhao.xie@intel.com> wrote:
>
> Hi,
>
> Set "Address = BASE_4GB - 1" first is to ensure the maximum address value is 4GB, since APs will be switched to 32 bit before OS, I mean in the original implementation.
> By the way, in patch 5, only SEV-ES processor keep this limitation, since for the generic processors they keep in 64 bit before handing off the process to OS, and for those 32-bit processors, they don't need this limitation.
>
> Regards,
> Yuanhao
> -----Original Message-----
> From: Gerd Hoffmann <kraxel@redhat.com>
> Sent: Tuesday, February 21, 2023 5:26 PM
> To: devel@edk2.groups.io; Xie, Yuanhao <yuanhao.xie@intel.com>
> Cc: Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>
> Subject: Re: [edk2-devel] [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up.
>
> Hi,
>
> > + Address = BASE_4GB - 1;
>
> Why do you set Address here ...
>
> > + Status = gBS->AllocatePages (
> > + AllocateMaxAddress,
> > + EfiReservedMemoryType,
> > + StackPages+FuncPages,
> > + &Address
> > + );
>
> ... when it is overridden here anyway?
>
AllocateMaxAddress uses the provided address as the upper bound for
the allocation.
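In other words, Address is in/out: on input it is the highest address the firmware may return, on output it receives the base of the allocated range. Using the values from the patch (illustrative fragment, reusing the patch's StackPages/FuncPages locals):

    EFI_PHYSICAL_ADDRESS  Address;
    EFI_STATUS            Status;

    Address = BASE_4GB - 1;            // upper bound for the allocation
    Status  = gBS->AllocatePages (
                AllocateMaxAddress,    // allocate at or below *Address
                EfiReservedMemoryType,
                StackPages + FuncPages,
                &Address               // on success: base of the new range
                );

So the initial assignment is not overridden, it is consumed as the limit.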
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2023-02-23 13:33 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-20 5:20 [PATCH 0/5] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 1/5] UefiCpuPkg: Duplicate RelocateApLoop for the processors with SEV-ES Yuanhao Xie
2023-02-21 9:22 ` Gerd Hoffmann
2023-02-23 5:54 ` Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 2/5] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
2023-02-21 9:26 ` [edk2-devel] " Gerd Hoffmann
2023-02-23 5:53 ` Yuanhao Xie
2023-02-23 13:32 ` Ard Biesheuvel
2023-02-20 5:20 ` [Patch V2 3/5] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 4/5] UefiPayloadPkg: " Yuanhao Xie
2023-02-20 5:20 ` [Patch V2 5/5] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-20 14:11 ` Lendacky, Thomas
2023-02-20 17:43 ` [edk2-devel] " Yuanhao Xie
2023-02-20 18:05 ` Lendacky, Thomas