public inbox for devel@edk2.groups.io
 help / color / mirror / Atom feed
* [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS.
@ 2023-02-23 18:05 Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 1/6] UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm Yuanhao Xie
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel

The purpose of this patch series is to put the APs in 64-bit mode
before handing off the boot process to the OS. To do this, the
relocated AP loop is duplicated for processors with SEV-ES enabled,
contiguous memory is allocated for the AP loop code and stacks, and
page tables are created so that the APs can stay in 64-bit mode.

Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4234
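
For orientation, the selection logic ends up looking roughly like the
sketch below (condensed from the DxeMpLib.c changes in patches 2/6 and
3/6; declarations and error handling are omitted, and the comments
describe the end state after patch 6/6):

  AddressMap = &CpuMpData->AddressMap;
  if (CpuMpData->UseSevEsAPMethod) {
    //
    // SEV-ES: keep the original loop, which drops the AP back to
    // 32-bit mode and uses the AP_RESET_HOLD protocol.
    //
    ApLoopFunc     = AddressMap->RelocateApLoopFuncAddressAmdSev;
    ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
  } else {
    //
    // All other processors: use the new loop that keeps the AP in
    // 64-bit mode on page tables reserved for this purpose.
    //
    ApLoopFunc     = AddressMap->RelocateApLoopFuncAddress;
    ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
  }
  //
  // The selected loop is copied into reserved memory and entered on
  // every AP through the matching member (AmdSevEntry or GenericEntry)
  // of the RELOCATE_AP_LOOP_ENTRY union at ExitBootServices time.
  //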

Yuanhao Xie (6):
  UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm.
  UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
  UefiCpuPkg: Contiguous memory allocation and code clean-up.
  OvmfPkg: Add CpuPageTableLib required by MpInitLib.
  UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
  UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.

 OvmfPkg/AmdSev/AmdSevX64.dsc                        |   3 ++-
 OvmfPkg/CloudHv/CloudHvX64.dsc                      |   3 ++-
 OvmfPkg/IntelTdx/IntelTdxX64.dsc                    |   4 +++-
 OvmfPkg/Microvm/MicrovmX64.dsc                      |   3 ++-
 OvmfPkg/OvmfPkgIa32X64.dsc                          |   3 ++-
 OvmfPkg/OvmfPkgX64.dsc                              |   4 +++-
 OvmfPkg/OvmfXen.dsc                                 |   3 ++-
 UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf       |   6 +++++-
 UefiCpuPkg/Library/MpInitLib/DxeMpLib.c             | 161 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------
 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c |  23 +++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm      |  11 ++++-------
 UefiCpuPkg/Library/MpInitLib/MpEqu.inc              |  22 ++++++++++++----------
 UefiCpuPkg/Library/MpInitLib/MpLib.h                |  46 ++++++++++++++++++++++++++++++++++++++++++++--
 UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm        | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c  |  82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm       | 178 ++++++++++++++++++++++++++++++++--------------------------------------------------------------------------------------------------------------------------------------------------
 UefiCpuPkg/UefiCpuPkg.dsc                           |   3 ++-
 UefiPayloadPkg/UefiPayloadPkg.dsc                   |   3 ++-
 18 files changed, 486 insertions(+), 241 deletions(-)
 create mode 100644 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
 create mode 100644 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c

-- 
2.36.1.windows.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Patch V3 1/6] UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd Yuanhao Xie
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

AMD processors with SEV-ES enabled keep following the original logic,
while in all other cases the APs will be put in 64-bit mode before the
boot process is handed off to the OS. As a first step toward that,
this patch moves AsmRelocateApLoop into AmdSev.nasm.

Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm  | 168 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 170 +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 2 files changed, 169 insertions(+), 169 deletions(-)

diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
index 7c2469f9c5..c1e8a045a4 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
@@ -346,3 +346,171 @@ PM16Mode:
     iret
 
 SwitchToRealProcEnd:
+;-------------------------------------------------------------------------------------
+;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+;-------------------------------------------------------------------------------------
+AsmRelocateApLoopStart:
+BITS 64
+    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
+    je         NoSevEs
+
+    ;
+    ; Perform some SEV-ES related setup before leaving 64-bit mode
+    ;
+    push       rcx
+    push       rdx
+
+    ;
+    ; Get the RDX reset value using CPUID
+    ;
+    mov        rax, 1
+    cpuid
+    mov        rsi, rax          ; Save off the reset value for RDX
+
+    ;
+    ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
+    ;   - Must be done while in 64-bit long mode so that writes to
+    ;     the GHCB memory will be unencrypted.
+    ;   - No NAE events can be generated once this is set otherwise
+    ;     the AP_RESET_HOLD SW_EXITCODE will be overwritten.
+    ;
+    mov        rcx, 0xc0010130
+    rdmsr                        ; Retrieve current GHCB address
+    shl        rdx, 32
+    or         rdx, rax
+
+    mov        rdi, rdx
+    xor        rax, rax
+    mov        rcx, 0x800
+    shr        rcx, 3
+    rep stosq                    ; Clear the GHCB
+
+    mov        rax, 0x80000004   ; VMGEXIT AP_RESET_HOLD
+    mov        [rdx + 0x390], rax
+    mov        rax, 114          ; Set SwExitCode valid bit
+    bts        [rdx + 0x3f0], rax
+    inc        rax               ; Set SwExitInfo1 valid bit
+    bts        [rdx + 0x3f0], rax
+    inc        rax               ; Set SwExitInfo2 valid bit
+    bts        [rdx + 0x3f0], rax
+
+    pop        rdx
+    pop        rcx
+
+NoSevEs:
+    cli                          ; Disable interrupt before switching to 32-bit mode
+    mov        rax, [rsp + 40]   ; CountTofinish
+    lock dec   dword [rax]       ; (*CountTofinish)--
+
+    mov        r10, [rsp + 48]   ; Pm16CodeSegment
+    mov        rax, [rsp + 56]   ; SevEsAPJumpTable
+    mov        rbx, [rsp + 64]   ; WakeupBuffer
+    mov        rsp, r9           ; TopOfApStack
+
+    push       rax               ; Save SevEsAPJumpTable
+    push       rbx               ; Save WakeupBuffer
+    push       r10               ; Save Pm16CodeSegment
+    push       rcx               ; Save MwaitSupport
+    push       rdx               ; Save ApTargetCState
+
+    lea        rax, [PmEntry]    ; rax <- The start address of transition code
+
+    push       r8
+    push       rax
+
+    ;
+    ; Clear R8 - R15, for reset, before going into 32-bit mode
+    ;
+    xor        r8, r8
+    xor        r9, r9
+    xor        r10, r10
+    xor        r11, r11
+    xor        r12, r12
+    xor        r13, r13
+    xor        r14, r14
+    xor        r15, r15
+
+    ;
+    ; Far return into 32-bit mode
+    ;
+    retfq
+
+BITS 32
+PmEntry:
+    mov        eax, cr0
+    btr        eax, 31           ; Clear CR0.PG
+    mov        cr0, eax          ; Disable paging and caches
+
+    mov        ecx, 0xc0000080
+    rdmsr
+    and        ah, ~ 1           ; Clear LME
+    wrmsr
+    mov        eax, cr4
+    and        al, ~ (1 << 5)    ; Clear PAE
+    mov        cr4, eax
+
+    pop        edx
+    add        esp, 4
+    pop        ecx,
+    add        esp, 4
+
+MwaitCheck:
+    cmp        cl, 1              ; Check mwait-monitor support
+    jnz        HltLoop
+    mov        ebx, edx           ; Save C-State to ebx
+MwaitLoop:
+    cli
+    mov        eax, esp           ; Set Monitor Address
+    xor        ecx, ecx           ; ecx = 0
+    xor        edx, edx           ; edx = 0
+    monitor
+    mov        eax, ebx           ; Mwait Cx, Target C-State per eax[7:4]
+    shl        eax, 4
+    mwait
+    jmp        MwaitLoop
+
+HltLoop:
+    pop        edx                ; PM16CodeSegment
+    add        esp, 4
+    pop        ebx                ; WakeupBuffer
+    add        esp, 4
+    pop        eax                ; SevEsAPJumpTable
+    add        esp, 4
+    cmp        eax, 0             ; Check for SEV-ES
+    je         DoHlt
+
+    cli
+    ;
+    ; SEV-ES is enabled, use VMGEXIT (GHCB information already
+    ; set by caller)
+    ;
+BITS 64
+    rep        vmmcall
+BITS 32
+
+    ;
+    ; Back from VMGEXIT AP_HLT_LOOP
+    ;   Push the FLAGS/CS/IP values to use
+    ;
+    push       word 0x0002        ; EFLAGS
+    xor        ecx, ecx
+    mov        cx, [eax + 2]      ; CS
+    push       cx
+    mov        cx, [eax]          ; IP
+    push       cx
+    push       word 0x0000        ; For alignment, will be discarded
+
+    push       edx
+    push       ebx
+
+    mov        edx, esi           ; Restore RDX reset value
+
+    retf
+
+DoHlt:
+    cli
+    hlt
+    jmp        DoHlt
+
+BITS 64
+AsmRelocateApLoopEnd:
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
index 5d71995bf8..eb42bbff96 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
@@ -1,5 +1,5 @@
 ;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
 ; SPDX-License-Identifier: BSD-2-Clause-Patent
 ;
 ; Module Name:
@@ -278,174 +278,6 @@ CProcedureInvoke:
 
 RendezvousFunnelProcEnd:
 
-;-------------------------------------------------------------------------------------
-;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
-;-------------------------------------------------------------------------------------
-AsmRelocateApLoopStart:
-BITS 64
-    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
-    je         NoSevEs
-
-    ;
-    ; Perform some SEV-ES related setup before leaving 64-bit mode
-    ;
-    push       rcx
-    push       rdx
-
-    ;
-    ; Get the RDX reset value using CPUID
-    ;
-    mov        rax, 1
-    cpuid
-    mov        rsi, rax          ; Save off the reset value for RDX
-
-    ;
-    ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
-    ;   - Must be done while in 64-bit long mode so that writes to
-    ;     the GHCB memory will be unencrypted.
-    ;   - No NAE events can be generated once this is set otherwise
-    ;     the AP_RESET_HOLD SW_EXITCODE will be overwritten.
-    ;
-    mov        rcx, 0xc0010130
-    rdmsr                        ; Retrieve current GHCB address
-    shl        rdx, 32
-    or         rdx, rax
-
-    mov        rdi, rdx
-    xor        rax, rax
-    mov        rcx, 0x800
-    shr        rcx, 3
-    rep stosq                    ; Clear the GHCB
-
-    mov        rax, 0x80000004   ; VMGEXIT AP_RESET_HOLD
-    mov        [rdx + 0x390], rax
-    mov        rax, 114          ; Set SwExitCode valid bit
-    bts        [rdx + 0x3f0], rax
-    inc        rax               ; Set SwExitInfo1 valid bit
-    bts        [rdx + 0x3f0], rax
-    inc        rax               ; Set SwExitInfo2 valid bit
-    bts        [rdx + 0x3f0], rax
-
-    pop        rdx
-    pop        rcx
-
-NoSevEs:
-    cli                          ; Disable interrupt before switching to 32-bit mode
-    mov        rax, [rsp + 40]   ; CountTofinish
-    lock dec   dword [rax]       ; (*CountTofinish)--
-
-    mov        r10, [rsp + 48]   ; Pm16CodeSegment
-    mov        rax, [rsp + 56]   ; SevEsAPJumpTable
-    mov        rbx, [rsp + 64]   ; WakeupBuffer
-    mov        rsp, r9           ; TopOfApStack
-
-    push       rax               ; Save SevEsAPJumpTable
-    push       rbx               ; Save WakeupBuffer
-    push       r10               ; Save Pm16CodeSegment
-    push       rcx               ; Save MwaitSupport
-    push       rdx               ; Save ApTargetCState
-
-    lea        rax, [PmEntry]    ; rax <- The start address of transition code
-
-    push       r8
-    push       rax
-
-    ;
-    ; Clear R8 - R15, for reset, before going into 32-bit mode
-    ;
-    xor        r8, r8
-    xor        r9, r9
-    xor        r10, r10
-    xor        r11, r11
-    xor        r12, r12
-    xor        r13, r13
-    xor        r14, r14
-    xor        r15, r15
-
-    ;
-    ; Far return into 32-bit mode
-    ;
-    retfq
-
-BITS 32
-PmEntry:
-    mov        eax, cr0
-    btr        eax, 31           ; Clear CR0.PG
-    mov        cr0, eax          ; Disable paging and caches
-
-    mov        ecx, 0xc0000080
-    rdmsr
-    and        ah, ~ 1           ; Clear LME
-    wrmsr
-    mov        eax, cr4
-    and        al, ~ (1 << 5)    ; Clear PAE
-    mov        cr4, eax
-
-    pop        edx
-    add        esp, 4
-    pop        ecx,
-    add        esp, 4
-
-MwaitCheck:
-    cmp        cl, 1              ; Check mwait-monitor support
-    jnz        HltLoop
-    mov        ebx, edx           ; Save C-State to ebx
-MwaitLoop:
-    cli
-    mov        eax, esp           ; Set Monitor Address
-    xor        ecx, ecx           ; ecx = 0
-    xor        edx, edx           ; edx = 0
-    monitor
-    mov        eax, ebx           ; Mwait Cx, Target C-State per eax[7:4]
-    shl        eax, 4
-    mwait
-    jmp        MwaitLoop
-
-HltLoop:
-    pop        edx                ; PM16CodeSegment
-    add        esp, 4
-    pop        ebx                ; WakeupBuffer
-    add        esp, 4
-    pop        eax                ; SevEsAPJumpTable
-    add        esp, 4
-    cmp        eax, 0             ; Check for SEV-ES
-    je         DoHlt
-
-    cli
-    ;
-    ; SEV-ES is enabled, use VMGEXIT (GHCB information already
-    ; set by caller)
-    ;
-BITS 64
-    rep        vmmcall
-BITS 32
-
-    ;
-    ; Back from VMGEXIT AP_HLT_LOOP
-    ;   Push the FLAGS/CS/IP values to use
-    ;
-    push       word 0x0002        ; EFLAGS
-    xor        ecx, ecx
-    mov        cx, [eax + 2]      ; CS
-    push       cx
-    mov        cx, [eax]          ; IP
-    push       cx
-    push       word 0x0000        ; For alignment, will be discarded
-
-    push       edx
-    push       ebx
-
-    mov        edx, esi           ; Restore RDX reset value
-
-    retf
-
-DoHlt:
-    cli
-    hlt
-    jmp        DoHlt
-
-BITS 64
-AsmRelocateApLoopEnd:
 
 ;-------------------------------------------------------------------------------------
 ;  AsmGetAddressMap (&AddressMap);
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 1/6] UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-24  7:38   ` [edk2-devel] " Gerd Hoffmann
  2023-02-23 18:05 ` [Patch V3 3/6] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

Duplicate AsmRelocateApLoop (the SEV-ES variant, now renamed
AsmRelocateApLoopAmdSev in AmdSev.nasm) back into MpFuncs.nasm as
AsmRelocateApLoop, to be used by processors without SEV-ES enabled.

Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 UefiCpuPkg/Library/MpInitLib/DxeMpLib.c       |  68 ++++++++++++++++++++++++++++++++++++++++++++------------------------
 UefiCpuPkg/Library/MpInitLib/MpEqu.inc        |  22 ++++++++++++----------
 UefiCpuPkg/Library/MpInitLib/MpLib.h          |  31 +++++++++++++++++++++++++++++--
 UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm  |  33 +++++++++++++++++----------------
 UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 171 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 273 insertions(+), 52 deletions(-)

diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index a84e9e33ba..dd935a79d3 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -1,7 +1,7 @@
 /** @file
   MP initialize support functions for DXE phase.
 
-  Copyright (c) 2016 - 2020, Intel Corporation. All rights reserved.<BR>
+  Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
   SPDX-License-Identifier: BSD-2-Clause-Patent
 
 **/
@@ -378,32 +378,44 @@ RelocateApLoop (
   IN OUT VOID  *Buffer
   )
 {
-  CPU_MP_DATA           *CpuMpData;
-  BOOLEAN               MwaitSupport;
-  ASM_RELOCATE_AP_LOOP  AsmRelocateApLoopFunc;
-  UINTN                 ProcessorNumber;
-  UINTN                 StackStart;
+  CPU_MP_DATA                  *CpuMpData;
+  BOOLEAN                      MwaitSupport;
+  ASM_RELOCATE_AP_LOOP         AsmRelocateApLoopFunc;
+  ASM_RELOCATE_AP_LOOP_AMDSEV  AsmRelocateApLoopFuncAmdSev;
+  UINTN                        ProcessorNumber;
+  UINTN                        StackStart;
 
   MpInitLibWhoAmI (&ProcessorNumber);
   CpuMpData    = GetCpuMpData ();
   MwaitSupport = IsMwaitSupport ();
   if (CpuMpData->UseSevEsAPMethod) {
-    StackStart = CpuMpData->SevEsAPResetStackStart;
+    StackStart                  = CpuMpData->SevEsAPResetStackStart;
+    AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
+    AsmRelocateApLoopFuncAmdSev (
+      MwaitSupport,
+      CpuMpData->ApTargetCState,
+      CpuMpData->PmCodeSegment,
+      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+      (UINTN)&mNumberToFinish,
+      CpuMpData->Pm16CodeSegment,
+      CpuMpData->SevEsAPBuffer,
+      CpuMpData->WakeupBuffer
+      );
   } else {
-    StackStart = mReservedTopOfApStack;
+    StackStart            = mReservedTopOfApStack;
+    AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
+    AsmRelocateApLoopFunc (
+      MwaitSupport,
+      CpuMpData->ApTargetCState,
+      CpuMpData->PmCodeSegment,
+      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+      (UINTN)&mNumberToFinish,
+      CpuMpData->Pm16CodeSegment,
+      CpuMpData->SevEsAPBuffer,
+      CpuMpData->WakeupBuffer
+      );
   }
 
-  AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
-  AsmRelocateApLoopFunc (
-    MwaitSupport,
-    CpuMpData->ApTargetCState,
-    CpuMpData->PmCodeSegment,
-    StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
-    (UINTN)&mNumberToFinish,
-    CpuMpData->Pm16CodeSegment,
-    CpuMpData->SevEsAPBuffer,
-    CpuMpData->WakeupBuffer
-    );
   //
   // It should never reach here
   //
@@ -582,11 +594,19 @@ InitMpGlobalData (
 
   mReservedTopOfApStack = (UINTN)Address + ApSafeBufferSize;
   ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
-  CopyMem (
-    mReservedApLoopFunc,
-    CpuMpData->AddressMap.RelocateApLoopFuncAddress,
-    CpuMpData->AddressMap.RelocateApLoopFuncSize
-    );
+  if (CpuMpData->UseSevEsAPMethod) {
+    CopyMem (
+      mReservedApLoopFunc,
+      CpuMpData->AddressMap.RelocateApLoopFuncAddressAmdSev,
+      CpuMpData->AddressMap.RelocateApLoopFuncSizeAmdSev
+      );
+  } else {
+    CopyMem (
+      mReservedApLoopFunc,
+      CpuMpData->AddressMap.RelocateApLoopFuncAddress,
+      CpuMpData->AddressMap.RelocateApLoopFuncSize
+      );
+  }
 
   Status = gBS->CreateEvent (
                   EVT_TIMER | EVT_NOTIFY_SIGNAL,
diff --git a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
index ebadcc6fb3..6730f2f411 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
+++ b/UefiCpuPkg/Library/MpInitLib/MpEqu.inc
@@ -1,5 +1,5 @@
 ;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
 ; SPDX-License-Identifier: BSD-2-Clause-Patent
 ;
 ; Module Name:
@@ -21,15 +21,17 @@ CPU_SWITCH_STATE_LOADED       equ        2
 ; Equivalent NASM structure of MP_ASSEMBLY_ADDRESS_MAP
 ;
 struc MP_ASSEMBLY_ADDRESS_MAP
-  .RendezvousFunnelAddress       CTYPE_UINTN 1
-  .ModeEntryOffset               CTYPE_UINTN 1
-  .RendezvousFunnelSize          CTYPE_UINTN 1
-  .RelocateApLoopFuncAddress     CTYPE_UINTN 1
-  .RelocateApLoopFuncSize        CTYPE_UINTN 1
-  .ModeTransitionOffset          CTYPE_UINTN 1
-  .SwitchToRealNoNxOffset        CTYPE_UINTN 1
-  .SwitchToRealPM16ModeOffset    CTYPE_UINTN 1
-  .SwitchToRealPM16ModeSize      CTYPE_UINTN 1
+  .RendezvousFunnelAddress            CTYPE_UINTN 1
+  .ModeEntryOffset                    CTYPE_UINTN 1
+  .RendezvousFunnelSize               CTYPE_UINTN 1
+  .RelocateApLoopFuncAddress          CTYPE_UINTN 1
+  .RelocateApLoopFuncSize             CTYPE_UINTN 1
+  .RelocateApLoopFuncAddressAmdSev    CTYPE_UINTN 1
+  .RelocateApLoopFuncSizeAmdSev       CTYPE_UINTN 1
+  .ModeTransitionOffset               CTYPE_UINTN 1
+  .SwitchToRealNoNxOffset             CTYPE_UINTN 1
+  .SwitchToRealPM16ModeOffset         CTYPE_UINTN 1
+  .SwitchToRealPM16ModeSize           CTYPE_UINTN 1
 endstruc
 
 ;
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index f5086e497e..5011533302 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -1,7 +1,7 @@
 /** @file
   Common header file for MP Initialize Library.
 
-  Copyright (c) 2016 - 2022, Intel Corporation. All rights reserved.<BR>
+  Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
   Copyright (c) 2020, AMD Inc. All rights reserved.<BR>
 
   SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -179,6 +179,8 @@ typedef struct {
   UINTN    RendezvousFunnelSize;
   UINT8    *RelocateApLoopFuncAddress;
   UINTN    RelocateApLoopFuncSize;
+  UINT8    *RelocateApLoopFuncAddressAmdSev;
+  UINTN    RelocateApLoopFuncSizeAmdSev;
   UINTN    ModeTransitionOffset;
   UINTN    SwitchToRealNoNxOffset;
   UINTN    SwitchToRealPM16ModeOffset;
@@ -311,7 +313,7 @@ typedef struct {
 
 #define AP_SAFE_STACK_SIZE   128
 #define AP_RESET_STACK_SIZE  AP_SAFE_STACK_SIZE
-
+STATIC_ASSERT ((AP_SAFE_STACK_SIZE & (CPU_STACK_ALIGNMENT - 1)) == 0, "AP_SAFE_STACK_SIZE is not aligned with CPU_STACK_ALIGNMENT");
 #pragma pack(1)
 
 typedef struct {
@@ -373,6 +375,31 @@ typedef
   IN UINTN                   WakeupBuffer
   );
 
+/**
+  Assembly code to place the AP into a safe loop for AMD processors with
+  SEV-ES enabled. Place the AP in the targeted C-State if MONITOR is
+  supported, otherwise place the AP in a HLT loop. Place the AP in
+  protected mode if it is currently in long mode, so that an AP woken by
+  a hardware event does not access page tables that may no longer be
+  available while booting to the OS.
+  @param[in] MwaitSupport    TRUE indicates MONITOR is supported.
+                             FALSE indicates MONITOR is not supported.
+  @param[in] ApTargetCState  Target C-State value.
+  @param[in] PmCodeSegment   Protected mode code segment value.
+**/
+typedef
+  VOID
+(EFIAPI *ASM_RELOCATE_AP_LOOP_AMDSEV)(
+  IN BOOLEAN                 MwaitSupport,
+  IN UINTN                   ApTargetCState,
+  IN UINTN                   PmCodeSegment,
+  IN UINTN                   TopOfApStack,
+  IN UINTN                   NumberToFinish,
+  IN UINTN                   Pm16CodeSegment,
+  IN UINTN                   SevEsAPJumpTable,
+  IN UINTN                   WakeupBuffer
+  );
+
 /**
   Assembly code to get starting address and size of the rendezvous entry for APs.
   Information for fixing a jump instruction in the code is also returned.
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
index c1e8a045a4..6b48913306 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
@@ -347,12 +347,13 @@ PM16Mode:
 
 SwitchToRealProcEnd:
 ;-------------------------------------------------------------------------------------
-;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+;  AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
 ;-------------------------------------------------------------------------------------
-AsmRelocateApLoopStart:
+
+AsmRelocateApLoopStartAmdSev:
 BITS 64
     cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
-    je         NoSevEs
+    je         NoSevEsAmdSev
 
     ;
     ; Perform some SEV-ES related setup before leaving 64-bit mode
@@ -397,7 +398,7 @@ BITS 64
     pop        rdx
     pop        rcx
 
-NoSevEs:
+NoSevEsAmdSev:
     cli                          ; Disable interrupt before switching to 32-bit mode
     mov        rax, [rsp + 40]   ; CountTofinish
     lock dec   dword [rax]       ; (*CountTofinish)--
@@ -413,7 +414,7 @@ NoSevEs:
     push       rcx               ; Save MwaitSupport
     push       rdx               ; Save ApTargetCState
 
-    lea        rax, [PmEntry]    ; rax <- The start address of transition code
+    lea        rax, [PmEntryAmdSev]    ; rax <- The start address of transition code
 
     push       r8
     push       rax
@@ -433,10 +434,10 @@ NoSevEs:
     ;
     ; Far return into 32-bit mode
     ;
-    retfq
+o64 retf
 
 BITS 32
-PmEntry:
+PmEntryAmdSev:
     mov        eax, cr0
     btr        eax, 31           ; Clear CR0.PG
     mov        cr0, eax          ; Disable paging and caches
@@ -454,11 +455,11 @@ PmEntry:
     pop        ecx,
     add        esp, 4
 
-MwaitCheck:
+MwaitCheckAmdSev:
     cmp        cl, 1              ; Check mwait-monitor support
-    jnz        HltLoop
+    jnz        HltLoopAmdSev
     mov        ebx, edx           ; Save C-State to ebx
-MwaitLoop:
+MwaitLoopAmdSev:
     cli
     mov        eax, esp           ; Set Monitor Address
     xor        ecx, ecx           ; ecx = 0
@@ -467,9 +468,9 @@ MwaitLoop:
     mov        eax, ebx           ; Mwait Cx, Target C-State per eax[7:4]
     shl        eax, 4
     mwait
-    jmp        MwaitLoop
+    jmp        MwaitLoopAmdSev
 
-HltLoop:
+HltLoopAmdSev:
     pop        edx                ; PM16CodeSegment
     add        esp, 4
     pop        ebx                ; WakeupBuffer
@@ -477,7 +478,7 @@ HltLoop:
     pop        eax                ; SevEsAPJumpTable
     add        esp, 4
     cmp        eax, 0             ; Check for SEV-ES
-    je         DoHlt
+    je         DoHltAmdSev
 
     cli
     ;
@@ -507,10 +508,10 @@ BITS 32
 
     retf
 
-DoHlt:
+DoHltAmdSev:
     cli
     hlt
-    jmp        DoHlt
+    jmp        DoHltAmdSev
 
 BITS 64
-AsmRelocateApLoopEnd:
+AsmRelocateApLoopEndAmdSev:
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
index eb42bbff96..d36f8ba06d 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
@@ -278,6 +278,174 @@ CProcedureInvoke:
 
 RendezvousFunnelProcEnd:
 
+;-------------------------------------------------------------------------------------
+;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+;-------------------------------------------------------------------------------------
+AsmRelocateApLoopStart:
+BITS 64
+    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
+    je         NoSevEs
+
+    ;
+    ; Perform some SEV-ES related setup before leaving 64-bit mode
+    ;
+    push       rcx
+    push       rdx
+
+    ;
+    ; Get the RDX reset value using CPUID
+    ;
+    mov        rax, 1
+    cpuid
+    mov        rsi, rax          ; Save off the reset value for RDX
+
+    ;
+    ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
+    ;   - Must be done while in 64-bit long mode so that writes to
+    ;     the GHCB memory will be unencrypted.
+    ;   - No NAE events can be generated once this is set otherwise
+    ;     the AP_RESET_HOLD SW_EXITCODE will be overwritten.
+    ;
+    mov        rcx, 0xc0010130
+    rdmsr                        ; Retrieve current GHCB address
+    shl        rdx, 32
+    or         rdx, rax
+
+    mov        rdi, rdx
+    xor        rax, rax
+    mov        rcx, 0x800
+    shr        rcx, 3
+    rep stosq                    ; Clear the GHCB
+
+    mov        rax, 0x80000004   ; VMGEXIT AP_RESET_HOLD
+    mov        [rdx + 0x390], rax
+    mov        rax, 114          ; Set SwExitCode valid bit
+    bts        [rdx + 0x3f0], rax
+    inc        rax               ; Set SwExitInfo1 valid bit
+    bts        [rdx + 0x3f0], rax
+    inc        rax               ; Set SwExitInfo2 valid bit
+    bts        [rdx + 0x3f0], rax
+
+    pop        rdx
+    pop        rcx
+
+NoSevEs:
+    cli                          ; Disable interrupt before switching to 32-bit mode
+    mov        rax, [rsp + 40]   ; CountTofinish
+    lock dec   dword [rax]       ; (*CountTofinish)--
+
+    mov        r10, [rsp + 48]   ; Pm16CodeSegment
+    mov        rax, [rsp + 56]   ; SevEsAPJumpTable
+    mov        rbx, [rsp + 64]   ; WakeupBuffer
+    mov        rsp, r9           ; TopOfApStack
+
+    push       rax               ; Save SevEsAPJumpTable
+    push       rbx               ; Save WakeupBuffer
+    push       r10               ; Save Pm16CodeSegment
+    push       rcx               ; Save MwaitSupport
+    push       rdx               ; Save ApTargetCState
+
+    lea        rax, [PmEntry]    ; rax <- The start address of transition code
+
+    push       r8
+    push       rax
+
+    ;
+    ; Clear R8 - R15, for reset, before going into 32-bit mode
+    ;
+    xor        r8, r8
+    xor        r9, r9
+    xor        r10, r10
+    xor        r11, r11
+    xor        r12, r12
+    xor        r13, r13
+    xor        r14, r14
+    xor        r15, r15
+
+    ;
+    ; Far return into 32-bit mode
+    ;
+    retfq
+
+BITS 32
+PmEntry:
+    mov        eax, cr0
+    btr        eax, 31           ; Clear CR0.PG
+    mov        cr0, eax          ; Disable paging and caches
+
+    mov        ecx, 0xc0000080
+    rdmsr
+    and        ah, ~ 1           ; Clear LME
+    wrmsr
+    mov        eax, cr4
+    and        al, ~ (1 << 5)    ; Clear PAE
+    mov        cr4, eax
+
+    pop        edx
+    add        esp, 4
+    pop        ecx,
+    add        esp, 4
+
+MwaitCheck:
+    cmp        cl, 1              ; Check mwait-monitor support
+    jnz        HltLoop
+    mov        ebx, edx           ; Save C-State to ebx
+MwaitLoop:
+    cli
+    mov        eax, esp           ; Set Monitor Address
+    xor        ecx, ecx           ; ecx = 0
+    xor        edx, edx           ; edx = 0
+    monitor
+    mov        eax, ebx           ; Mwait Cx, Target C-State per eax[7:4]
+    shl        eax, 4
+    mwait
+    jmp        MwaitLoop
+
+HltLoop:
+    pop        edx                ; PM16CodeSegment
+    add        esp, 4
+    pop        ebx                ; WakeupBuffer
+    add        esp, 4
+    pop        eax                ; SevEsAPJumpTable
+    add        esp, 4
+    cmp        eax, 0             ; Check for SEV-ES
+    je         DoHlt
+
+    cli
+    ;
+    ; SEV-ES is enabled, use VMGEXIT (GHCB information already
+    ; set by caller)
+    ;
+BITS 64
+    rep        vmmcall
+BITS 32
+
+    ;
+    ; Back from VMGEXIT AP_HLT_LOOP
+    ;   Push the FLAGS/CS/IP values to use
+    ;
+    push       word 0x0002        ; EFLAGS
+    xor        ecx, ecx
+    mov        cx, [eax + 2]      ; CS
+    push       cx
+    mov        cx, [eax]          ; IP
+    push       cx
+    push       word 0x0000        ; For alignment, will be discarded
+
+    push       edx
+    push       ebx
+
+    mov        edx, esi           ; Restore RDX reset value
+
+    retf
+
+DoHlt:
+    cli
+    hlt
+    jmp        DoHlt
+
+BITS 64
+AsmRelocateApLoopEnd:
 
 ;-------------------------------------------------------------------------------------
 ;  AsmGetAddressMap (&AddressMap);
@@ -291,6 +459,9 @@ ASM_PFX(AsmGetAddressMap):
     lea        rax, [AsmRelocateApLoopStart]
     mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncAddress], rax
     mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncSize], AsmRelocateApLoopEnd - AsmRelocateApLoopStart
+    lea        rax, [AsmRelocateApLoopStartAmdSev]
+    mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncAddressAmdSev], rax
+    mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.RelocateApLoopFuncSizeAmdSev], AsmRelocateApLoopEndAmdSev - AsmRelocateApLoopStartAmdSev
     mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.ModeTransitionOffset], Flat32Start - RendezvousFunnelProcStart
     mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.SwitchToRealNoNxOffset], SwitchToRealProcStart - Flat32Start
     mov        qword [rcx + MP_ASSEMBLY_ADDRESS_MAP.SwitchToRealPM16ModeOffset], PM16Mode - RendezvousFunnelProcStart
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Patch V3 3/6] UefiCpuPkg: Contiguous memory allocation and code clean-up.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 1/6] UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 4/6] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

This patch refactors the code to eliminate duplication,
non-descriptive variable names, and similar issues.

The required memory is calculated taking into account the size
difference of the relocated AP loop function in the different cases.

The memory for the AP loop code and the AP stacks is allocated as one
contiguous region.
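
A condensed sketch of the resulting allocation in InitMpGlobalData()
(taken from the diff below, with declarations and the XP-attribute
handling omitted):

  //
  // +------------+  (mReservedTopOfApStack)
  // |  Stack * N |
  // +------------+
  // |  Padding   |
  // +------------+
  // |  Ap Loop   |
  // +------------+  (low address, mReservedApLoop.Data)
  //
  Address    = BASE_4GB - 1;
  StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
  FuncPages  = EFI_SIZE_TO_PAGES (ApLoopFuncSize);

  Status = gBS->AllocatePages (
                  AllocateMaxAddress,
                  EfiReservedMemoryType,
                  StackPages + FuncPages,
                  &Address
                  );
  ASSERT_EFI_ERROR (Status);

  mReservedTopOfApStack = (UINTN)Address
                          + EFI_PAGES_TO_SIZE (StackPages + FuncPages);
  mReservedApLoop.Data  = (VOID *)(UINTN)Address;
  CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);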

Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 UefiCpuPkg/Library/MpInitLib/DxeMpLib.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------
 UefiCpuPkg/Library/MpInitLib/MpLib.h    |   6 ++++++
 2 files changed, 79 insertions(+), 84 deletions(-)

diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index dd935a79d3..c095ee9f13 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -20,14 +20,15 @@
 
 #define  AP_SAFE_STACK_SIZE  128
 
-CPU_MP_DATA       *mCpuMpData                  = NULL;
-EFI_EVENT         mCheckAllApsEvent            = NULL;
-EFI_EVENT         mMpInitExitBootServicesEvent = NULL;
-EFI_EVENT         mLegacyBootEvent             = NULL;
-volatile BOOLEAN  mStopCheckAllApsStatus       = TRUE;
-VOID              *mReservedApLoopFunc         = NULL;
-UINTN             mReservedTopOfApStack;
-volatile UINT32   mNumberToFinish = 0;
+CPU_MP_DATA             *mCpuMpData                  = NULL;
+EFI_EVENT               mCheckAllApsEvent            = NULL;
+EFI_EVENT               mMpInitExitBootServicesEvent = NULL;
+EFI_EVENT               mLegacyBootEvent             = NULL;
+volatile BOOLEAN        mStopCheckAllApsStatus       = TRUE;
+UINTN                   mReservedTopOfApStack;
+volatile UINT32         mNumberToFinish = 0;
+RELOCATE_AP_LOOP_ENTRY  mReservedApLoop;
+
 
 //
 // Begin wakeup buffer allocation below 0x88000
@@ -380,8 +381,6 @@ RelocateApLoop (
 {
   CPU_MP_DATA                  *CpuMpData;
   BOOLEAN                      MwaitSupport;
-  ASM_RELOCATE_AP_LOOP         AsmRelocateApLoopFunc;
-  ASM_RELOCATE_AP_LOOP_AMDSEV  AsmRelocateApLoopFuncAmdSev;
   UINTN                        ProcessorNumber;
   UINTN                        StackStart;
 
@@ -389,31 +388,29 @@ RelocateApLoop (
   CpuMpData    = GetCpuMpData ();
   MwaitSupport = IsMwaitSupport ();
   if (CpuMpData->UseSevEsAPMethod) {
-    StackStart                  = CpuMpData->SevEsAPResetStackStart;
-    AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
-    AsmRelocateApLoopFuncAmdSev (
-      MwaitSupport,
-      CpuMpData->ApTargetCState,
-      CpuMpData->PmCodeSegment,
-      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
-      (UINTN)&mNumberToFinish,
-      CpuMpData->Pm16CodeSegment,
-      CpuMpData->SevEsAPBuffer,
-      CpuMpData->WakeupBuffer
-      );
+    StackStart = CpuMpData->SevEsAPResetStackStart;
+    mReservedApLoop.AmdSevEntry (
+                      MwaitSupport,
+                      CpuMpData->ApTargetCState,
+                      CpuMpData->PmCodeSegment,
+                      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+                      (UINTN)&mNumberToFinish,
+                      CpuMpData->Pm16CodeSegment,
+                      CpuMpData->SevEsAPBuffer,
+                      CpuMpData->WakeupBuffer
+                      );
   } else {
-    StackStart            = mReservedTopOfApStack;
-    AsmRelocateApLoopFunc = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;
-    AsmRelocateApLoopFunc (
-      MwaitSupport,
-      CpuMpData->ApTargetCState,
-      CpuMpData->PmCodeSegment,
-      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
-      (UINTN)&mNumberToFinish,
-      CpuMpData->Pm16CodeSegment,
-      CpuMpData->SevEsAPBuffer,
-      CpuMpData->WakeupBuffer
-      );
+    StackStart = mReservedTopOfApStack;
+    mReservedApLoop.GenericEntry (
+                      MwaitSupport,
+                      CpuMpData->ApTargetCState,
+                      CpuMpData->PmCodeSegment,
+                      StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
+                      (UINTN)&mNumberToFinish,
+                      CpuMpData->Pm16CodeSegment,
+                      CpuMpData->SevEsAPBuffer,
+                      CpuMpData->WakeupBuffer
+                      );
   }
 
   //
@@ -477,12 +474,16 @@ InitMpGlobalData (
   )
 {
   EFI_STATUS                       Status;
-  EFI_PHYSICAL_ADDRESS             Address;
-  UINTN                            ApSafeBufferSize;
+  MP_ASSEMBLY_ADDRESS_MAP          *AddressMap;
+  UINTN                            StackPages;
+  UINTN                            FuncPages;
   UINTN                            Index;
   EFI_GCD_MEMORY_SPACE_DESCRIPTOR  MemDesc;
   UINTN                            StackBase;
   CPU_INFO_IN_HOB                  *CpuInfoInHob;
+  EFI_PHYSICAL_ADDRESS             Address;
+  UINT8                            *ApLoopFunc;
+  UINTN                            ApLoopFuncSize;
 
   SaveCpuMpData (CpuMpData);
 
@@ -537,6 +538,15 @@ InitMpGlobalData (
     }
   }
 
+  AddressMap = &CpuMpData->AddressMap;
+  if (CpuMpData->UseSevEsAPMethod) {
+    ApLoopFunc     = AddressMap->RelocateApLoopFuncAddressAmdSev;
+    ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
+  } else {
+    ApLoopFunc     = AddressMap->RelocateApLoopFuncAddress;
+    ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
+  }
+
   //
   // Avoid APs access invalid buffer data which allocated by BootServices,
   // so we will allocate reserved data for AP loop code. We also need to
@@ -545,26 +555,31 @@ InitMpGlobalData (
   // Allocating it in advance since memory services are not available in
   // Exit Boot Services callback function.
   //
-  ApSafeBufferSize = EFI_PAGES_TO_SIZE (
-                       EFI_SIZE_TO_PAGES (
-                         CpuMpData->AddressMap.RelocateApLoopFuncSize
-                         )
-                       );
-  Address = BASE_4GB - 1;
-  Status  = gBS->AllocatePages (
-                   AllocateMaxAddress,
-                   EfiReservedMemoryType,
-                   EFI_SIZE_TO_PAGES (ApSafeBufferSize),
-                   &Address
-                   );
-  ASSERT_EFI_ERROR (Status);
+  // +------------+ (TopOfApStack)
+  // |  Stack * N |
+  // +------------+
+  // |  Padding   |
+  // +------------+
+  // |  Ap Loop   |
+  // +------------+ (low address )
+  //
 
-  mReservedApLoopFunc = (VOID *)(UINTN)Address;
-  ASSERT (mReservedApLoopFunc != NULL);
+  Address    = BASE_4GB - 1;
+  StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
+  FuncPages  = EFI_SIZE_TO_PAGES (ApLoopFuncSize);
 
-  //
-  // Make sure that the buffer memory is executable if NX protection is enabled
-  // for EfiReservedMemoryType.
+  Status = gBS->AllocatePages (
+                  AllocateMaxAddress,
+                  EfiReservedMemoryType,
+                  StackPages+FuncPages,
+                  &Address
+                  );
+  ASSERT_EFI_ERROR (Status);
+  // If a memory range has the EFI_MEMORY_XP attribute, OS loader
+  // may set the IA32_EFER.NXE (No-eXecution Enable) bit in IA32_EFER MSR,
+  // then set the XD (eXecution Disable) bit in the CPU PAE page table.
+  // Here is to make sure that the memory is executable if NX protection is
+  // enabled for EfiReservedMemoryType.
   //
   // TODO: Check EFI_MEMORY_XP bit set or not once it's available in DXE GCD
   //       service.
@@ -573,41 +588,15 @@ InitMpGlobalData (
   if (!EFI_ERROR (Status)) {
     gDS->SetMemorySpaceAttributes (
            Address,
-           ApSafeBufferSize,
+           ALIGN_VALUE (ApLoopFuncSize, EFI_PAGE_SIZE),
            MemDesc.Attributes & (~EFI_MEMORY_XP)
            );
   }
 
-  ApSafeBufferSize = EFI_PAGES_TO_SIZE (
-                       EFI_SIZE_TO_PAGES (
-                         CpuMpData->CpuCount * AP_SAFE_STACK_SIZE
-                         )
-                       );
-  Address = BASE_4GB - 1;
-  Status  = gBS->AllocatePages (
-                   AllocateMaxAddress,
-                   EfiReservedMemoryType,
-                   EFI_SIZE_TO_PAGES (ApSafeBufferSize),
-                   &Address
-                   );
-  ASSERT_EFI_ERROR (Status);
-
-  mReservedTopOfApStack = (UINTN)Address + ApSafeBufferSize;
+  mReservedTopOfApStack = (UINTN)Address + EFI_PAGES_TO_SIZE (StackPages+FuncPages);
   ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
-  if (CpuMpData->UseSevEsAPMethod) {
-    CopyMem (
-      mReservedApLoopFunc,
-      CpuMpData->AddressMap.RelocateApLoopFuncAddressAmdSev,
-      CpuMpData->AddressMap.RelocateApLoopFuncSizeAmdSev
-      );
-  } else {
-    CopyMem (
-      mReservedApLoopFunc,
-      CpuMpData->AddressMap.RelocateApLoopFuncAddress,
-      CpuMpData->AddressMap.RelocateApLoopFuncSize
-      );
-  }
-
+  mReservedApLoop.Data = (VOID *)(UINTN)Address;
+  CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
   Status = gBS->CreateEvent (
                   EVT_TIMER | EVT_NOTIFY_SIGNAL,
                   TPL_NOTIFY,
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index 5011533302..f0daa2c5af 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -400,6 +400,12 @@ typedef
   IN UINTN                   WakeupBuffer
   );
 
+typedef union {
+  VOID                           *Data;
+  ASM_RELOCATE_AP_LOOP_AMDSEV    AmdSevEntry;  // 64-bit AMD Sev processors
+  ASM_RELOCATE_AP_LOOP           GenericEntry; // Intel processors (32-bit or 64-bit), 32-bit AMD processors, or AMD non-Sev processors
+} RELOCATE_AP_LOOP_ENTRY;
+
 /**
   Assembly code to get starting address and size of the rendezvous entry for APs.
   Information for fixing a jump instruction in the code is also returned.
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Patch V3 4/6] OvmfPkg: Add CpuPageTableLib required by MpInitLib.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
                   ` (2 preceding siblings ...)
  2023-02-23 18:05 ` [Patch V3 3/6] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 5/6] UefiPayloadPkg: " Yuanhao Xie
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Ard Biesheuvel, Jiewen Yao, Jordan Justen, Gerd Hoffmann

Add CpuPageTableLib required by MpInitLib in OvmfPkg.

Cc: Ard Biesheuvel <ardb+tianocore@kernel.org>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 OvmfPkg/AmdSev/AmdSevX64.dsc     | 3 ++-
 OvmfPkg/CloudHv/CloudHvX64.dsc   | 3 ++-
 OvmfPkg/IntelTdx/IntelTdxX64.dsc | 4 +++-
 OvmfPkg/Microvm/MicrovmX64.dsc   | 3 ++-
 OvmfPkg/OvmfPkgIa32X64.dsc       | 3 ++-
 OvmfPkg/OvmfPkgX64.dsc           | 4 +++-
 OvmfPkg/OvmfXen.dsc              | 3 ++-
 7 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/OvmfPkg/AmdSev/AmdSevX64.dsc b/OvmfPkg/AmdSev/AmdSevX64.dsc
index 1cebd6b4bc..f0c4dc2310 100644
--- a/OvmfPkg/AmdSev/AmdSevX64.dsc
+++ b/OvmfPkg/AmdSev/AmdSevX64.dsc
@@ -3,7 +3,7 @@
 #  virtual machine remote attestation and secret injection
 #
 #  Copyright (c) 2020 James Bottomley, IBM Corporation.
-#  Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #
 #  SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -353,6 +353,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/CloudHv/CloudHvX64.dsc b/OvmfPkg/CloudHv/CloudHvX64.dsc
index fda7d2b9e5..327f53ff3c 100644
--- a/OvmfPkg/CloudHv/CloudHvX64.dsc
+++ b/OvmfPkg/CloudHv/CloudHvX64.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) Microsoft Corporation.
 #
@@ -404,6 +404,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/IntelTdx/IntelTdxX64.dsc b/OvmfPkg/IntelTdx/IntelTdxX64.dsc
index 95b9594ddc..d093660283 100644
--- a/OvmfPkg/IntelTdx/IntelTdxX64.dsc
+++ b/OvmfPkg/IntelTdx/IntelTdxX64.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) Microsoft Corporation.
 #
@@ -320,6 +320,7 @@
   CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
   LockBoxLib|OvmfPkg/Library/LockBoxLib/LockBoxDxeLib.inf
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
@@ -590,6 +591,7 @@
       # Directly use DxeMpInitLib. It depends on DxeMpInitLibMpDepLib which
       # checks the Protocol of gEfiMpInitLibMpDepProtocolGuid.
       #
+      CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
       MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
       NULL|OvmfPkg/Library/MpInitLibDepLib/DxeMpInitLibMpDepLib.inf
   }
diff --git a/OvmfPkg/Microvm/MicrovmX64.dsc b/OvmfPkg/Microvm/MicrovmX64.dsc
index 0d65d21e65..76fc548650 100644
--- a/OvmfPkg/Microvm/MicrovmX64.dsc
+++ b/OvmfPkg/Microvm/MicrovmX64.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) Microsoft Corporation.
 #
@@ -403,6 +403,7 @@
   PciLib|MdePkg/Library/BasePciLibPciExpress/BasePciLibPciExpress.inf
   PciPcdProducerLib|OvmfPkg/Fdt/FdtPciPcdProducerLib/FdtPciPcdProducerLib.inf
   PciExpressLib|OvmfPkg/Library/BaseCachingPciExpressLib/BaseCachingPciExpressLib.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 6b539814bd..51db692b10 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) Microsoft Corporation.
 #
@@ -414,6 +414,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index e3c64456df..04d50704c7 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) Microsoft Corporation.
 #
@@ -435,6 +435,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
@@ -826,6 +827,7 @@
       # Directly use DxeMpInitLib. It depends on DxeMpInitLibMpDepLib which
       # checks the Protocol of gEfiMpInitLibMpDepProtocolGuid.
       #
+      CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
       MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
       NULL|OvmfPkg/Library/MpInitLibDepLib/DxeMpInitLibMpDepLib.inf
   }
diff --git a/OvmfPkg/OvmfXen.dsc b/OvmfPkg/OvmfXen.dsc
index c328987e84..f1f02d969f 100644
--- a/OvmfPkg/OvmfXen.dsc
+++ b/OvmfPkg/OvmfXen.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  EFI/Framework Open Virtual Machine Firmware (OVMF) platform
 #
-#  Copyright (c) 2006 - 2021, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
 #  (C) Copyright 2016 Hewlett Packard Enterprise Development LP<BR>
 #  Copyright (c) 2019, Citrix Systems, Inc.
 #  Copyright (c) Microsoft Corporation.
@@ -339,6 +339,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   PciLib|OvmfPkg/Library/DxePciLibI440FxQ35/DxePciLibI440FxQ35.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   NestedInterruptTplLib|OvmfPkg/Library/NestedInterruptTplLib/NestedInterruptTplLib.inf
   QemuFwCfgS3Lib|OvmfPkg/Library/QemuFwCfgS3Lib/DxeQemuFwCfgS3LibFwCfg.inf
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Patch V3 5/6] UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
                   ` (3 preceding siblings ...)
  2023-02-23 18:05 ` [Patch V3 4/6] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-23 18:05 ` [Patch V3 6/6] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
  2023-02-24  0:26 ` [edk2-devel] [Patch V3 0/6] " Ni, Ray
  6 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

Add CpuPageTableLib required by MpInitLib in UefiPayloadPkg.

Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 UefiPayloadPkg/UefiPayloadPkg.dsc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/UefiPayloadPkg/UefiPayloadPkg.dsc b/UefiPayloadPkg/UefiPayloadPkg.dsc
index 2dbd875f37..8cbbbe9a05 100644
--- a/UefiPayloadPkg/UefiPayloadPkg.dsc
+++ b/UefiPayloadPkg/UefiPayloadPkg.dsc
@@ -3,7 +3,7 @@
 #
 # Provides drivers and definitions to create uefi payload for bootloaders.
 #
-# Copyright (c) 2014 - 2022, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2014 - 2023, Intel Corporation. All rights reserved.<BR>
 # Copyright (c) Microsoft Corporation.
 # SPDX-License-Identifier: BSD-2-Clause-Patent
 #
@@ -340,6 +340,7 @@
   DebugAgentLib|SourceLevelDebugPkg/Library/DebugAgent/DxeDebugAgentLib.inf
 !endif
   CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
 !if $(PERFORMANCE_MEASUREMENT_ENABLE)
   PerformanceLib|MdeModulePkg/Library/DxePerformanceLib/DxePerformanceLib.inf
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Patch V3 6/6] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
                   ` (4 preceding siblings ...)
  2023-02-23 18:05 ` [Patch V3 5/6] UefiPayloadPkg: " Yuanhao Xie
@ 2023-02-23 18:05 ` Yuanhao Xie
  2023-02-24  0:26 ` [edk2-devel] [Patch V3 0/6] " Ni, Ray
  6 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-23 18:05 UTC (permalink / raw)
  To: devel; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

Keep the 4GB limitation on memory allocation only for the case where
APs still need to be transferred to 32-bit mode before handoff to the
OS.

Remove the unused arguments of AsmRelocateApLoopStart and update the
stack offsets accordingly.

Create a page table for the allocated reserved memory.
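
For illustration, with the trimmed prototype the non-SEV-ES path now
invokes the relocated AP loop along these lines (sketch only, using
the names introduced in this patch):

  mReservedApLoop.GenericEntry (
    MwaitSupport,
    CpuMpData->ApTargetCState,
    mReservedTopOfApStack - ProcessorNumber * AP_SAFE_STACK_SIZE,
    (UINTN)&mNumberToFinish,
    mApPageTable    // CR3 of the 1:1 page table covering the reserved buffer
    );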

Cc: Guo Dong <guo.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Sean Rhodes <sean@starlabs.systems>
Cc: James Lu <james.lu@intel.com>
Cc: Gua Guo <gua.guo@intel.com>
Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
Tested-by: Yuanhao Xie <yuanhao.xie@intel.com>
---
 UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf       |   6 +++++-
 UefiCpuPkg/Library/MpInitLib/DxeMpLib.c             |  40 ++++++++++++++++++++++++++++++----------
 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c |  23 +++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm      |  11 ++++-------
 UefiCpuPkg/Library/MpInitLib/MpLib.h                |  17 +++++++++++++----
 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c  |  82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm       | 173 ++++++++++++++++++++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------
 UefiCpuPkg/UefiCpuPkg.dsc                           |   3 ++-
 8 files changed, 187 insertions(+), 168 deletions(-)

diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
index cd07de3a3c..4285dd06b4 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
@@ -1,7 +1,7 @@
 ## @file
 #  MP Initialize Library instance for DXE driver.
 #
-#  Copyright (c) 2016 - 2021, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
 #  SPDX-License-Identifier: BSD-2-Clause-Patent
 #
 ##
@@ -24,10 +24,12 @@
 [Sources.IA32]
   Ia32/AmdSev.c
   Ia32/MpFuncs.nasm
+  Ia32/CreatePageTable.c
 
 [Sources.X64]
   X64/AmdSev.c
   X64/MpFuncs.nasm
+  X64/CreatePageTable.c
 
 [Sources.common]
   AmdSev.c
@@ -56,6 +58,8 @@
   PcdLib
   CcExitLib
   MicrocodeLib
+[LibraryClasses.X64]
+  CpuPageTableLib
 
 [Protocols]
   gEfiTimerArchProtocolGuid                     ## SOMETIMES_CONSUMES
diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
index c095ee9f13..fef91ecc3b 100644
--- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
+++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
@@ -28,7 +28,7 @@ volatile BOOLEAN        mStopCheckAllApsStatus       = TRUE;
 UINTN                   mReservedTopOfApStack;
 volatile UINT32         mNumberToFinish = 0;
 RELOCATE_AP_LOOP_ENTRY  mReservedApLoop;
-
+UINTN                   mApPageTable;
 
 //
 // Begin wakeup buffer allocation below 0x88000
@@ -379,15 +379,18 @@ RelocateApLoop (
   IN OUT VOID  *Buffer
   )
 {
-  CPU_MP_DATA                  *CpuMpData;
-  BOOLEAN                      MwaitSupport;
-  UINTN                        ProcessorNumber;
-  UINTN                        StackStart;
+  CPU_MP_DATA  *CpuMpData;
+  BOOLEAN      MwaitSupport;
+  UINTN        ProcessorNumber;
+  UINTN        StackStart;
 
   MpInitLibWhoAmI (&ProcessorNumber);
   CpuMpData    = GetCpuMpData ();
   MwaitSupport = IsMwaitSupport ();
   if (CpuMpData->UseSevEsAPMethod) {
+    //
+    // 64-bit AMD processors with SEV-ES
+    //
     StackStart = CpuMpData->SevEsAPResetStackStart;
     mReservedApLoop.AmdSevEntry (
                       MwaitSupport,
@@ -400,16 +403,16 @@ RelocateApLoop (
                       CpuMpData->WakeupBuffer
                       );
   } else {
+    //
+    // Intel processors (32-bit or 64-bit), 32-bit AMD processors, or 64-bit AMD processors without SEV-ES
+    //
     StackStart = mReservedTopOfApStack;
     mReservedApLoop.GenericEntry (
                       MwaitSupport,
                       CpuMpData->ApTargetCState,
-                      CpuMpData->PmCodeSegment,
                       StackStart - ProcessorNumber * AP_SAFE_STACK_SIZE,
                       (UINTN)&mNumberToFinish,
-                      CpuMpData->Pm16CodeSegment,
-                      CpuMpData->SevEsAPBuffer,
-                      CpuMpData->WakeupBuffer
+                      mApPageTable
                       );
   }
 
@@ -540,9 +543,17 @@ InitMpGlobalData (
 
   AddressMap = &CpuMpData->AddressMap;
   if (CpuMpData->UseSevEsAPMethod) {
+    //
+    // 64-bit AMD processors with SEV-ES
+    //
+    Address        = BASE_4GB - 1;
     ApLoopFunc     = AddressMap->RelocateApLoopFuncAddressAmdSev;
     ApLoopFuncSize = AddressMap->RelocateApLoopFuncSizeAmdSev;
   } else {
+    //
+    // Intel processors (32-bit or 64-bit), 32-bit AMD processors, or 64-bit AMD processors without SEV-ES
+    //
+    Address        = MAX_ADDRESS;
     ApLoopFunc     = AddressMap->RelocateApLoopFuncAddress;
     ApLoopFuncSize = AddressMap->RelocateApLoopFuncSize;
   }
@@ -564,7 +575,6 @@ InitMpGlobalData (
   // +------------+ (low address )
   //
 
-  Address    = BASE_4GB - 1;
   StackPages = EFI_SIZE_TO_PAGES (CpuMpData->CpuCount * AP_SAFE_STACK_SIZE);
   FuncPages  = EFI_SIZE_TO_PAGES (ApLoopFuncSize);
 
@@ -597,6 +607,16 @@ InitMpGlobalData (
   ASSERT ((mReservedTopOfApStack & (UINTN)(CPU_STACK_ALIGNMENT - 1)) == 0);
   mReservedApLoop.Data = (VOID *)(UINTN)Address;
   CopyMem (mReservedApLoop.Data, ApLoopFunc, ApLoopFuncSize);
+  if (!CpuMpData->UseSevEsAPMethod) {
+    //
+    // processors without SEV-ES
+    //
+    mApPageTable = CreatePageTable (
+                     (UINTN)Address,
+                     EFI_PAGES_TO_SIZE (StackPages+FuncPages)
+                     );
+  }
+
   Status = gBS->CreateEvent (
                   EVT_TIMER | EVT_NOTIFY_SIGNAL,
                   TPL_NOTIFY,
diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
new file mode 100644
index 0000000000..bec9b247c0
--- /dev/null
+++ b/UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
@@ -0,0 +1,23 @@
+/** @file
+  Function to create page table.
+  Only create page table for x64, and leave CreatePageTable empty for Ia32.
+  Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+
+#include <Base.h>
+
+/**
+  Only create page table for x64, and leave CreatePageTable empty for Ia32.
+  @param[in]      Address  The start of the linear address range.
+  @param[in]      Length   The length of the linear address range.
+  @return 0, since no page table is created for Ia32.
+**/
+UINTN
+CreatePageTable (
+  IN UINTN  Address,
+  IN UINTN  Length
+  )
+{
+  return 0;
+}
diff --git a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
index bfcdbd31c1..c65a825a23 100644
--- a/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm
@@ -1,5 +1,5 @@
 ;------------------------------------------------------------------------------ ;
-; Copyright (c) 2015 - 2022, Intel Corporation. All rights reserved.<BR>
+; Copyright (c) 2015 - 2023, Intel Corporation. All rights reserved.<BR>
 ; SPDX-License-Identifier: BSD-2-Clause-Patent
 ;
 ; Module Name:
@@ -219,20 +219,17 @@ SwitchToRealProcEnd:
 RendezvousFunnelProcEnd:
 
 ;-------------------------------------------------------------------------------------
-;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
-;
-;  The last three parameters (Pm16CodeSegment, SevEsAPJumpTable and WakeupBuffer) are
-;  specific to SEV-ES support and are not applicable on IA32.
+;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
 ;-------------------------------------------------------------------------------------
 AsmRelocateApLoopStart:
     mov        eax, esp
-    mov        esp, [eax + 16]     ; TopOfApStack
+    mov        esp, [eax + 12]     ; TopOfApStack
     push       dword [eax]         ; push return address for stack trace
     push       ebp
     mov        ebp, esp
     mov        ebx, [eax + 8]      ; ApTargetCState
     mov        ecx, [eax + 4]      ; MwaitSupport
-    mov        eax, [eax + 20]     ; CountTofinish
+    mov        eax, [eax + 16]     ; CountTofinish
     lock dec   dword [eax]         ; (*CountTofinish)--
     cmp        cl,  1              ; Check mwait-monitor support
     jnz        HltLoop
diff --git a/UefiCpuPkg/Library/MpInitLib/MpLib.h b/UefiCpuPkg/Library/MpInitLib/MpLib.h
index f0daa2c5af..ba7ec5bba3 100644
--- a/UefiCpuPkg/Library/MpInitLib/MpLib.h
+++ b/UefiCpuPkg/Library/MpInitLib/MpLib.h
@@ -367,12 +367,9 @@ typedef
 (EFIAPI *ASM_RELOCATE_AP_LOOP)(
   IN BOOLEAN                 MwaitSupport,
   IN UINTN                   ApTargetCState,
-  IN UINTN                   PmCodeSegment,
   IN UINTN                   TopOfApStack,
   IN UINTN                   NumberToFinish,
-  IN UINTN                   Pm16CodeSegment,
-  IN UINTN                   SevEsAPJumpTable,
-  IN UINTN                   WakeupBuffer
+  IN UINTN                   Cr3
   );
 
 /**
@@ -497,6 +494,18 @@ GetSevEsAPMemory (
   VOID
   );
 
+/**
+  Create a 1:1 mapping page table in reserved memory to map the specified address range.
+  @param[in]      Address  The start of the linear address range.
+  @param[in]      Length   The length of the linear address range.
+  @return The address of the page table created.
+**/
+UINTN
+CreatePageTable (
+  IN UINTN  Address,
+  IN UINTN  Length
+  );
+
 /**
   This function will be called by BSP to wakeup AP.
 
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
new file mode 100644
index 0000000000..7cf91ed9c4
--- /dev/null
+++ b/UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
@@ -0,0 +1,82 @@
+/** @file
+  Function to create page table.
+  Only create page table for x64, and leave CreatePageTable empty for Ia32.
+  Copyright (c) 2023, Intel Corporation. All rights reserved.<BR>
+  SPDX-License-Identifier: BSD-2-Clause-Patent
+**/
+#include <Library/CpuPageTableLib.h>
+#include <Library/MemoryAllocationLib.h>
+#include <Base.h>
+#include <Library/BaseMemoryLib.h>
+#include <Library/DebugLib.h>
+#include <Library/BaseLib.h>
+
+/**
+  Create a 1:1 mapping page table in reserved memory to map the specified address range.
+  @param[in]      Address  The start of the linear address range.
+  @param[in]      Length   The length of the linear address range.
+  @return The address of the page table created.
+**/
+UINTN
+CreatePageTable (
+  IN UINTN  Address,
+  IN UINTN  Length
+  )
+{
+  EFI_STATUS   Status;
+  VOID         *PageTableBuffer;
+  UINTN        PageTableBufferSize;
+  UINTN        PageTable;
+  PAGING_MODE  PagingMode;
+  IA32_CR4     Cr4;
+
+  IA32_MAP_ATTRIBUTE  MapAttribute;
+  IA32_MAP_ATTRIBUTE  MapMask;
+
+  MapAttribute.Uint64         = Address;
+  MapAttribute.Bits.Present   = 1;
+  MapAttribute.Bits.ReadWrite = 1;
+
+  MapMask.Bits.PageTableBaseAddress = 1;
+  MapMask.Bits.Present              = 1;
+  MapMask.Bits.ReadWrite            = 1;
+
+  PageTable           = 0;
+  PageTableBufferSize = 0;
+
+  Cr4.UintN = AsmReadCr4 ();
+
+  if (Cr4.Bits.LA57 == 1) {
+    PagingMode = Paging5Level;
+  } else {
+    PagingMode = Paging4Level;
+  }
+
+  Status = PageTableMap (
+             &PageTable,
+             PagingMode,
+             NULL,
+             &PageTableBufferSize,
+             Address,
+             Length,
+             &MapAttribute,
+             &MapMask
+             );
+  ASSERT (Status == EFI_BUFFER_TOO_SMALL);
+  DEBUG ((DEBUG_INFO, "AP Page Table Buffer Size = %x\n", PageTableBufferSize));
+
+  PageTableBuffer = AllocateReservedPages (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+  ASSERT (PageTableBuffer != NULL);
+  Status = PageTableMap (
+             &PageTable,
+             PagingMode,
+             PageTableBuffer,
+             &PageTableBufferSize,
+             Address,
+             Length,
+             &MapAttribute,
+             &MapMask
+             );
+  ASSERT_EFI_ERROR (Status);
+  return PageTable;
+}
diff --git a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
index d36f8ba06d..2bce04d99c 100644
--- a/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
+++ b/UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm
@@ -279,172 +279,55 @@ CProcedureInvoke:
 RendezvousFunnelProcEnd:
 
 ;-------------------------------------------------------------------------------------
-;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
+;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, TopOfApStack, CountTofinish, Cr3);
+;  This function is called during the finalization of MP initialization, before booting
+;  to the OS, and aims to put the APs either in an MWAIT or HLT loop.
 ;-------------------------------------------------------------------------------------
-AsmRelocateApLoopStart:
-BITS 64
-    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
-    je         NoSevEs
-
-    ;
-    ; Perform some SEV-ES related setup before leaving 64-bit mode
-    ;
-    push       rcx
-    push       rdx
-
-    ;
-    ; Get the RDX reset value using CPUID
-    ;
-    mov        rax, 1
-    cpuid
-    mov        rsi, rax          ; Save off the reset value for RDX
-
-    ;
-    ; Prepare the GHCB for the AP_HLT_LOOP VMGEXIT call
-    ;   - Must be done while in 64-bit long mode so that writes to
-    ;     the GHCB memory will be unencrypted.
-    ;   - No NAE events can be generated once this is set otherwise
-    ;     the AP_RESET_HOLD SW_EXITCODE will be overwritten.
-    ;
-    mov        rcx, 0xc0010130
-    rdmsr                        ; Retrieve current GHCB address
-    shl        rdx, 32
-    or         rdx, rax
-
-    mov        rdi, rdx
-    xor        rax, rax
-    mov        rcx, 0x800
-    shr        rcx, 3
-    rep stosq                    ; Clear the GHCB
-
-    mov        rax, 0x80000004   ; VMGEXIT AP_RESET_HOLD
-    mov        [rdx + 0x390], rax
-    mov        rax, 114          ; Set SwExitCode valid bit
-    bts        [rdx + 0x3f0], rax
-    inc        rax               ; Set SwExitInfo1 valid bit
-    bts        [rdx + 0x3f0], rax
-    inc        rax               ; Set SwExitInfo2 valid bit
-    bts        [rdx + 0x3f0], rax
+; +----------------+
+; | Cr3            |  rsp+40
+; +----------------+
+; | CountTofinish  |  r9
+; +----------------+
+; | TopOfApStack   |  r8
+; +----------------+
+; | ApTargetCState |  rdx
+; +----------------+
+; | MwaitSupport   |  rcx
+; +----------------+
+; | the return     |
+; +----------------+ low address
 
-    pop        rdx
-    pop        rcx
-
-NoSevEs:
-    cli                          ; Disable interrupt before switching to 32-bit mode
-    mov        rax, [rsp + 40]   ; CountTofinish
+AsmRelocateApLoopStart:
+    mov        rax, r9           ; CountTofinish
     lock dec   dword [rax]       ; (*CountTofinish)--
 
-    mov        r10, [rsp + 48]   ; Pm16CodeSegment
-    mov        rax, [rsp + 56]   ; SevEsAPJumpTable
-    mov        rbx, [rsp + 64]   ; WakeupBuffer
-    mov        rsp, r9           ; TopOfApStack
-
-    push       rax               ; Save SevEsAPJumpTable
-    push       rbx               ; Save WakeupBuffer
-    push       r10               ; Save Pm16CodeSegment
-    push       rcx               ; Save MwaitSupport
-    push       rdx               ; Save ApTargetCState
-
-    lea        rax, [PmEntry]    ; rax <- The start address of transition code
-
-    push       r8
-    push       rax
-
-    ;
-    ; Clear R8 - R15, for reset, before going into 32-bit mode
-    ;
-    xor        r8, r8
-    xor        r9, r9
-    xor        r10, r10
-    xor        r11, r11
-    xor        r12, r12
-    xor        r13, r13
-    xor        r14, r14
-    xor        r15, r15
-
-    ;
-    ; Far return into 32-bit mode
-    ;
-    retfq
-
-BITS 32
-PmEntry:
-    mov        eax, cr0
-    btr        eax, 31           ; Clear CR0.PG
-    mov        cr0, eax          ; Disable paging and caches
-
-    mov        ecx, 0xc0000080
-    rdmsr
-    and        ah, ~ 1           ; Clear LME
-    wrmsr
-    mov        eax, cr4
-    and        al, ~ (1 << 5)    ; Clear PAE
-    mov        cr4, eax
-
-    pop        edx
-    add        esp, 4
-    pop        ecx,
-    add        esp, 4
+    mov        rax, [rsp + 40]    ; Cr3
+    ; Do not push on old stack, since old stack is not mapped
+    ; in the page table pointed by cr3
+    mov        cr3, rax
+    mov        rsp, r8            ; TopOfApStack
 
 MwaitCheck:
     cmp        cl, 1              ; Check mwait-monitor support
     jnz        HltLoop
-    mov        ebx, edx           ; Save C-State to ebx
+    mov        rbx, rdx           ; Save C-State to rbx
+
 MwaitLoop:
     cli
-    mov        eax, esp           ; Set Monitor Address
+    mov        rax, rsp           ; Set Monitor Address
     xor        ecx, ecx           ; ecx = 0
     xor        edx, edx           ; edx = 0
     monitor
-    mov        eax, ebx           ; Mwait Cx, Target C-State per eax[7:4]
+    mov        rax, rbx           ; Mwait Cx, Target C-State per eax[7:4]
     shl        eax, 4
     mwait
     jmp        MwaitLoop
 
 HltLoop:
-    pop        edx                ; PM16CodeSegment
-    add        esp, 4
-    pop        ebx                ; WakeupBuffer
-    add        esp, 4
-    pop        eax                ; SevEsAPJumpTable
-    add        esp, 4
-    cmp        eax, 0             ; Check for SEV-ES
-    je         DoHlt
-
-    cli
-    ;
-    ; SEV-ES is enabled, use VMGEXIT (GHCB information already
-    ; set by caller)
-    ;
-BITS 64
-    rep        vmmcall
-BITS 32
-
-    ;
-    ; Back from VMGEXIT AP_HLT_LOOP
-    ;   Push the FLAGS/CS/IP values to use
-    ;
-    push       word 0x0002        ; EFLAGS
-    xor        ecx, ecx
-    mov        cx, [eax + 2]      ; CS
-    push       cx
-    mov        cx, [eax]          ; IP
-    push       cx
-    push       word 0x0000        ; For alignment, will be discarded
-
-    push       edx
-    push       ebx
-
-    mov        edx, esi           ; Restore RDX reset value
-
-    retf
-
-DoHlt:
     cli
     hlt
-    jmp        DoHlt
+    jmp        HltLoop
 
-BITS 64
 AsmRelocateApLoopEnd:
 
 ;-------------------------------------------------------------------------------------
diff --git a/UefiCpuPkg/UefiCpuPkg.dsc b/UefiCpuPkg/UefiCpuPkg.dsc
index a7318d3fe9..105c2e9313 100644
--- a/UefiCpuPkg/UefiCpuPkg.dsc
+++ b/UefiCpuPkg/UefiCpuPkg.dsc
@@ -1,7 +1,7 @@
 ## @file
 #  UefiCpuPkg Package
 #
-#  Copyright (c) 2007 - 2022, Intel Corporation. All rights reserved.<BR>
+#  Copyright (c) 2007 - 2023, Intel Corporation. All rights reserved.<BR>
 #
 #  SPDX-License-Identifier: BSD-2-Clause-Patent
 #
@@ -94,6 +94,7 @@
   MemoryAllocationLib|MdePkg/Library/UefiMemoryAllocationLib/UefiMemoryAllocationLib.inf
   HobLib|MdePkg/Library/DxeHobLib/DxeHobLib.inf
   CpuExceptionHandlerLib|UefiCpuPkg/Library/CpuExceptionHandlerLib/DxeCpuExceptionHandlerLib.inf
+  CpuPageTableLib|UefiCpuPkg/Library/CpuPageTableLib/CpuPageTableLib.inf
   MpInitLib|UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf
   RegisterCpuFeaturesLib|UefiCpuPkg/Library/RegisterCpuFeaturesLib/DxeRegisterCpuFeaturesLib.inf
   CpuCacheInfoLib|UefiCpuPkg/Library/CpuCacheInfoLib/DxeCpuCacheInfoLib.inf
-- 
2.36.1.windows.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [edk2-devel] [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS.
  2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
                   ` (5 preceding siblings ...)
  2023-02-23 18:05 ` [Patch V3 6/6] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
@ 2023-02-24  0:26 ` Ni, Ray
  2023-02-24  3:10   ` Yuanhao Xie
  6 siblings, 1 reply; 12+ messages in thread
From: Ni, Ray @ 2023-02-24  0:26 UTC (permalink / raw)
  To: devel@edk2.groups.io, Xie, Yuanhao

Yuanhao,
What changes have been made in V3 compared with V2?

> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of
> Yuanhao Xie
> Sent: Friday, February 24, 2023 2:05 AM
> To: devel@edk2.groups.io
> Subject: [edk2-devel] [Patch V3 0/6] Put APs in 64 bit mode before handoff
> to OS.
> 
> The purpose of this patch series is to put the AP in 64-bit mode
> before handing off the boot process to the OS. To do this,
> duplicate relocateApLoop for processors with SEV-ES, allocate
> contiguous memory, then create page tables and keep AP in 64-bit
>  mode.
> 
> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4234
> 
> Yuanhao Xie (6):
>   UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm.
>   UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
>   UefiCpuPkg: Contiguous memory allocation and code clean-up.
>   OvmfPkg: Add CpuPageTableLib required by MpInitLib.
>   UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
>   UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
> 
>  OvmfPkg/AmdSev/AmdSevX64.dsc                        |   3 ++-
>  OvmfPkg/CloudHv/CloudHvX64.dsc                      |   3 ++-
>  OvmfPkg/IntelTdx/IntelTdxX64.dsc                    |   4 +++-
>  OvmfPkg/Microvm/MicrovmX64.dsc                      |   3 ++-
>  OvmfPkg/OvmfPkgIa32X64.dsc                          |   3 ++-
>  OvmfPkg/OvmfPkgX64.dsc                              |   4 +++-
>  OvmfPkg/OvmfXen.dsc                                 |   3 ++-
>  UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf       |   6 +++++-
>  UefiCpuPkg/Library/MpInitLib/DxeMpLib.c             | 161 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------
>  UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c |  23 +++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm      |  11 ++++-------
>  UefiCpuPkg/Library/MpInitLib/MpEqu.inc              |  22 ++++++++++++----------
>  UefiCpuPkg/Library/MpInitLib/MpLib.h                |  46 ++++++++++++++++++++++++++++++++++++++++++++--
>  UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm        | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c  |  82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm       | 178 ++++++++++++++++++++++++++++++++----------------------------------------------------------------------------------------------------------------------------------------------
>  UefiCpuPkg/UefiCpuPkg.dsc                           |   3 ++-
>  UefiPayloadPkg/UefiPayloadPkg.dsc                   |   3 ++-
>  18 files changed, 486 insertions(+), 241 deletions(-)
>  create mode 100644 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
>  create mode 100644 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
> 
> --
> 2.36.1.windows.1
> 
> 
> 
> 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [edk2-devel] [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS.
  2023-02-24  0:26 ` [edk2-devel] [Patch V3 0/6] " Ni, Ray
@ 2023-02-24  3:10   ` Yuanhao Xie
  0 siblings, 0 replies; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-24  3:10 UTC (permalink / raw)
  To: Ni, Ray; +Cc: devel@edk2.groups.io

Hi Ray,

The first patch of v2 has been split into two patches in v3.

Regards,
Yuanhao

-----Original Message-----
From: Ni, Ray <ray.ni@intel.com> 
Sent: Friday, February 24, 2023 8:27 AM
To: devel@edk2.groups.io; Xie, Yuanhao <yuanhao.xie@intel.com>
Subject: RE: [edk2-devel] [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS.

Yuanhao,
What changes have been made in V3 comparing against V2?

> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Yuanhao 
> Xie
> Sent: Friday, February 24, 2023 2:05 AM
> To: devel@edk2.groups.io
> Subject: [edk2-devel] [Patch V3 0/6] Put APs in 64 bit mode before 
> handoff to OS.
> 
> The purpose of this patch series is to put the AP in 64-bit mode 
> before handing off the boot process to the OS. To do this, duplicate 
> relocateApLoop for processors with SEV-ES, allocate contiguous memory, 
> then create page tables and keep AP in 64-bit  mode.
> 
> Ref: https://bugzilla.tianocore.org/show_bug.cgi?id=4234
> 
> Yuanhao Xie (6):
>   UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm.
>   UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
>   UefiCpuPkg: Contiguous memory allocation and code clean-up.
>   OvmfPkg: Add CpuPageTableLib required by MpInitLib.
>   UefiPayloadPkg: Add CpuPageTableLib required by MpInitLib.
>   UefiCpuPkg: Put APs in 64 bit mode before handoff to OS.
> 
>  OvmfPkg/AmdSev/AmdSevX64.dsc                        |   3 ++-
>  OvmfPkg/CloudHv/CloudHvX64.dsc                      |   3 ++-
>  OvmfPkg/IntelTdx/IntelTdxX64.dsc                    |   4 +++-
>  OvmfPkg/Microvm/MicrovmX64.dsc                      |   3 ++-
>  OvmfPkg/OvmfPkgIa32X64.dsc                          |   3 ++-
>  OvmfPkg/OvmfPkgX64.dsc                              |   4 +++-
>  OvmfPkg/OvmfXen.dsc                                 |   3 ++-
>  UefiCpuPkg/Library/MpInitLib/DxeMpInitLib.inf       |   6 +++++-
>  UefiCpuPkg/Library/MpInitLib/DxeMpLib.c             | 161 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------
>  UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c |  23 +++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/Ia32/MpFuncs.nasm      |  11 ++++-------
>  UefiCpuPkg/Library/MpInitLib/MpEqu.inc              |  22 ++++++++++++----------
>  UefiCpuPkg/Library/MpInitLib/MpLib.h                |  46 ++++++++++++++++++++++++++++++++++++++++++++--
>  UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm        | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c  |  82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm       | 178 ++++++++++++++++++++++++++++++++----------------------------------------------------------------------------------------------------------------------------------------------
>  UefiCpuPkg/UefiCpuPkg.dsc                           |   3 ++-
>  UefiPayloadPkg/UefiPayloadPkg.dsc                   |   3 ++-
>  18 files changed, 486 insertions(+), 241 deletions(-)  create mode 
> 100644 UefiCpuPkg/Library/MpInitLib/Ia32/CreatePageTable.c
>  create mode 100644 UefiCpuPkg/Library/MpInitLib/X64/CreatePageTable.c
> 
> --
> 2.36.1.windows.1
> 
> 
> 
> 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [edk2-devel] [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
  2023-02-23 18:05 ` [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd Yuanhao Xie
@ 2023-02-24  7:38   ` Gerd Hoffmann
  2023-02-24 10:32     ` Yuanhao Xie
  0 siblings, 1 reply; 12+ messages in thread
From: Gerd Hoffmann @ 2023-02-24  7:38 UTC (permalink / raw)
  To: devel, yuanhao.xie; +Cc: Guo Dong, Ray Ni, Sean Rhodes, James Lu, Gua Guo

On Fri, Feb 24, 2023 at 02:05:31AM +0800, Yuanhao Xie wrote:
> Duplicate AsmRelocateApLoopAmd for non-SEV-ES enabled processors.
> 
> Cc: Guo Dong <guo.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Sean Rhodes <sean@starlabs.systems>
> Cc: James Lu <james.lu@intel.com>
> Cc: Gua Guo <gua.guo@intel.com>
> Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
> Test-by: Yuanhao Xie <yuanhao.xie@intel.com>
> ---
>  UefiCpuPkg/Library/MpInitLib/DxeMpLib.c       |  68 ++++++++++++++++++++++++++++++++++++++++++++------------------------
>  UefiCpuPkg/Library/MpInitLib/MpEqu.inc        |  22 ++++++++++++----------
>  UefiCpuPkg/Library/MpInitLib/MpLib.h          |  31 +++++++++++++++++++++++++++++--
>  UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm  |  33 +++++++++++++++++----------------
>  UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 171 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 273 insertions(+), 52 deletions(-)
> 
> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> index a84e9e33ba..dd935a79d3 100644
> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> @@ -1,7 +1,7 @@
>  /** @file
>    MP initialize support functions for DXE phase.
>  
> -  Copyright (c) 2016 - 2020, Intel Corporation. All rights reserved.<BR>
> +  Copyright (c) 2016 - 2023, Intel Corporation. All rights reserved.<BR>
>    SPDX-License-Identifier: BSD-2-Clause-Patent
>  
>  **/
> @@ -378,32 +378,44 @@ RelocateApLoop (
>    IN OUT VOID  *Buffer
>    )
>  {
> -  CPU_MP_DATA           *CpuMpData;
> -  BOOLEAN               MwaitSupport;
> -  ASM_RELOCATE_AP_LOOP  AsmRelocateApLoopFunc;
> -  UINTN                 ProcessorNumber;
> -  UINTN                 StackStart;
> +  CPU_MP_DATA                  *CpuMpData;
> +  BOOLEAN                      MwaitSupport;
> +  ASM_RELOCATE_AP_LOOP         AsmRelocateApLoopFunc;
> +  ASM_RELOCATE_AP_LOOP_AMDSEV  AsmRelocateApLoopFuncAmdSev;
> +  UINTN                        ProcessorNumber;
> +  UINTN                        StackStart;
>  
>    MpInitLibWhoAmI (&ProcessorNumber);
>    CpuMpData    = GetCpuMpData ();
>    MwaitSupport = IsMwaitSupport ();
>    if (CpuMpData->UseSevEsAPMethod) {
> -    StackStart = CpuMpData->SevEsAPResetStackStart;
> +    StackStart                  = CpuMpData->SevEsAPResetStackStart;
> +    AsmRelocateApLoopFuncAmdSev = (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;

mReservedApLoopFuncAmdSev ?

> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> index c1e8a045a4..6b48913306 100644
> --- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> @@ -347,12 +347,13 @@ PM16Mode:
>  
>  SwitchToRealProcEnd:
>  ;-------------------------------------------------------------------------------------
> -;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> +;  AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
>  ;-------------------------------------------------------------------------------------
> -AsmRelocateApLoopStart:
> +
> +AsmRelocateApLoopStartAmdSev:

I'd suggest doing the rename in patch #1 too.

> +;-------------------------------------------------------------------------------------
> +;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, SevEsAPJumpTable, WakeupBuffer);
> +;-------------------------------------------------------------------------------------
> +AsmRelocateApLoopStart:
> +BITS 64
> +    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
> +    je         NoSevEs

Now you are adding back the AmdSev version.
It should be the generic version though.

If you want to add the generic version later in the patch series
(when changing the function prototype to drop SEV support and add paging
support), you can temporarily call AsmRelocateApLoopStartAmdSev in the
generic code path too.

take care,
  Gerd


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [edk2-devel] [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
  2023-02-24  7:38   ` [edk2-devel] " Gerd Hoffmann
@ 2023-02-24 10:32     ` Yuanhao Xie
  2023-02-24 12:00       ` Gerd Hoffmann
  0 siblings, 1 reply; 12+ messages in thread
From: Yuanhao Xie @ 2023-02-24 10:32 UTC (permalink / raw)
  To: Gerd Hoffmann, devel@edk2.groups.io
  Cc: Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James, Guo, Gua


Hi Gerd,

-Now you are adding back the AmdSev version.
-It should be the generic version though.
The duplication is intentional: I want to build up the desired functionality in small steps. The generic version is updated in patch 3 and ready in patch 6.

Calling AsmRelocateApLoopStartAmdSev would bring more confusion.

Thanks
Regards,
Yuanhao
-----Original Message-----
From: Gerd Hoffmann <kraxel@redhat.com> 
Sent: Friday, February 24, 2023 3:38 PM
To: devel@edk2.groups.io; Xie, Yuanhao <yuanhao.xie@intel.com>
Cc: Dong, Guo <guo.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Rhodes, Sean <sean@starlabs.systems>; Lu, James <james.lu@intel.com>; Guo, Gua <gua.guo@intel.com>
Subject: Re: [edk2-devel] [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.

On Fri, Feb 24, 2023 at 02:05:31AM +0800, Yuanhao Xie wrote:
> Duplicate AsmRelocateApLoopAmd for non-SEV-ES enabled processors.
> 
> Cc: Guo Dong <guo.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Sean Rhodes <sean@starlabs.systems>
> Cc: James Lu <james.lu@intel.com>
> Cc: Gua Guo <gua.guo@intel.com>
> Signed-off-by: Yuanhao Xie <yuanhao.xie@intel.com>
> Test-by: Yuanhao Xie <yuanhao.xie@intel.com>
> ---
>  UefiCpuPkg/Library/MpInitLib/DxeMpLib.c       |  68 ++++++++++++++++++++++++++++++++++++++++++++------------------------
>  UefiCpuPkg/Library/MpInitLib/MpEqu.inc        |  22 ++++++++++++----------
>  UefiCpuPkg/Library/MpInitLib/MpLib.h          |  31 +++++++++++++++++++++++++++++--
>  UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm  |  33 
> +++++++++++++++++----------------  
> UefiCpuPkg/Library/MpInitLib/X64/MpFuncs.nasm | 171 
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> +++++++++++++++++++++++++++++++
>  5 files changed, 273 insertions(+), 52 deletions(-)
> 
> diff --git a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c 
> b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> index a84e9e33ba..dd935a79d3 100644
> --- a/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> +++ b/UefiCpuPkg/Library/MpInitLib/DxeMpLib.c
> @@ -1,7 +1,7 @@
>  /** @file
>    MP initialize support functions for DXE phase.
>  
> -  Copyright (c) 2016 - 2020, Intel Corporation. All rights 
> reserved.<BR>
> +  Copyright (c) 2016 - 2023, Intel Corporation. All rights 
> + reserved.<BR>
>    SPDX-License-Identifier: BSD-2-Clause-Patent
>  
>  **/
> @@ -378,32 +378,44 @@ RelocateApLoop (
>    IN OUT VOID  *Buffer
>    )
>  {
> -  CPU_MP_DATA           *CpuMpData;
> -  BOOLEAN               MwaitSupport;
> -  ASM_RELOCATE_AP_LOOP  AsmRelocateApLoopFunc;
> -  UINTN                 ProcessorNumber;
> -  UINTN                 StackStart;
> +  CPU_MP_DATA                  *CpuMpData;
> +  BOOLEAN                      MwaitSupport;
> +  ASM_RELOCATE_AP_LOOP         AsmRelocateApLoopFunc;
> +  ASM_RELOCATE_AP_LOOP_AMDSEV  AsmRelocateApLoopFuncAmdSev;
> +  UINTN                        ProcessorNumber;
> +  UINTN                        StackStart;
>  
>    MpInitLibWhoAmI (&ProcessorNumber);
>    CpuMpData    = GetCpuMpData ();
>    MwaitSupport = IsMwaitSupport ();
>    if (CpuMpData->UseSevEsAPMethod) {
> -    StackStart = CpuMpData->SevEsAPResetStackStart;
> +    StackStart                  = CpuMpData->SevEsAPResetStackStart;
> +    AsmRelocateApLoopFuncAmdSev = 
> + (ASM_RELOCATE_AP_LOOP)(UINTN)mReservedApLoopFunc;

mReservedApLoopFuncAmdSev ?

> diff --git a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm 
> b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> index c1e8a045a4..6b48913306 100644
> --- a/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> +++ b/UefiCpuPkg/Library/MpInitLib/X64/AmdSev.nasm
> @@ -347,12 +347,13 @@ PM16Mode:
>  
>  SwitchToRealProcEnd:
>  
> ;---------------------------------------------------------------------
> ---------------- -;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, 
> PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, 
> SevEsAPJumpTable, WakeupBuffer);
> +;  AsmRelocateApLoopAmdSev (MwaitSupport, ApTargetCState, 
> +PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, 
> +SevEsAPJumpTable, WakeupBuffer);
>  
> ;---------------------------------------------------------------------
> ----------------
> -AsmRelocateApLoopStart:
> +
> +AsmRelocateApLoopStartAmdSev:

I'd suggest to do the rename in patch #1 too.

> +;--------------------------------------------------------------------
> +----------------- ;  AsmRelocateApLoop (MwaitSupport, ApTargetCState, 
> +PmCodeSegment, TopOfApStack, CountTofinish, Pm16CodeSegment, 
> +SevEsAPJumpTable, WakeupBuffer);
> +;--------------------------------------------------------------------
> +-----------------
> +AsmRelocateApLoopStart:
> +BITS 64
> +    cmp        qword [rsp + 56], 0  ; SevEsAPJumpTable
> +    je         NoSevEs

Now you are adding back the AmdSev version.
It should be the generic version though.

If you want add the generic version later in the in the patch series (when changing the function prototype to drop sev support and add paging
support) you can temporary call AsmRelocateApLoopStartAmdSev in the generic code path too.

take care,
  Gerd


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [edk2-devel] [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd.
  2023-02-24 10:32     ` Yuanhao Xie
@ 2023-02-24 12:00       ` Gerd Hoffmann
  0 siblings, 0 replies; 12+ messages in thread
From: Gerd Hoffmann @ 2023-02-24 12:00 UTC (permalink / raw)
  To: Xie, Yuanhao
  Cc: devel@edk2.groups.io, Dong, Guo, Ni, Ray, Rhodes, Sean, Lu, James,
	Guo, Gua

On Fri, Feb 24, 2023 at 10:32:26AM +0000, Xie, Yuanhao wrote:
> 
> Hi Gerd,
> 
> -Now you are adding back the AmdSev version.
> -It should be the generic version though.
> Duplication is as I want to build up the desired functionality in small steps, generic version is updated in patch3 and ready in patch 6.
> 
> Call AsmRelocateApLoopStartAmdSev brings more confusion.

But now you are adding a duplicate of the AsmRelocateApLoopStartAmdSev
function, only to modify it later, which is confusing too ...

Maybe the best is this:

 * Leave asm code unmodified until the generic version is updated.
 * The patch updating the generic version (#6 in this version)
   adds the new AP loop as 'AsmRelocateApLoopStartGeneric'.
 * Finally rename AsmRelocateApLoopStart -> AsmRelocateApLoopStartAmdSev
   and move to AmdSev.nasm

This avoids duplicating the AsmRelocateApLoopStartAmdSev code, making
the patches more readable, and it also shouldn't be confusing due to
(temporarily) using AsmRelocateApLoopStartAmdSev in the generic code path.

take care,
  Gerd


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-02-24 12:00 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-23 18:05 [Patch V3 0/6] Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-23 18:05 ` [Patch V3 1/6] UefiCpuPkg: Move AsmRelocateApLoop to AmdSev.nasm Yuanhao Xie
2023-02-23 18:05 ` [Patch V3 2/6] UefiCpuPkg: Duplicate AsmRelocateApLoopAmd Yuanhao Xie
2023-02-24  7:38   ` [edk2-devel] " Gerd Hoffmann
2023-02-24 10:32     ` Yuanhao Xie
2023-02-24 12:00       ` Gerd Hoffmann
2023-02-23 18:05 ` [Patch V3 3/6] UefiCpuPkg: Contiguous memory allocation and code clean-up Yuanhao Xie
2023-02-23 18:05 ` [Patch V3 4/6] OvmfPkg: Add CpuPageTableLib required by MpInitLib Yuanhao Xie
2023-02-23 18:05 ` [Patch V3 5/6] UefiPayloadPkg: " Yuanhao Xie
2023-02-23 18:05 ` [Patch V3 6/6] UefiCpuPkg: Put APs in 64 bit mode before handoff to OS Yuanhao Xie
2023-02-24  0:26 ` [edk2-devel] [Patch V3 0/6] " Ni, Ray
2023-02-24  3:10   ` Yuanhao Xie

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox