* [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-09 10:22 [PATCH v1 0/3] Target to enable paging from temporary RAM Done Wu, Jiaxin
@ 2023-05-09 10:22 ` Wu, Jiaxin
2023-05-09 14:39 ` [edk2-devel] " Gerd Hoffmann
2023-05-10 7:50 ` Ni, Ray
2023-05-09 10:22 ` [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set Wu, Jiaxin
` (2 subsequent siblings)
3 siblings, 2 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-09 10:22 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann, Rahul Kumar
Background:
For arch X64, the system enables the page table in SPI flash to cover the 0-512GB range
via the CR4.PAE & MSR.LME & CR0.PG & CR3 settings (see the ResetVector code). The existing
code does not map addresses above 512GB before the memory-discovered
callback. That is a potential problem if the system accesses an address above 512GB
after the transition from temporary RAM to permanent memory.
Solution:
This patch migrates the page table to permanent memory, mapping the entire physical
address space, if CR0.PG is set during Temporary RAM Done.
Change-Id: I29bdb078ef567ed9455d1328cb007f4f60a617a2
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/SecCore/SecCore.inf | 1 +
UefiCpuPkg/SecCore/SecCoreNative.inf | 1 +
UefiCpuPkg/SecCore/SecMain.c | 164 +++++++++++++++++++++++++++++++++++
UefiCpuPkg/SecCore/SecMain.h | 4 +
4 files changed, 170 insertions(+)
diff --git a/UefiCpuPkg/SecCore/SecCore.inf b/UefiCpuPkg/SecCore/SecCore.inf
index 3758aded3b..cab69b8b97 100644
--- a/UefiCpuPkg/SecCore/SecCore.inf
+++ b/UefiCpuPkg/SecCore/SecCore.inf
@@ -53,10 +53,11 @@
CpuExceptionHandlerLib
ReportStatusCodeLib
PeiServicesLib
PeiServicesTablePointerLib
HobLib
+ CpuPageTableLib
[Ppis]
## SOMETIMES_CONSUMES
## PRODUCES
gEfiSecPlatformInformationPpiGuid
diff --git a/UefiCpuPkg/SecCore/SecCoreNative.inf b/UefiCpuPkg/SecCore/SecCoreNative.inf
index 1ee6ff7d88..fa241cca94 100644
--- a/UefiCpuPkg/SecCore/SecCoreNative.inf
+++ b/UefiCpuPkg/SecCore/SecCoreNative.inf
@@ -50,10 +50,11 @@
CpuExceptionHandlerLib
ReportStatusCodeLib
PeiServicesLib
PeiServicesTablePointerLib
HobLib
+ CpuPageTableLib
[Ppis]
## SOMETIMES_CONSUMES
## PRODUCES
gEfiSecPlatformInformationPpiGuid
diff --git a/UefiCpuPkg/SecCore/SecMain.c b/UefiCpuPkg/SecCore/SecMain.c
index 95375850ec..d0dc76e804 100644
--- a/UefiCpuPkg/SecCore/SecMain.c
+++ b/UefiCpuPkg/SecCore/SecMain.c
@@ -70,10 +70,159 @@ MigrateGdt (
AsmWriteGdtr (&Gdtr);
return EFI_SUCCESS;
}
+/**
+ Migrate page table to permanent memory mapping entire physical address space.
+
+ @retval EFI_SUCCESS The PageTable was migrated successfully.
+ @retval EFI_UNSUPPORTED Migrating the page table to permanent memory is not supported because IA-32e mode is not active.
+ @retval EFI_OUT_OF_RESOURCES The PageTable could not be migrated due to lack of available memory.
+
+**/
+EFI_STATUS
+MigratePageTable (
+ VOID
+ )
+{
+ EFI_STATUS Status;
+ IA32_CR4 Cr4;
+ BOOLEAN Page5LevelSupport;
+ UINT32 RegEax;
+ UINT32 RegEdx;
+ BOOLEAN Page1GSupport;
+ PAGING_MODE PagingMode;
+
+ CPUID_VIR_PHY_ADDRESS_SIZE_EAX VirPhyAddressSize;
+ UINT32 MaxExtendedFunctionId;
+
+ UINTN PageTable;
+ VOID *Buffer;
+ UINTN BufferSize;
+ IA32_MAP_ATTRIBUTE MapAttribute;
+ IA32_MAP_ATTRIBUTE MapMask;
+
+ VirPhyAddressSize.Uint32 = 0;
+ PageTable = 0;
+ Buffer = NULL;
+ BufferSize = 0;
+ MapAttribute.Uint64 = 0;
+ MapMask.Uint64 = MAX_UINT64;
+ MapAttribute.Bits.Present = 1;
+ MapAttribute.Bits.ReadWrite = 1;
+
+ //
+ // Check whether the CPU runs in 64-bit or 32-bit mode.
+ //
+ if (sizeof (UINTN) == sizeof (UINT32)) {
+ DEBUG ((DEBUG_WARN, "MigratePageTable: CPU runs in 32-bit mode; migrating the page table to permanent memory is not supported.\n"));
+ ASSERT (FALSE);
+ return EFI_UNSUPPORTED;
+ }
+
+ //
+ // Check whether 5-level paging (CR4.LA57) is enabled.
+ //
+ Cr4.UintN = AsmReadCr4 ();
+ Page5LevelSupport = (Cr4.Bits.LA57 ? TRUE : FALSE);
+
+ //
+ // Check whether 1GB pages are supported.
+ //
+ Page1GSupport = FALSE;
+ AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
+ if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
+ AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
+ if ((RegEdx & BIT26) != 0) {
+ Page1GSupport = TRUE;
+ }
+ }
+
+ //
+ // Decide the paging mode according to Page5LevelSupport & Page1GSupport.
+ //
+ PagingMode = Paging5Level1GB;
+ if (Page5LevelSupport && !Page1GSupport) {
+ PagingMode = Paging5Level;
+ } else if (!Page5LevelSupport && Page1GSupport) {
+ PagingMode = Paging4Level1GB;
+ } else if (!Page5LevelSupport && !Page1GSupport) {
+ PagingMode = Paging4Level;
+ }
+
+ DEBUG ((DEBUG_INFO, "MigratePageTable: PagingMode = 0x%lx\n", (UINTN) PagingMode));
+
+ //
+ // Get the maximum physical address bits.
+ // The maximum physical address is 2^PhysicalAddressBits - 1.
+ // If the CPUID leaf is not supported, use the maximum value of 36 per SDM Vol. 3A, Section 4.1.4.
+ //
+ AsmCpuid (CPUID_EXTENDED_FUNCTION, &MaxExtendedFunctionId, NULL, NULL, NULL);
+ if (MaxExtendedFunctionId >= CPUID_VIR_PHY_ADDRESS_SIZE) {
+ AsmCpuid (CPUID_VIR_PHY_ADDRESS_SIZE, &VirPhyAddressSize.Uint32, NULL, NULL, NULL);
+ } else {
+ VirPhyAddressSize.Bits.PhysicalAddressBits = 36;
+ }
+
+ if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
+ //
+ // The maximum linear address width is 48 bits for a 4-level page table.
+ //
+ VirPhyAddressSize.Bits.PhysicalAddressBits = MIN (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
+ }
+
+ DEBUG ((DEBUG_INFO, "MigratePageTable: Maximum Physical Address Bits = %d\n", VirPhyAddressSize.Bits.PhysicalAddressBits));
+
+ //
+ // Get required buffer size for the pagetable that will be created.
+ //
+ Status = PageTableMap (&PageTable, PagingMode, 0, &BufferSize, 0, LShiftU64 (1, VirPhyAddressSize.Bits.PhysicalAddressBits), &MapAttribute, &MapMask, NULL);
+ ASSERT (Status == EFI_BUFFER_TOO_SMALL);
+ if (Status != EFI_BUFFER_TOO_SMALL) {
+ DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to get PageTable required BufferSize, Status = %r\n", Status));
+ return Status;
+ }
+
+ DEBUG ((DEBUG_INFO, "MigratePageTable: Get PageTable required BufferSize = %x\n", BufferSize));
+
+ //
+ // Allocate required Buffer.
+ //
+ Status = PeiServicesAllocatePages (
+ EfiBootServicesData,
+ EFI_SIZE_TO_PAGES (BufferSize),
+ (EFI_PHYSICAL_ADDRESS *) &Buffer
+ );
+ ASSERT (Buffer != NULL);
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to allocate PageTable required BufferSize, Status = %r\n", Status));
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ //
+ // Create PageTable in permanent memory.
+ //
+ Status = PageTableMap (&PageTable, PagingMode, Buffer, &BufferSize, 0, LShiftU64 (1, VirPhyAddressSize.Bits.PhysicalAddressBits), &MapAttribute, &MapMask, NULL);
+ ASSERT (!EFI_ERROR (Status) && PageTable != 0);
+ if (EFI_ERROR (Status) || PageTable == 0) {
+ DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to create PageTable, Status = %r, PageTable = 0x%lx\n", Status, PageTable));
+ return EFI_OUT_OF_RESOURCES;
+ }
+
+ DEBUG ((DEBUG_INFO, "MigratePageTable: Create PageTable = 0x%lx\n", PageTable));
+
+ //
+ // Write the Pagetable to CR3.
+ //
+ AsmWriteCr3 (PageTable);
+
+ DEBUG ((DEBUG_INFO, "MigratePageTable: Wrote the page table to CR3 successfully.\n"));
+
+ return Status;
+}
+
//
// These are IDT entries pointing to 10:FFFFFFE4h.
//
UINT64 mIdtEntryTemplate = 0xffff8e000010ffe4ULL;
@@ -492,10 +641,25 @@ SecTemporaryRamDone (
if (PcdGetBool (PcdMigrateTemporaryRamFirmwareVolumes)) {
Status = MigrateGdt ();
ASSERT_EFI_ERROR (Status);
}
+ //
+ // Migrate the page table to permanent memory, mapping the entire physical address space, if CR0.PG is set.
+ //
+ if ((AsmReadCr0 () & BIT31) != 0) {
+ //
+ // CR4.PAE must be enabled.
+ //
+ ASSERT ((AsmReadCr4 () & BIT5) != 0);
+ Status = MigratePageTable ();
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_WARN, "SecTemporaryRamDone: Failed to migrate page table to permanent memory: %r.\n", Status));
+ ASSERT_EFI_ERROR (Status);
+ }
+ }
+
//
// Disable Temporary RAM after Stack and Heap have been migrated at this point.
//
SecPlatformDisableTemporaryMemory ();
diff --git a/UefiCpuPkg/SecCore/SecMain.h b/UefiCpuPkg/SecCore/SecMain.h
index 880e6cd1b8..b50d96e45b 100644
--- a/UefiCpuPkg/SecCore/SecMain.h
+++ b/UefiCpuPkg/SecCore/SecMain.h
@@ -17,10 +17,11 @@
#include <Ppi/PeiCoreFvLocation.h>
#include <Ppi/RepublishSecPpi.h>
#include <Guid/FirmwarePerformance.h>
+#include <Library/BaseLib.h>
#include <Library/DebugLib.h>
#include <Library/PcdLib.h>
#include <Library/BaseMemoryLib.h>
#include <Library/PlatformSecLib.h>
#include <Library/CpuLib.h>
@@ -30,10 +31,13 @@
#include <Library/CpuExceptionHandlerLib.h>
#include <Library/ReportStatusCodeLib.h>
#include <Library/PeiServicesTablePointerLib.h>
#include <Library/HobLib.h>
#include <Library/PeiServicesLib.h>
+#include <Library/CpuPageTableLib.h>
+#include <Register/Intel/Cpuid.h>
+#include <Register/Intel/Msr.h>
#define SEC_IDT_ENTRY_COUNT 34
typedef struct _SEC_IDT_TABLE {
//
--
2.16.2.windows.1
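The patch's address-width detection and clamping (CPUID leaf 0x80000008 with the 36-bit fallback, clamped to 48 when only 4-level paging is used) can be sketched in plain C; the helper name and its inputs are illustrative, not part of the patch:

```c
#include <assert.h>
#include <stdbool.h>

// Sketch of the address-width clamping done in MigratePageTable():
// take the CPUID-reported physical address bits (or the 36-bit SDM
// fallback when the CPUID leaf is absent) and clamp to the 48 linear
// address bits that a 4-level page table can translate.
int max_mapped_bits(int cpuid_phys_bits, bool leaf_supported, bool four_level_paging) {
    int bits = leaf_supported ? cpuid_phys_bits : 36;  // SDM Vol. 3A, 4.1.4 fallback
    if (four_level_paging && bits > 48) {
        bits = 48;  // 4-level paging translates 48-bit linear addresses
    }
    return bits;
}
```

The clamp matters because the mapped range passed to PageTableMap() is `1 << bits`, so an unclamped 52-bit value would ask a 4-level table to map more than it can address.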
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-09 10:22 ` [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory Wu, Jiaxin
@ 2023-05-09 14:39 ` Gerd Hoffmann
2023-05-10 2:00 ` Wu, Jiaxin
2023-05-10 2:48 ` Ni, Ray
2023-05-10 7:50 ` Ni, Ray
1 sibling, 2 replies; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-09 14:39 UTC (permalink / raw)
To: devel, jiaxin.wu; +Cc: Eric Dong, Ray Ni, Zeng Star, Rahul Kumar
Hi,
> + if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
> + //
> + // The max lineaddress bits is 48 for 4 level page table.
> + //
> + VirPhyAddressSize.Bits.PhysicalAddressBits = MIN (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
> + }
virtual addresses in long mode are sign-extended. Which means you have
only 47 bits (or 56 bits with 5-level paging) for identity mappings.
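Gerd's constraint can be sketched as a small check (helper name illustrative): an address is canonical when bits 63 down through the most-significant implemented linear-address bit are all equal, so only the lower half of the space can be identity-mapped.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// An address is canonical if bits [63 : la_bits-1] are all zeros or all
// ones, i.e. sign-extending the low la_bits reproduces the full address.
// la_bits is 48 for 4-level paging, 57 for 5-level paging.
bool is_canonical(uint64_t addr, int la_bits) {
    int64_t sign_extended = (int64_t)(addr << (64 - la_bits)) >> (64 - la_bits);
    return (uint64_t)sign_extended == addr;
}
```

With 4-level paging, `is_canonical(1ULL << 47, 48)` is false, which is why an identity map can only safely cover addresses below 2^47.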
take care,
Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-09 14:39 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 2:00 ` Wu, Jiaxin
2023-05-10 2:44 ` Ni, Ray
2023-05-10 2:48 ` Ni, Ray
1 sibling, 1 reply; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-10 2:00 UTC (permalink / raw)
To: Gerd Hoffmann, devel@edk2.groups.io
Cc: Dong, Eric, Ni, Ray, Zeng, Star, Kumar, Rahul R
Hi Gerd,
Could you point me to the document that describes how sign extension limits the linear address width?
Thanks,
Jiaxin
> -----Original Message-----
> From: Gerd Hoffmann <kraxel@redhat.com>
> Sent: Tuesday, May 9, 2023 10:39 PM
> To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> Hi,
>
> > + if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
> > + //
> > + // The max lineaddress bits is 48 for 4 level page table.
> > + //
> > + VirPhyAddressSize.Bits.PhysicalAddressBits = MIN
> (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
> > + }
>
> virtual addresses in long mode are sign-extended. Which means you have
> only 47 bits (or 56 bits with 5-level paging) for identity mappings.
>
> take care,
> Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-10 2:00 ` Wu, Jiaxin
@ 2023-05-10 2:44 ` Ni, Ray
0 siblings, 0 replies; 23+ messages in thread
From: Ni, Ray @ 2023-05-10 2:44 UTC (permalink / raw)
To: Wu, Jiaxin, Gerd Hoffmann, devel@edk2.groups.io
Cc: Dong, Eric, Zeng, Star, Kumar, Rahul R
Jiaxin,
SDM has following:
> 3.3.7.1 Canonical Addressing
> In 64-bit mode, an address is considered to be in canonical form if **address bits 63 through to the most-significant implemented bit by the microarchitecture are set to either all ones or all zeros**.
> Intel 64 architecture defines a 64-bit linear address. Implementations can support less. The first implementation of IA-32 processors with Intel 64 architecture supports a 48-bit linear address. This means a canonical address must have bits 63 through 48 set to zeros or ones (depending on whether bit 47 is a zero or one).
> Although implementations may not use all 64 bits of the linear address, they should check bits 63 through the most-significant implemented bit to see if the address is in canonical form. If a linear-memory reference is not in canonical form, the implementation should generate an exception. In most cases, a general-protection exception (#GP) is generated.
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Wednesday, May 10, 2023 10:00 AM
> To: Gerd Hoffmann <kraxel@redhat.com>; devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: RE: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> Hi Gerd,
>
> Could you share me which document introduce the sign-extended impact the
> line address width?
>
> Thanks,
> Jiaxin
>
> > -----Original Message-----
> > From: Gerd Hoffmann <kraxel@redhat.com>
> > Sent: Tuesday, May 9, 2023 10:39 PM
> > To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> > Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> > <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> > Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> > table to permanent memory
> >
> > Hi,
> >
> > > + if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
> > > + //
> > > + // The max lineaddress bits is 48 for 4 level page table.
> > > + //
> > > + VirPhyAddressSize.Bits.PhysicalAddressBits = MIN
> > (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
> > > + }
> >
> > virtual addresses in long mode are sign-extended. Which means you have
> > only 47 bits (or 56 bits with 5-level paging) for identity mappings.
> >
> > take care,
> > Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-09 14:39 ` [edk2-devel] " Gerd Hoffmann
2023-05-10 2:00 ` Wu, Jiaxin
@ 2023-05-10 2:48 ` Ni, Ray
2023-05-10 7:48 ` Gerd Hoffmann
1 sibling, 1 reply; 23+ messages in thread
From: Ni, Ray @ 2023-05-10 2:48 UTC (permalink / raw)
To: devel@edk2.groups.io, kraxel@redhat.com, Wu, Jiaxin
Cc: Dong, Eric, Zeng, Star, Kumar, Rahul R
Gerd,
My understanding is that when code dereferences memory address, the code itself is responsible for
supplying the sign-extended linear address.
The page table creation logic still maps the entire linear memory space supported by the CPU.
Why do you think covering the half of the space is better?
Thanks,
Ray
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Gerd
> Hoffmann
> Sent: Tuesday, May 9, 2023 10:39 PM
> To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> Hi,
>
> > + if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
> > + //
> > + // The max lineaddress bits is 48 for 4 level page table.
> > + //
> > + VirPhyAddressSize.Bits.PhysicalAddressBits = MIN
> (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
> > + }
>
> virtual addresses in long mode are sign-extended. Which means you have
> only 47 bits (or 56 bits with 5-level paging) for identity mappings.
>
> take care,
> Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-10 2:48 ` Ni, Ray
@ 2023-05-10 7:48 ` Gerd Hoffmann
2023-05-11 5:08 ` Wu, Jiaxin
2023-05-11 5:36 ` Ni, Ray
0 siblings, 2 replies; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-10 7:48 UTC (permalink / raw)
To: devel, ray.ni; +Cc: Wu, Jiaxin, Dong, Eric, Zeng, Star, Kumar, Rahul R
On Wed, May 10, 2023 at 02:48:52AM +0000, Ni, Ray wrote:
> Gerd,
> My understanding is that when code dereferences memory address, the code itself is responsible for
> supplying the sign-extended linear address.
> The page table creation logic still maps the entire linear memory space supported by the CPU.
>
> Why do you think covering the half of the space is better?
edk2 boot services operate on the assumption that everything is identity
mapped, only runtime services know the concept of virtual addresses.
The lower half of the address space can be identity-mapped (virtual
address == physical address). The upper half can not, so I think it's
better for efi boot services to restrict themself to the lower half.
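Gerd's suggestion amounts to clamping the identity-mapped range to the lower canonical half; a minimal sketch (helper name illustrative, assuming 48/57 implemented linear-address bits for 4-/5-level paging):

```c
#include <assert.h>
#include <stdbool.h>

// Largest address width that can be identity-mapped: the smaller of the
// physical address width and the lower canonical half of the linear
// address space (47 bits for 4-level paging, 56 bits for 5-level).
int identity_map_bits(int phys_bits, bool five_level_paging) {
    int lower_half_bits = (five_level_paging ? 57 : 48) - 1;
    return phys_bits < lower_half_bits ? phys_bits : lower_half_bits;
}
```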
take care,
Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-10 7:48 ` Gerd Hoffmann
@ 2023-05-11 5:08 ` Wu, Jiaxin
2023-05-11 7:47 ` Ni, Ray
2023-05-11 5:36 ` Ni, Ray
1 sibling, 1 reply; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-11 5:08 UTC (permalink / raw)
To: devel@edk2.groups.io, kraxel@redhat.com, Ni, Ray
Cc: Dong, Eric, Zeng, Star, Kumar, Rahul R
What are your comments on the existing PhysicalAddressBits logic in CreateIdentityMappingPageTables()? It doesn't seem to consider the sign-extension case either. Is it reasonable to create page-table entries that are never used? Are all systems in long mode sign-extended?
//
// IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
// when 5-Level Paging is disabled,
// due to either unsupported by HW, or disabled by PCD.
//
ASSERT (PhysicalAddressBits <= 52);
if (!Page5LevelSupport && (PhysicalAddressBits > 48)) {
PhysicalAddressBits = 48;
}
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Gerd
> Hoffmann
> Sent: Wednesday, May 10, 2023 3:48 PM
> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com>
> Cc: Wu, Jiaxin <jiaxin.wu@intel.com>; Dong, Eric <eric.dong@intel.com>; Zeng,
> Star <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> On Wed, May 10, 2023 at 02:48:52AM +0000, Ni, Ray wrote:
> > Gerd,
> > My understanding is that when code dereferences memory address, the code
> itself is responsible for
> > supplying the sign-extended linear address.
> > The page table creation logic still maps the entire linear memory space
> supported by the CPU.
> >
> > Why do you think covering the half of the space is better?
>
> edk2 boot services operate on the assumption that everything is identity
> mapped, only runtime services know the concept of virtual addresses.
>
> The lower half of the address space can be identity-mapped (virtual
> address == physical address). The upper half can not, so I think it's
> better for efi boot services to restrict themself to the lower half.
>
> take care,
> Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-11 5:08 ` Wu, Jiaxin
@ 2023-05-11 7:47 ` Ni, Ray
2023-05-12 2:19 ` Wu, Jiaxin
0 siblings, 1 reply; 23+ messages in thread
From: Ni, Ray @ 2023-05-11 7:47 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io, kraxel@redhat.com
Cc: Dong, Eric, Zeng, Star, Kumar, Rahul R
Jiaxin,
Let's keep using 48 or 57.
We can use separate patch to clean all existing code to use 47 and 56.
Thanks,
Ray
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Thursday, May 11, 2023 1:08 PM
> To: devel@edk2.groups.io; kraxel@redhat.com; Ni, Ray <ray.ni@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Zeng, Star <star.zeng@intel.com>;
> Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: RE: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> What's your comments to the existing code logic for the PhysicalAddressBits in
> the CreateIdentityMappingPageTables()? Looks all doesn't consider the sign-
> extended case? is it reasonable create the paging but not used? All system with
> long mode are sign-extended?
>
> //
> // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
> // when 5-Level Paging is disabled,
> // due to either unsupported by HW, or disabled by PCD.
> //
> ASSERT (PhysicalAddressBits <= 52);
> if (!Page5LevelSupport && (PhysicalAddressBits > 48)) {
> PhysicalAddressBits = 48;
> }
>
>
> > -----Original Message-----
> > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Gerd
> > Hoffmann
> > Sent: Wednesday, May 10, 2023 3:48 PM
> > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com>
> > Cc: Wu, Jiaxin <jiaxin.wu@intel.com>; Dong, Eric <eric.dong@intel.com>;
> Zeng,
> > Star <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> > Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> > table to permanent memory
> >
> > On Wed, May 10, 2023 at 02:48:52AM +0000, Ni, Ray wrote:
> > > Gerd,
> > > My understanding is that when code dereferences memory address, the code
> > itself is responsible for
> > > supplying the sign-extended linear address.
> > > The page table creation logic still maps the entire linear memory space
> > supported by the CPU.
> > >
> > > Why do you think covering the half of the space is better?
> >
> > edk2 boot services operate on the assumption that everything is identity
> > mapped, only runtime services know the concept of virtual addresses.
> >
> > The lower half of the address space can be identity-mapped (virtual
> > address == physical address). The upper half can not, so I think it's
> > better for efi boot services to restrict themself to the lower half.
> >
> > take care,
> > Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-11 7:47 ` Ni, Ray
@ 2023-05-12 2:19 ` Wu, Jiaxin
0 siblings, 0 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-12 2:19 UTC (permalink / raw)
To: Ni, Ray, devel@edk2.groups.io, kraxel@redhat.com
Cc: Dong, Eric, Zeng, Star, Kumar, Rahul R
Agree, thanks for the comments.
> -----Original Message-----
> From: Ni, Ray <ray.ni@intel.com>
> Sent: Thursday, May 11, 2023 3:48 PM
> To: Wu, Jiaxin <jiaxin.wu@intel.com>; devel@edk2.groups.io;
> kraxel@redhat.com
> Cc: Dong, Eric <eric.dong@intel.com>; Zeng, Star <star.zeng@intel.com>;
> Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: RE: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> Jiaxin,
> Let's keep using 48 or 57.
> We can use separate patch to clean all existing code to use 47 and 56.
>
> Thanks,
> Ray
>
> > -----Original Message-----
> > From: Wu, Jiaxin <jiaxin.wu@intel.com>
> > Sent: Thursday, May 11, 2023 1:08 PM
> > To: devel@edk2.groups.io; kraxel@redhat.com; Ni, Ray <ray.ni@intel.com>
> > Cc: Dong, Eric <eric.dong@intel.com>; Zeng, Star <star.zeng@intel.com>;
> > Kumar, Rahul R <rahul.r.kumar@intel.com>
> > Subject: RE: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> > table to permanent memory
> >
> > What's your comments to the existing code logic for the PhysicalAddressBits
> in
> > the CreateIdentityMappingPageTables()? Looks all doesn't consider the sign-
> > extended case? is it reasonable create the paging but not used? All system
> with
> > long mode are sign-extended?
> >
> > //
> > // IA-32e paging translates 48-bit linear addresses to 52-bit physical
> addresses
> > // when 5-Level Paging is disabled,
> > // due to either unsupported by HW, or disabled by PCD.
> > //
> > ASSERT (PhysicalAddressBits <= 52);
> > if (!Page5LevelSupport && (PhysicalAddressBits > 48)) {
> > PhysicalAddressBits = 48;
> > }
> >
> >
> > > -----Original Message-----
> > > From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Gerd
> > > Hoffmann
> > > Sent: Wednesday, May 10, 2023 3:48 PM
> > > To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com>
> > > Cc: Wu, Jiaxin <jiaxin.wu@intel.com>; Dong, Eric <eric.dong@intel.com>;
> > Zeng,
> > > Star <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> > > Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate
> page
> > > table to permanent memory
> > >
> > > On Wed, May 10, 2023 at 02:48:52AM +0000, Ni, Ray wrote:
> > > > Gerd,
> > > > My understanding is that when code dereferences memory address, the
> code
> > > itself is responsible for
> > > > supplying the sign-extended linear address.
> > > > The page table creation logic still maps the entire linear memory space
> > > supported by the CPU.
> > > >
> > > > Why do you think covering the half of the space is better?
> > >
> > > edk2 boot services operate on the assumption that everything is identity
> > > mapped, only runtime services know the concept of virtual addresses.
> > >
> > > The lower half of the address space can be identity-mapped (virtual
> > > address == physical address). The upper half can not, so I think it's
> > > better for efi boot services to restrict themself to the lower half.
> > >
> > > take care,
> > > Gerd
* Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-10 7:48 ` Gerd Hoffmann
2023-05-11 5:08 ` Wu, Jiaxin
@ 2023-05-11 5:36 ` Ni, Ray
1 sibling, 0 replies; 23+ messages in thread
From: Ni, Ray @ 2023-05-11 5:36 UTC (permalink / raw)
To: kraxel@redhat.com, devel@edk2.groups.io, Kinney, Michael D
Cc: Wu, Jiaxin, Dong, Eric, Zeng, Star, Kumar, Rahul R
> -----Original Message-----
> From: kraxel@redhat.com <kraxel@redhat.com>
> Sent: Wednesday, May 10, 2023 3:48 PM
> To: devel@edk2.groups.io; Ni, Ray <ray.ni@intel.com>
> Cc: Wu, Jiaxin <jiaxin.wu@intel.com>; Dong, Eric <eric.dong@intel.com>; Zeng,
> Star <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page
> table to permanent memory
>
> On Wed, May 10, 2023 at 02:48:52AM +0000, Ni, Ray wrote:
> > Gerd,
> > My understanding is that when code dereferences memory address, the code
> itself is responsible for
> > supplying the sign-extended linear address.
> > The page table creation logic still maps the entire linear memory space
> supported by the CPU.
> >
> > Why do you think covering the half of the space is better?
>
> edk2 boot services operate on the assumption that everything is identity
> mapped, only runtime services know the concept of virtual addresses.
>
> The lower half of the address space can be identity-mapped (virtual
> address == physical address). The upper half can not, so I think it's
> better for efi boot services to restrict themself to the lower half.
Good point. I am convinced that 4-level paging should map addresses only up to 2^47 and
5-level paging only up to 2^56.
+@Kinney, Michael D, in case he has other thoughts.
>
> take care,
> Gerd
* Re: [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory
2023-05-09 10:22 ` [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory Wu, Jiaxin
2023-05-09 14:39 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 7:50 ` Ni, Ray
1 sibling, 0 replies; 23+ messages in thread
From: Ni, Ray @ 2023-05-10 7:50 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Dong, Eric, Zeng, Star, Gerd Hoffmann, Kumar, Rahul R
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Tuesday, May 9, 2023 6:23 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Gerd Hoffmann <kraxel@redhat.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>
> Subject: [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent
> memory
>
> Background:
> For arch X64, system will enable the page table in SPI to cover 0-512G range
> via CR4.PAE & MSR.LME & CR0.PG & CR3 setting (see ResetVector code).
> Existing
> code doesn't cover the higher address access above 512G before memory-
> discovered
> callback. That will be potential problem if system access the higher address
> after the transition from temporary RAM to permanent MEM RAM.
>
> Solution:
> This patch is to migrate page table to permanent memory to map entire physical
> address space if CR0.PG is set during temporary RAM Done.
>
> Change-Id: I29bdb078ef567ed9455d1328cb007f4f60a617a2
1. please remove Change-Id.
> +
> + //
> + // Check CPU runs in 64bit mode or 32bit mode
> + //
> + if (sizeof (UINTN) == sizeof (UINT32)) {
> + DEBUG ((DEBUG_WARN, "MigratePageTable: CPU runs in 32bit mode,
> unsupport to migrate page table to permanent memory.\n"));
2. I am OK to assume that this function should not be called when the CPU runs in 32-bit mode, because we
assume a 32-bit CPU doesn't enable paging in PEI at this moment.
Can you just add an assert such as ASSERT (sizeof (UINTN) == sizeof (UINT64))?
The if check and debug messages can be removed.
> + //
> + // Check Page1G Support or not.
> + //
> + Page1GSupport = FALSE;
> + AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
> + if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
> + AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
> + if ((RegEdx & BIT26) != 0) {
3. Can you use the bitfield in the above if-check instead of checking BIT26?
> +
> + //
> + // Decide Paging Mode according Page5LevelSupport & Page1GSupport.
> + //
> + PagingMode = Paging5Level1GB;
> + if (Page5LevelSupport && !Page1GSupport) {
> + PagingMode = Paging5Level;
> + } else if (!Page5LevelSupport && Page1GSupport) {
> + PagingMode = Paging4Level1GB;
> + } else if (!Page5LevelSupport && !Page1GSupport) {
> + PagingMode = Paging4Level;
> + }
4. How about:
if (Page5LevelSupport) {
PagingMode = Page1GSupport? Paging5Level1GB: Paging5Level;
} else {
PagingMode = Page1GSupport? Paging4Level1GB: Paging4Level;
}
> +
> + DEBUG ((DEBUG_INFO, "MigratePageTable: PagingMode = 0x%lx\n", (UINTN)
> PagingMode));
5. There are lots of DEBUG() macro calls in this patch. Can you think about how to merge them?
Basically, if the code between two DEBUG() calls cannot cause a hang, you can combine the 1st
DEBUG() call into the 2nd.
> +
> + //
> + // Get Maximum Physical Address Bits
> + // Get the number of address lines; Maximum Physical Address is
> 2^PhysicalAddressBits - 1.
> + // If CPUID does not supported, then use a max value of 36 as per SDM 3A,
> 4.1.4.
> + //
> + AsmCpuid (CPUID_EXTENDED_FUNCTION, &MaxExtendedFunctionId, NULL,
> NULL, NULL);
> + if (MaxExtendedFunctionId >= CPUID_VIR_PHY_ADDRESS_SIZE) {
> + AsmCpuid (CPUID_VIR_PHY_ADDRESS_SIZE, &VirPhyAddressSize.Uint32,
> NULL, NULL, NULL);
> + } else {
> + VirPhyAddressSize.Bits.PhysicalAddressBits = 36;
> + }
> +
> + if (PagingMode == Paging4Level1GB || PagingMode == Paging4Level) {
> + //
> + // The max lineaddress bits is 48 for 4 level page table.
> + //
> + VirPhyAddressSize.Bits.PhysicalAddressBits = MIN
> (VirPhyAddressSize.Bits.PhysicalAddressBits, 48);
> + }
> +
> + DEBUG ((DEBUG_INFO, "MigratePageTable: Maximum Physical Address Bits
> = %d\n", VirPhyAddressSize.Bits.PhysicalAddressBits));
> +
> + //
> + // Get required buffer size for the pagetable that will be created.
> + //
> + Status = PageTableMap (&PageTable, PagingMode, 0, &BufferSize, 0,
> LShiftU64 (1, VirPhyAddressSize.Bits.PhysicalAddressBits), &MapAttribute,
> &MapMask, NULL);
> + ASSERT (Status == EFI_BUFFER_TOO_SMALL);
> + if (Status != EFI_BUFFER_TOO_SMALL) {
> + DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to get PageTable
> required BufferSize, Status = %r\n", Status));
6. I think ASSERT() is enough. No need for a debug message.
> + return Status;
> + }
> +
> + DEBUG ((DEBUG_INFO, "MigratePageTable: Get PageTable required
> BufferSize = %x\n", BufferSize));
> +
> + //
> + // Allocate required Buffer.
> + //
> + Status = PeiServicesAllocatePages (
> + EfiBootServicesData,
> + EFI_SIZE_TO_PAGES (BufferSize),
> + &((EFI_PHYSICAL_ADDRESS) Buffer)
> + );
> + ASSERT (Buffer != NULL);
> + if (EFI_ERROR (Status)) {
> + DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to allocate PageTable
> required BufferSize, Status = %r\n", Status));
> + return EFI_OUT_OF_RESOURCES;
> + }
> +
> + //
> + // Create PageTable in permanent memory.
> + //
> + Status = PageTableMap (&PageTable, PagingMode, Buffer, &BufferSize, 0,
> LShiftU64 (1, VirPhyAddressSize.Bits.PhysicalAddressBits), &MapAttribute,
> &MapMask, NULL);
> + ASSERT (!EFI_ERROR (Status) && PageTable != 0);
> + if (EFI_ERROR (Status) || PageTable == 0) {
> + DEBUG ((DEBUG_ERROR, "MigratePageTable: Failed to create PageTable,
> Status = %r, PageTable = 0x%lx\n", Status, PageTable));
> + return EFI_OUT_OF_RESOURCES;
> + }
> +
> + DEBUG ((DEBUG_INFO, "MigratePageTable: Create PageTable = 0x%lx\n",
> PageTable));
> +
> + //
> + // Write the Pagetable to CR3.
> + //
> + AsmWriteCr3 (PageTable);
> +
> + DEBUG ((DEBUG_INFO, "MigratePageTable: Write the Pagetable to CR3
> Sucessfully.\n"));
> +
> + return Status;
> +}
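MigratePageTable() above uses a common two-pass idiom: call PageTableMap() with a zero-sized buffer to learn the required size (expecting a buffer-too-small status), allocate exactly that much, then call again to do the real work. A minimal host-side sketch of that contract (FakeMap and its required size are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define STATUS_SUCCESS           0
#define STATUS_BUFFER_TOO_SMALL  1

/* Invented stand-in for PageTableMap's sizing contract: if the caller's
   buffer is too small, report the needed size and fail; otherwise work. */
static int
FakeMap (void *Buffer, size_t *BufferSize)
{
  const size_t  Required = 64;

  if ((Buffer == NULL) || (*BufferSize < Required)) {
    *BufferSize = Required;
    return STATUS_BUFFER_TOO_SMALL;
  }

  memset (Buffer, 0, Required);
  return STATUS_SUCCESS;
}

/* The two-pass pattern used by MigratePageTable(). */
static int
TwoPassCreate (void **OutBuffer, size_t *OutSize)
{
  size_t  BufferSize = 0;
  void    *Buffer;
  int     Status;

  Status = FakeMap (NULL, &BufferSize);     // pass 1: query required size
  if (Status != STATUS_BUFFER_TOO_SMALL) {
    return Status;
  }

  Buffer = malloc (BufferSize);             // allocate what was asked for
  if (Buffer == NULL) {
    return -1;
  }

  Status = FakeMap (Buffer, &BufferSize);   // pass 2: do the real work
  if (Status != STATUS_SUCCESS) {
    free (Buffer);
    return Status;
  }

  *OutBuffer = Buffer;
  *OutSize   = BufferSize;
  return STATUS_SUCCESS;
}
```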
> +
> //
> // These are IDT entries pointing to 10:FFFFFFE4h.
> //
> UINT64 mIdtEntryTemplate = 0xffff8e000010ffe4ULL;
>
> @@ -492,10 +641,25 @@ SecTemporaryRamDone (
> if (PcdGetBool (PcdMigrateTemporaryRamFirmwareVolumes)) {
> Status = MigrateGdt ();
> ASSERT_EFI_ERROR (Status);
> }
>
> + //
> + // Migrate page table to permanent memory mapping entire physical address
> space if CR0.PG is set.
> + //
> + if ((AsmReadCr0 () & BIT31) != 0) {
7. Can you use the IA32_CR0 structure in the if-check?
8. Can you please add ASSERT (sizeof (UINTN) == sizeof (UINT32)) before calling MigratePageTable()?
> + //
> + // CR4.PAE must be enabled.
> + //
> + ASSERT ((AsmReadCr4 () & BIT5) != 0);
> + Status = MigratePageTable ();
> + if (EFI_ERROR (Status)) {
> + DEBUG ((DEBUG_WARN, "SecTemporaryRamDone: Failed to migrate page
> table to permanent memory: %r.\n", Status));
> + ASSERT_EFI_ERROR (Status);
> + }
> + }
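The IA32_CR0-style bitfield check suggested in comment 7 replaces the raw BIT31 mask with a named field. A simplified stand-in (only PE and PG modeled, reserved bits collapsed; the real structure in BaseLib names every defined bit) behaves like this on common little-endian compilers:

```c
#include <stdint.h>

/* Simplified stand-in for the IA32_CR0 union: only PE (bit 0) and PG
   (bit 31) are modeled here. */
typedef union {
  struct {
    uint32_t PE       : 1;   // bit 0: Protection Enable
    uint32_t Reserved : 30;  // bits 1-30 (collapsed for this sketch)
    uint32_t PG       : 1;   // bit 31: Paging
  } Bits;
  uint32_t Uint32;
} SKETCH_CR0;
```

With this, `if ((AsmReadCr0 () & BIT31) != 0)` becomes `Cr0.Uint32 = AsmReadCr0 (); if (Cr0.Bits.PG != 0)`, which reads closer to the SDM's field names.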
> +
> //
> // Disable Temporary RAM after Stack and Heap have been migrated at this
> point.
> //
> SecPlatformDisableTemporaryMemory ();
>
> diff --git a/UefiCpuPkg/SecCore/SecMain.h b/UefiCpuPkg/SecCore/SecMain.h
> index 880e6cd1b8..b50d96e45b 100644
> --- a/UefiCpuPkg/SecCore/SecMain.h
> +++ b/UefiCpuPkg/SecCore/SecMain.h
> @@ -17,10 +17,11 @@
> #include <Ppi/PeiCoreFvLocation.h>
> #include <Ppi/RepublishSecPpi.h>
>
> #include <Guid/FirmwarePerformance.h>
>
> +#include <Library/BaseLib.h>
> #include <Library/DebugLib.h>
> #include <Library/PcdLib.h>
> #include <Library/BaseMemoryLib.h>
> #include <Library/PlatformSecLib.h>
> #include <Library/CpuLib.h>
> @@ -30,10 +31,13 @@
> #include <Library/CpuExceptionHandlerLib.h>
> #include <Library/ReportStatusCodeLib.h>
> #include <Library/PeiServicesTablePointerLib.h>
> #include <Library/HobLib.h>
> #include <Library/PeiServicesLib.h>
> +#include <Library/CpuPageTableLib.h>
> +#include <Register/Intel/Cpuid.h>
> +#include <Register/Intel/Msr.h>
>
> #define SEC_IDT_ENTRY_COUNT 34
>
> typedef struct _SEC_IDT_TABLE {
> //
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
2023-05-09 10:22 [PATCH v1 0/3] Target to enable paging from temporary RAM Done Wu, Jiaxin
2023-05-09 10:22 ` [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory Wu, Jiaxin
@ 2023-05-09 10:22 ` Wu, Jiaxin
2023-05-09 14:41 ` [edk2-devel] " Gerd Hoffmann
2023-05-10 7:59 ` Ni, Ray
2023-05-09 10:22 ` [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level Wu, Jiaxin
2023-05-09 14:46 ` [edk2-devel] [PATCH v1 0/3] Target to enable paging from temporary RAM Done Gerd Hoffmann
3 siblings, 2 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-09 10:22 UTC (permalink / raw)
To: devel; +Cc: Eric Dong, Ray Ni, Zeng Star, Gerd Hoffmann, Rahul Kumar
Some security features depend on page table enabling. So, this patch
enables the page table if it has not been enabled during the
transition from Temporary RAM to Permanent RAM.
Note: If the page table is not enabled before this point, the system
IA-32e Mode is not activated, because on Intel 64 processors IA-32e Mode
operation requires physical address extensions with 4 or 5 levels of enhanced
paging structures (see Section 4.5, "4-Level Paging and 5-Level Paging"
and Section 9.8, "Initializing IA-32e Mode"). So, just enable the PAE page
table if CR0.PG is not set.
Change-Id: Ibfbfdace1fe7e29ab94463629d8d2f539a43f1b9
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
UefiCpuPkg/CpuMpPei/CpuMpPei.h | 1 +
UefiCpuPkg/CpuMpPei/CpuMpPei.inf | 1 +
UefiCpuPkg/CpuMpPei/CpuPaging.c | 228 +++++++++++++++++----------------------
3 files changed, 102 insertions(+), 128 deletions(-)
diff --git a/UefiCpuPkg/CpuMpPei/CpuMpPei.h b/UefiCpuPkg/CpuMpPei/CpuMpPei.h
index 0649c48d14..1b9a94e18f 100644
--- a/UefiCpuPkg/CpuMpPei/CpuMpPei.h
+++ b/UefiCpuPkg/CpuMpPei/CpuMpPei.h
@@ -26,10 +26,11 @@
#include <Library/ReportStatusCodeLib.h>
#include <Library/CpuExceptionHandlerLib.h>
#include <Library/MpInitLib.h>
#include <Library/BaseMemoryLib.h>
#include <Library/MemoryAllocationLib.h>
+#include <Library/CpuPageTableLib.h>
extern EFI_PEI_PPI_DESCRIPTOR mPeiCpuMpPpiDesc;
/**
This service retrieves the number of logical processor in the platform
diff --git a/UefiCpuPkg/CpuMpPei/CpuMpPei.inf b/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
index 7444bdb968..865be5627e 100644
--- a/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
+++ b/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
@@ -44,10 +44,11 @@
CpuExceptionHandlerLib
MpInitLib
BaseMemoryLib
CpuLib
MemoryAllocationLib
+ CpuPageTableLib
[Guids]
gEdkiiMigratedFvInfoGuid ## SOMETIMES_CONSUMES ## HOB
[Ppis]
diff --git a/UefiCpuPkg/CpuMpPei/CpuPaging.c b/UefiCpuPkg/CpuMpPei/CpuPaging.c
index a471f089c8..6c113051fe 100644
--- a/UefiCpuPkg/CpuMpPei/CpuPaging.c
+++ b/UefiCpuPkg/CpuMpPei/CpuPaging.c
@@ -115,42 +115,10 @@ AllocatePageTableMemory (
}
return Address;
}
-/**
- Get the address width supported by current processor.
-
- @retval 32 If processor is in 32-bit mode.
- @retval 36-48 If processor is in 64-bit mode.
-
-**/
-UINTN
-GetPhysicalAddressWidth (
- VOID
- )
-{
- UINT32 RegEax;
-
- if (sizeof (UINTN) == 4) {
- return 32;
- }
-
- AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
- if (RegEax >= CPUID_VIR_PHY_ADDRESS_SIZE) {
- AsmCpuid (CPUID_VIR_PHY_ADDRESS_SIZE, &RegEax, NULL, NULL, NULL);
- RegEax &= 0xFF;
- if (RegEax > 48) {
- return 48;
- }
-
- return (UINTN)RegEax;
- }
-
- return 36;
-}
-
/**
Get the type of top level page table.
@retval Page512G PML4 paging.
@retval Page1G PAE paging.
@@ -381,120 +349,93 @@ ConvertMemoryPageAttributes (
return RETURN_SUCCESS;
}
/**
- Get maximum size of page memory supported by current processor.
-
- @param[in] TopLevelType The type of top level page entry.
+ Enable PAE Page Table.
- @retval Page1G If processor supports 1G page and PML4.
- @retval Page2M For all other situations.
+ @retval EFI_SUCCESS The PAE Page Table was enabled successfully.
+ @retval EFI_OUT_OF_RESOURCES The PAE Page Table could not be enabled due to lack of available memory.
**/
-PAGE_ATTRIBUTE
-GetMaxMemoryPage (
- IN PAGE_ATTRIBUTE TopLevelType
- )
-{
- UINT32 RegEax;
- UINT32 RegEdx;
-
- if (TopLevelType == Page512G) {
- AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
- if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
- AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
- if ((RegEdx & BIT26) != 0) {
- return Page1G;
- }
- }
- }
-
- return Page2M;
-}
-
-/**
- Create PML4 or PAE page table.
-
- @return The address of page table.
-
-**/
-UINTN
-CreatePageTable (
+EFI_STATUS
+EnablePaePageTable (
VOID
)
{
- RETURN_STATUS Status;
- UINTN PhysicalAddressBits;
- UINTN NumberOfEntries;
- PAGE_ATTRIBUTE TopLevelPageAttr;
- UINTN PageTable;
- PAGE_ATTRIBUTE MaxMemoryPage;
- UINTN Index;
- UINT64 AddressEncMask;
- UINT64 *PageEntry;
- EFI_PHYSICAL_ADDRESS PhysicalAddress;
-
- TopLevelPageAttr = (PAGE_ATTRIBUTE)GetPageTableTopLevelType ();
- PhysicalAddressBits = GetPhysicalAddressWidth ();
- NumberOfEntries = (UINTN)1 << (PhysicalAddressBits -
- mPageAttributeTable[TopLevelPageAttr].AddressBitOffset);
+ EFI_STATUS Status;
+ PAGING_MODE PagingMode;
+
+ UINTN PageTable;
+ VOID *Buffer;
+ UINTN BufferSize;
+ IA32_MAP_ATTRIBUTE MapAttribute;
+ IA32_MAP_ATTRIBUTE MapMask;
+
+ PagingMode = PagingPae;
+ PageTable = 0;
+ Buffer = NULL;
+ BufferSize = 0;
+ MapAttribute.Uint64 = 0;
+ MapMask.Uint64 = MAX_UINT64;
+ MapAttribute.Bits.Present = 1;
+ MapAttribute.Bits.ReadWrite = 1;
- PageTable = (UINTN)AllocatePageTableMemory (1);
- if (PageTable == 0) {
- return 0;
+ //
+ // Get required buffer size for the pagetable that will be created.
+ // The Max size of LinearAddress for PAE is 2^32.
+ //
+ Status = PageTableMap (&PageTable, PagingMode, 0, &BufferSize, 0, LShiftU64 (1, 32), &MapAttribute, &MapMask, NULL);
+ ASSERT (Status == EFI_BUFFER_TOO_SMALL);
+ if (Status != EFI_BUFFER_TOO_SMALL) {
+ DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to get PageTable required BufferSize, Status = %r\n", Status));
+ return Status;
}
- AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask);
- AddressEncMask &= mPageAttributeTable[TopLevelPageAttr].AddressMask;
- MaxMemoryPage = GetMaxMemoryPage (TopLevelPageAttr);
- PageEntry = (UINT64 *)PageTable;
+ DEBUG ((DEBUG_INFO, "EnablePaePageTable: Get PageTable required BufferSize = %x\n", BufferSize));
+
+ //
+ // Allocate required Buffer.
+ //
+ Buffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (BufferSize));
+ ASSERT (Buffer != NULL);
+ if (Buffer == NULL) {
+ DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to allocate PageTable required BufferSize!\n"));
+ return EFI_OUT_OF_RESOURCES;
+ }
- PhysicalAddress = 0;
- for (Index = 0; Index < NumberOfEntries; ++Index) {
- *PageEntry = PhysicalAddress | AddressEncMask | PAGE_ATTRIBUTE_BITS;
+ //
+ // Create PageTable in permanent memory.
+ // The Max size of LinearAddress for PAE is 2^32.
+ //
+ Status = PageTableMap (&PageTable, PagingMode, Buffer, &BufferSize, 0, LShiftU64 (1, 32), &MapAttribute, &MapMask, NULL);
+ ASSERT (!EFI_ERROR (Status) && PageTable != 0);
+ if (EFI_ERROR (Status) || PageTable == 0) {
+ DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to create PageTable, Status = %r, PageTable = 0x%lx\n", Status, PageTable));
+ return EFI_OUT_OF_RESOURCES;
+ }
- //
- // Split the top page table down to the maximum page size supported
- //
- if (MaxMemoryPage < TopLevelPageAttr) {
- Status = SplitPage (PageEntry, TopLevelPageAttr, MaxMemoryPage, TRUE);
- ASSERT_EFI_ERROR (Status);
- }
+ DEBUG ((DEBUG_INFO, "EnablePaePageTable: Create PageTable = 0x%x\n", PageTable));
- if (TopLevelPageAttr == Page1G) {
- //
- // PDPTE[2:1] (PAE Paging) must be 0. SplitPage() might change them to 1.
- //
- *PageEntry &= ~(UINT64)(IA32_PG_RW | IA32_PG_U);
- }
+ //
+ // Write the Pagetable to CR3.
+ //
+ AsmWriteCr3 (PageTable);
- PageEntry += 1;
- PhysicalAddress += mPageAttributeTable[TopLevelPageAttr].Length;
- }
+ //
+ // Enable CR4.PAE
+ //
+ AsmWriteCr4 (AsmReadCr4 () | BIT5);
- return PageTable;
-}
+ //
+ // Enable CR0.PG
+ //
+ AsmWriteCr0 (AsmReadCr0 () | BIT31);
-/**
- Setup page tables and make them work.
+ DEBUG ((DEBUG_INFO, "EnablePaePageTable: Enabled PAE PageTable Sucessfully.\n"));
-**/
-VOID
-EnablePaging (
- VOID
- )
-{
- UINTN PageTable;
-
- PageTable = CreatePageTable ();
- ASSERT (PageTable != 0);
- if (PageTable != 0) {
- AsmWriteCr3 (PageTable);
- AsmWriteCr4 (AsmReadCr4 () | BIT5); // CR4.PAE
- AsmWriteCr0 (AsmReadCr0 () | BIT31); // CR0.PG
- }
+ return Status;
}
/**
Get the base address of current AP's stack.
@@ -622,10 +563,11 @@ MemoryDiscoveredPpiNotifyCallback (
{
EFI_STATUS Status;
BOOLEAN InitStackGuard;
EDKII_MIGRATED_FV_INFO *MigratedFvInfo;
EFI_PEI_HOB_POINTERS Hob;
+ MSR_IA32_EFER_REGISTER MsrEfer;
//
// Paging must be setup first. Otherwise the exception TSS setup during MP
// initialization later will not contain paging information and then fail
// the task switch (for the sake of stack switch).
@@ -635,12 +577,42 @@ MemoryDiscoveredPpiNotifyCallback (
if (IsIa32PaeSupported ()) {
Hob.Raw = GetFirstGuidHob (&gEdkiiMigratedFvInfoGuid);
InitStackGuard = PcdGetBool (PcdCpuStackGuard);
}
- if (InitStackGuard || (Hob.Raw != NULL)) {
- EnablePaging ();
+ //
+ // Some security features depend on page table enabling. So, here we
+ // enable the page table if it has not been enabled yet.
+ // If the page table is not enabled before this point, the system
+ // IA-32e Mode is not activated, because on Intel 64 processors IA-32e Mode
+ // operation requires physical address extensions with 4 or 5 levels of
+ // enhanced paging structures (see Section 4.5, "4-Level Paging and
+ // 5-Level Paging" and Section 9.8, "Initializing IA-32e Mode"). So, just
+ // enable the PAE page table if CR0.PG is not set.
+ //
+ if (((AsmReadCr0 () & BIT31) == 0) && (InitStackGuard || (Hob.Raw != NULL))) {
+ //
+ // Check CPU runs in 32bit mode.
+ //
+ MsrEfer.Uint64 = AsmReadMsr64 (MSR_CORE_IA32_EFER);
+ if (MsrEfer.Bits.LMA == 1) {
+ //
+ // On Intel 64 processors, IA-32e Mode operation requires physical address extensions with
+ // 4 or 5 levels of enhanced paging structures (see Section 4.5, "4-Level Paging and
+ // 5-Level Paging" and Section 9.8, "Initializing IA-32e Mode"). So, something must be wrong
+ // if MsrEfer.Bits.LMA == 1 with no page table enabled before.
+ //
+ DEBUG ((DEBUG_ERROR, "MemoryDiscoveredPpiNotifyCallback: No page table with IA-32e Mode activated!\n"));
+ ASSERT (MsrEfer.Bits.LMA == 0);
+ return EFI_DEVICE_ERROR;
+ }
+
+ Status = EnablePaePageTable ();
+ if (EFI_ERROR (Status)) {
+ DEBUG ((DEBUG_WARN, "MemoryDiscoveredPpiNotifyCallback: Failed to enable PAE page table: %r.\n", Status));
+ ASSERT_EFI_ERROR (Status);
+ }
}
Status = InitializeCpuMpWorker ((CONST EFI_PEI_SERVICES **)PeiServices);
ASSERT_EFI_ERROR (Status);
--
2.16.2.windows.1
* Re: [edk2-devel] [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
2023-05-09 10:22 ` [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set Wu, Jiaxin
@ 2023-05-09 14:41 ` Gerd Hoffmann
2023-05-10 1:56 ` Wu, Jiaxin
2023-05-10 7:59 ` Ni, Ray
1 sibling, 1 reply; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-09 14:41 UTC (permalink / raw)
To: devel, jiaxin.wu; +Cc: Eric Dong, Ray Ni, Zeng Star, Rahul Kumar
Hi,
> + //
> + // Check CPU runs in 32bit mode.
> + //
> + MsrEfer.Uint64 = AsmReadMsr64 (MSR_CORE_IA32_EFER);
> + if (MsrEfer.Bits.LMA == 1) {
Checking this at runtime is pointless. 32-bit code would simply
not work in long mode and vice versa.
take care,
Gerd
* Re: [edk2-devel] [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
2023-05-09 14:41 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 1:56 ` Wu, Jiaxin
0 siblings, 0 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-10 1:56 UTC (permalink / raw)
To: Gerd Hoffmann, devel@edk2.groups.io
Cc: Dong, Eric, Ni, Ray, Zeng, Star, Kumar, Rahul R
Agreed, I will replace it with a simple assert check:
ASSERT (sizeof (UINTN) == sizeof (UINT32));
Thanks,
Jiaxin
> -----Original Message-----
> From: Gerd Hoffmann <kraxel@redhat.com>
> Sent: Tuesday, May 9, 2023 10:42 PM
> To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE
> page table if CR0.PG is not set
>
> Hi,
>
> > + //
> > + // Check CPU runs in 32bit mode.
> > + //
> > + MsrEfer.Uint64 = AsmReadMsr64 (MSR_CORE_IA32_EFER);
> > + if (MsrEfer.Bits.LMA == 1) {
>
> Checking this at runtime is pointless. 32-bit code would simply
> not work in long mode and vice versa.
>
> take care,
> Gerd
* Re: [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
2023-05-09 10:22 ` [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set Wu, Jiaxin
2023-05-09 14:41 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 7:59 ` Ni, Ray
1 sibling, 0 replies; 23+ messages in thread
From: Ni, Ray @ 2023-05-10 7:59 UTC (permalink / raw)
To: Wu, Jiaxin, devel@edk2.groups.io
Cc: Dong, Eric, Zeng, Star, Gerd Hoffmann, Kumar, Rahul R
> -----Original Message-----
> From: Wu, Jiaxin <jiaxin.wu@intel.com>
> Sent: Tuesday, May 9, 2023 6:23 PM
> To: devel@edk2.groups.io
> Cc: Dong, Eric <eric.dong@intel.com>; Ni, Ray <ray.ni@intel.com>; Zeng, Star
> <star.zeng@intel.com>; Gerd Hoffmann <kraxel@redhat.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>
> Subject: [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG
> is not set
>
> Some security features depend on page table enabling. So, this patch
> enables the page table if it has not been enabled during the
> transition from Temporary RAM to Permanent RAM.
>
> Note: If the page table is not enabled before this point, the system
> IA-32e Mode is not activated, because on Intel 64 processors IA-32e Mode
> operation requires physical address extensions with 4 or 5 levels of enhanced
> paging structures (see Section 4.5, "4-Level Paging and 5-Level Paging"
> and Section 9.8, "Initializing IA-32e Mode"). So, just enable the PAE page
> table if CR0.PG is not set.
>
> Change-Id: Ibfbfdace1fe7e29ab94463629d8d2f539a43f1b9
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> UefiCpuPkg/CpuMpPei/CpuMpPei.h | 1 +
> UefiCpuPkg/CpuMpPei/CpuMpPei.inf | 1 +
> UefiCpuPkg/CpuMpPei/CpuPaging.c | 228 +++++++++++++++++-------------------
> ---
> 3 files changed, 102 insertions(+), 128 deletions(-)
>
> diff --git a/UefiCpuPkg/CpuMpPei/CpuMpPei.h
> b/UefiCpuPkg/CpuMpPei/CpuMpPei.h
> index 0649c48d14..1b9a94e18f 100644
> --- a/UefiCpuPkg/CpuMpPei/CpuMpPei.h
> +++ b/UefiCpuPkg/CpuMpPei/CpuMpPei.h
> @@ -26,10 +26,11 @@
> #include <Library/ReportStatusCodeLib.h>
> #include <Library/CpuExceptionHandlerLib.h>
> #include <Library/MpInitLib.h>
> #include <Library/BaseMemoryLib.h>
> #include <Library/MemoryAllocationLib.h>
> +#include <Library/CpuPageTableLib.h>
>
> extern EFI_PEI_PPI_DESCRIPTOR mPeiCpuMpPpiDesc;
>
> /**
> This service retrieves the number of logical processor in the platform
> diff --git a/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
> b/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
> index 7444bdb968..865be5627e 100644
> --- a/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
> +++ b/UefiCpuPkg/CpuMpPei/CpuMpPei.inf
> @@ -44,10 +44,11 @@
> CpuExceptionHandlerLib
> MpInitLib
> BaseMemoryLib
> CpuLib
> MemoryAllocationLib
> + CpuPageTableLib
>
> [Guids]
> gEdkiiMigratedFvInfoGuid ## SOMETIMES_CONSUMES
> ## HOB
>
> [Ppis]
> diff --git a/UefiCpuPkg/CpuMpPei/CpuPaging.c
> b/UefiCpuPkg/CpuMpPei/CpuPaging.c
> index a471f089c8..6c113051fe 100644
> --- a/UefiCpuPkg/CpuMpPei/CpuPaging.c
> +++ b/UefiCpuPkg/CpuMpPei/CpuPaging.c
> @@ -115,42 +115,10 @@ AllocatePageTableMemory (
> }
>
> return Address;
> }
>
> -/**
> - Get the address width supported by current processor.
> -
> - @retval 32 If processor is in 32-bit mode.
> - @retval 36-48 If processor is in 64-bit mode.
> -
> -**/
> -UINTN
> -GetPhysicalAddressWidth (
> - VOID
> - )
> -{
> - UINT32 RegEax;
> -
> - if (sizeof (UINTN) == 4) {
> - return 32;
> - }
> -
> - AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
> - if (RegEax >= CPUID_VIR_PHY_ADDRESS_SIZE) {
> - AsmCpuid (CPUID_VIR_PHY_ADDRESS_SIZE, &RegEax, NULL, NULL, NULL);
> - RegEax &= 0xFF;
> - if (RegEax > 48) {
> - return 48;
> - }
> -
> - return (UINTN)RegEax;
> - }
> -
> - return 36;
> -}
> -
> /**
> Get the type of top level page table.
>
> @retval Page512G PML4 paging.
> @retval Page1G PAE paging.
> @@ -381,120 +349,93 @@ ConvertMemoryPageAttributes (
>
> return RETURN_SUCCESS;
> }
>
> /**
> - Get maximum size of page memory supported by current processor.
> -
> - @param[in] TopLevelType The type of top level page entry.
> + Enable PAE Page Table.
>
> - @retval Page1G If processor supports 1G page and PML4.
> - @retval Page2M For all other situations.
> + @retval EFI_SUCCESS The PAE Page Table was enabled successfully.
> + @retval EFI_OUT_OF_RESOURCES The PAE Page Table could not be enabled
> due to lack of available memory.
>
> **/
> -PAGE_ATTRIBUTE
> -GetMaxMemoryPage (
> - IN PAGE_ATTRIBUTE TopLevelType
> - )
> -{
> - UINT32 RegEax;
> - UINT32 RegEdx;
> -
> - if (TopLevelType == Page512G) {
> - AsmCpuid (CPUID_EXTENDED_FUNCTION, &RegEax, NULL, NULL, NULL);
> - if (RegEax >= CPUID_EXTENDED_CPU_SIG) {
> - AsmCpuid (CPUID_EXTENDED_CPU_SIG, NULL, NULL, NULL, &RegEdx);
> - if ((RegEdx & BIT26) != 0) {
> - return Page1G;
> - }
> - }
> - }
> -
> - return Page2M;
> -}
> -
> -/**
> - Create PML4 or PAE page table.
> -
> - @return The address of page table.
> -
> -**/
> -UINTN
> -CreatePageTable (
> +EFI_STATUS
> +EnablePaePageTable (
> VOID
> )
> {
> - RETURN_STATUS Status;
> - UINTN PhysicalAddressBits;
> - UINTN NumberOfEntries;
> - PAGE_ATTRIBUTE TopLevelPageAttr;
> - UINTN PageTable;
> - PAGE_ATTRIBUTE MaxMemoryPage;
> - UINTN Index;
> - UINT64 AddressEncMask;
> - UINT64 *PageEntry;
> - EFI_PHYSICAL_ADDRESS PhysicalAddress;
> -
> - TopLevelPageAttr = (PAGE_ATTRIBUTE)GetPageTableTopLevelType ();
> - PhysicalAddressBits = GetPhysicalAddressWidth ();
> - NumberOfEntries = (UINTN)1 << (PhysicalAddressBits -
> - mPageAttributeTable[TopLevelPageAttr].AddressBitOffset);
> + EFI_STATUS Status;
> + PAGING_MODE PagingMode;
> +
> + UINTN PageTable;
> + VOID *Buffer;
> + UINTN BufferSize;
> + IA32_MAP_ATTRIBUTE MapAttribute;
> + IA32_MAP_ATTRIBUTE MapMask;
> +
> + PagingMode = PagingPae;
> + PageTable = 0;
> + Buffer = NULL;
> + BufferSize = 0;
> + MapAttribute.Uint64 = 0;
> + MapMask.Uint64 = MAX_UINT64;
> + MapAttribute.Bits.Present = 1;
> + MapAttribute.Bits.ReadWrite = 1;
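The MapAttribute/MapMask initialization above relies on the union overlay: setting the Present and ReadWrite bitfields is visible through Uint64. A trimmed stand-in for IA32_MAP_ATTRIBUTE (only the two low bits the patch sets are modeled; the real CpuPageTableLib union names many more fields) shows the effect on common little-endian compilers:

```c
#include <stdint.h>

/* Trimmed stand-in for IA32_MAP_ATTRIBUTE: Present occupies bit 0 and
   ReadWrite bit 1, matching the low bits of a page table entry. */
typedef union {
  struct {
    uint64_t Present   : 1;  // bit 0
    uint64_t ReadWrite : 1;  // bit 1
    uint64_t Reserved  : 62;
  } Bits;
  uint64_t Uint64;
} SKETCH_MAP_ATTRIBUTE;
```

Setting MapMask.Uint64 to MAX_UINT64 tells PageTableMap that every attribute field in MapAttribute is significant, not just the bits set to 1.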
>
> - PageTable = (UINTN)AllocatePageTableMemory (1);
> - if (PageTable == 0) {
> - return 0;
> + //
> + // Get required buffer size for the pagetable that will be created.
> + // The Max size of LinearAddress for PAE is 2^32.
> + //
> + Status = PageTableMap (&PageTable, PagingMode, 0, &BufferSize, 0,
> LShiftU64 (1, 32), &MapAttribute, &MapMask, NULL);
1. How about directly using the SIZE_4GB macro above?
> + ASSERT (Status == EFI_BUFFER_TOO_SMALL);
> + if (Status != EFI_BUFFER_TOO_SMALL) {
> + DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to get PageTable
> required BufferSize, Status = %r\n", Status));
2. no need for debug message.
> + return Status;
> }
>
> - AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask);
> - AddressEncMask &= mPageAttributeTable[TopLevelPageAttr].AddressMask;
3. Good to remove the address encryption mask for 32bit env.
> - MaxMemoryPage = GetMaxMemoryPage (TopLevelPageAttr);
> - PageEntry = (UINT64 *)PageTable;
> + DEBUG ((DEBUG_INFO, "EnablePaePageTable: Get PageTable required
> BufferSize = %x\n", BufferSize));
4. Can you remove this debug log?
> +
> + //
> + // Allocate required Buffer.
> + //
> + Buffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (BufferSize));
> + ASSERT (Buffer != NULL);
> + if (Buffer == NULL) {
> + DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to allocate PageTable
> required BufferSize!\n"));
5. Can you please dump not only the failure message but also the BufferSize value?
> + return EFI_OUT_OF_RESOURCES;
> + }
>
> - PhysicalAddress = 0;
> - for (Index = 0; Index < NumberOfEntries; ++Index) {
> - *PageEntry = PhysicalAddress | AddressEncMask | PAGE_ATTRIBUTE_BITS;
> + //
> + // Create PageTable in permanent memory.
> + // The Max size of LinearAddress for PAE is 2^32.
> + //
> + Status = PageTableMap (&PageTable, PagingMode, Buffer, &BufferSize, 0,
> LShiftU64 (1, 32), &MapAttribute, &MapMask, NULL);
> + ASSERT (!EFI_ERROR (Status) && PageTable != 0);
6. ASSERT_EFI_ERROR (Status) is enough.
> + if (EFI_ERROR (Status) || PageTable == 0) {
> + DEBUG ((DEBUG_ERROR, "EnablePaePageTable: Failed to create PageTable,
> Status = %r, PageTable = 0x%lx\n", Status, PageTable));
7. If an error happens, there is no need to dump the PageTable value.
> + return EFI_OUT_OF_RESOURCES;
> + }
>
> - //
> - // Split the top page table down to the maximum page size supported
> - //
> - if (MaxMemoryPage < TopLevelPageAttr) {
> - Status = SplitPage (PageEntry, TopLevelPageAttr, MaxMemoryPage, TRUE);
> - ASSERT_EFI_ERROR (Status);
> - }
> + DEBUG ((DEBUG_INFO, "EnablePaePageTable: Create PageTable = 0x%x\n",
> PageTable));
>
> - if (TopLevelPageAttr == Page1G) {
> - //
> - // PDPTE[2:1] (PAE Paging) must be 0. SplitPage() might change them to 1.
> - //
> - *PageEntry &= ~(UINT64)(IA32_PG_RW | IA32_PG_U);
> - }
> + //
> + // Write the Pagetable to CR3.
> + //
> + AsmWriteCr3 (PageTable);
>
> - PageEntry += 1;
> - PhysicalAddress += mPageAttributeTable[TopLevelPageAttr].Length;
> - }
> + //
> + // Enable CR4.PAE
> + //
> + AsmWriteCr4 (AsmReadCr4 () | BIT5);
>
> - return PageTable;
> -}
> + //
> + // Enable CR0.PG
> + //
> + AsmWriteCr0 (AsmReadCr0 () | BIT31);
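The register sequence at the end of EnablePaePageTable() (CR3, then CR4.PAE, then CR0.PG) boils down to read-modify-write bit sets. Modeled on plain integers (the actual register reads and writes obviously cannot be reproduced host-side):

```c
#include <stdint.h>

#define CR4_PAE  (1u << 5)    // BIT5 in the patch: Physical Address Extension
#define CR0_PG   (1u << 31)   // BIT31 in the patch: Paging

/* Model of the enable sequence: set PAE in CR4, then PG in CR0.
   Order matters on real hardware: CR3 and CR4.PAE must hold valid
   values before CR0.PG is set, or the first fetch after enabling
   paging faults. */
static void
ModelEnablePae (uint32_t *Cr4, uint32_t *Cr0)
{
  *Cr4 |= CR4_PAE;
  *Cr0 |= CR0_PG;
}
```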
>
> -/**
> - Setup page tables and make them work.
> + DEBUG ((DEBUG_INFO, "EnablePaePageTable: Enabled PAE PageTable
> Sucessfully.\n"));
>
> -**/
> -VOID
> -EnablePaging (
> - VOID
> - )
> -{
> - UINTN PageTable;
> -
> - PageTable = CreatePageTable ();
> - ASSERT (PageTable != 0);
> - if (PageTable != 0) {
> - AsmWriteCr3 (PageTable);
> - AsmWriteCr4 (AsmReadCr4 () | BIT5); // CR4.PAE
> - AsmWriteCr0 (AsmReadCr0 () | BIT31); // CR0.PG
> - }
> + return Status;
> }
>
> /**
> Get the base address of current AP's stack.
>
> @@ -622,10 +563,11 @@ MemoryDiscoveredPpiNotifyCallback (
> {
> EFI_STATUS Status;
> BOOLEAN InitStackGuard;
> EDKII_MIGRATED_FV_INFO *MigratedFvInfo;
> EFI_PEI_HOB_POINTERS Hob;
> + MSR_IA32_EFER_REGISTER MsrEfer;
>
> //
> // Paging must be setup first. Otherwise the exception TSS setup during MP
> // initialization later will not contain paging information and then fail
> // the task switch (for the sake of stack switch).
> @@ -635,12 +577,42 @@ MemoryDiscoveredPpiNotifyCallback (
> if (IsIa32PaeSupported ()) {
> Hob.Raw = GetFirstGuidHob (&gEdkiiMigratedFvInfoGuid);
> InitStackGuard = PcdGetBool (PcdCpuStackGuard);
> }
>
> - if (InitStackGuard || (Hob.Raw != NULL)) {
> - EnablePaging ();
> + //
> + // Some security features depend on page table enabling. So, here we
> + // enable the page table if it has not been enabled yet.
> + // If the page table is not enabled before this point, the system
> + // IA-32e Mode is not activated, because on Intel 64 processors IA-32e Mode
> + // operation requires physical address extensions with 4 or 5 levels of
> + // enhanced paging structures (see Section 4.5, "4-Level Paging and
> + // 5-Level Paging" and Section 9.8, "Initializing IA-32e Mode"). So, just
> + // enable the PAE page table if CR0.PG is not set.
> + //
> + if (((AsmReadCr0 () & BIT31) == 0) && (InitStackGuard || (Hob.Raw != NULL))) {
8. Can you use IA32_CR0 structure in if-check?
> + //
> + // Check CPU runs in 32bit mode.
> + //
> + MsrEfer.Uint64 = AsmReadMsr64 (MSR_CORE_IA32_EFER);
> + if (MsrEfer.Bits.LMA == 1) {
> + //
> + // On Intel 64 processors, IA-32e Mode operation requires physical
> address extensions with
> + // 4 or 5 levels of enhanced paging structures (see Section 4.5, "4-Level
> Paging and
> + // 5-Level Paging" and Section 9.8, "Initializing IA-32e Mode"). So,
> something must be wrong
> + // if MsrEfer.Bits.LMA == 1 with no page table enabled before.
> + //
> + DEBUG ((DEBUG_ERROR, "MemoryDiscoveredPpiNotifyCallback: No page
> table with IA-32e Mode activated!\n"));
> + ASSERT (MsrEfer.Bits.LMA == 0);
> + return EFI_DEVICE_ERROR;
> + }
9. I don't think we need to check whether the CPU runs in 32-bit mode. We should have confidence that
when paging is disabled, the CPU runs in 32-bit mode.
I agree adding ASSERT (sizeof (UINTN) == sizeof (UINT32)) can improve the readability.
> +
> + Status = EnablePaePageTable ();
> + if (EFI_ERROR (Status)) {
> + DEBUG ((DEBUG_WARN, "MemoryDiscoveredPpiNotifyCallback: Failed to
> enable PAE page table: %r.\n", Status));
> + ASSERT_EFI_ERROR (Status);
> + }
> }
>
> Status = InitializeCpuMpWorker ((CONST EFI_PEI_SERVICES **)PeiServices);
> ASSERT_EFI_ERROR (Status);
>
> --
> 2.16.2.windows.1
* [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level.
2023-05-09 10:22 [PATCH v1 0/3] Target to enable paging from temporary RAM Done Wu, Jiaxin
2023-05-09 10:22 ` [PATCH v1 1/3] UefiCpuPkg/SecCore: Migrate page table to permanent memory Wu, Jiaxin
2023-05-09 10:22 ` [PATCH v1 2/3] UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set Wu, Jiaxin
@ 2023-05-09 10:22 ` Wu, Jiaxin
2023-05-09 14:44 ` [edk2-devel] " Gerd Hoffmann
2023-05-10 8:01 ` Ni, Ray
2023-05-09 14:46 ` [edk2-devel] [PATCH v1 0/3] Target to enable paging from temporary RAM Done Gerd Hoffmann
3 siblings, 2 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-09 10:22 UTC (permalink / raw)
To: devel
Cc: Dandan Bi, Liming Gao, Eric Dong, Ray Ni, Zeng Star,
Gerd Hoffmann, Rahul Kumar
Whether system 5-level paging is enabled can be checked via CR4.LA57; the
system's preferred page table level (PcdUse5LevelPageTable) must align
with the previously enabled level in X64 mode.
This patch adds the following check:
If X64, the page table level set by PcdUse5LevelPageTable must align with
the previous level.
If IA32, the page table level is decided by PcdUse5LevelPageTable and feature
capability.
Change-Id: Ia7f7e365c7354cc49f971209bfcbc5af5aded062
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Zeng Star <star.zeng@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Rahul Kumar <rahul1.kumar@intel.com>
Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
---
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 39 ++++++++++++++++--------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
index 18b121d768..301e200cd8 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
@@ -737,22 +737,37 @@ CreateIdentityMappingPageTables (
} else {
PhysicalAddressBits = 36;
}
}
- Page5LevelSupport = FALSE;
- if (PcdGetBool (PcdUse5LevelPageTable)) {
- AsmCpuidEx (
- CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
- CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
- NULL,
- NULL,
- &EcxFlags.Uint32,
- NULL
- );
- if (EcxFlags.Bits.FiveLevelPage != 0) {
- Page5LevelSupport = TRUE;
+ //
+ // Check whether we run in X64 or IA32.
+ //
+ if (sizeof (UINTN) == sizeof (UINT64)) {
+ //
+ // If X64, Page table Level must align with previous level.
+ //
+ Cr4.UintN = AsmReadCr4 ();
+ Page5LevelSupport = Cr4.Bits.LA57 ? TRUE : FALSE;
+ ASSERT (PcdGetBool (PcdUse5LevelPageTable) == Page5LevelSupport);
+ } else {
+ //
+ // If IA32, Page table Level is decided by PCD and feature capability.
+ //
+ Page5LevelSupport = FALSE;
+ if (PcdGetBool (PcdUse5LevelPageTable)) {
+ AsmCpuidEx (
+ CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
+ CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
+ NULL,
+ NULL,
+ &EcxFlags.Uint32,
+ NULL
+ );
+ if (EcxFlags.Bits.FiveLevelPage != 0) {
+ Page5LevelSupport = TRUE;
+ }
}
}
DEBUG ((DEBUG_INFO, "AddressBits=%u 5LevelPaging=%u 1GPage=%u\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport));
--
2.16.2.windows.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level.
2023-05-09 10:22 ` [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level Wu, Jiaxin
@ 2023-05-09 14:44 ` Gerd Hoffmann
2023-05-10 1:56 ` Wu, Jiaxin
2023-05-10 8:01 ` Ni, Ray
1 sibling, 1 reply; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-09 14:44 UTC (permalink / raw)
To: devel, jiaxin.wu
Cc: Dandan Bi, Liming Gao, Eric Dong, Ray Ni, Zeng Star, Rahul Kumar
> + //
> + // If IA32, Page table Level is decided by PCD and feature capability.
> + //
There is neither 5-level nor 4-level paging support in ia32 mode.
take care,
Gerd
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level.
2023-05-09 14:44 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 1:56 ` Wu, Jiaxin
2023-05-10 7:51 ` Gerd Hoffmann
0 siblings, 1 reply; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-10 1:56 UTC (permalink / raw)
To: Gerd Hoffmann, devel@edk2.groups.io
Cc: Bi, Dandan, Gao, Liming, Dong, Eric, Ni, Ray, Zeng, Star,
Kumar, Rahul R
This happens at the hand-off of PEI control to DXE. The page table is created for the DXE phase, so here it means the CPU is running in IA32 PEI. I will refine the comments as below:
If the CPU already ran in X64 PEI, the page table level in DXE must align with the previous level.
If the CPU runs in IA32 PEI, the page table level in DXE is decided by the PCD and the feature capability.
Thanks,
Jiaxin
> -----Original Message-----
> From: Gerd Hoffmann <kraxel@redhat.com>
> Sent: Tuesday, May 9, 2023 10:44 PM
> To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> Cc: Bi, Dandan <dandan.bi@intel.com>; Gao, Liming
> <gaoliming@byosoft.com.cn>; Dong, Eric <eric.dong@intel.com>; Ni, Ray
> <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Kumar, Rahul R
> <rahul.r.kumar@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page
> table Level setting with previous level.
>
> > + //
> > + // If IA32, Page table Level is decided by PCD and feature capability.
> > + //
>
> There is neither 5-level nor 4-level paging support in ia32 mode.
>
> take care,
> Gerd
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level.
2023-05-10 1:56 ` Wu, Jiaxin
@ 2023-05-10 7:51 ` Gerd Hoffmann
0 siblings, 0 replies; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-10 7:51 UTC (permalink / raw)
To: devel, jiaxin.wu
Cc: Bi, Dandan, Gao, Liming, Dong, Eric, Ni, Ray, Zeng, Star,
Kumar, Rahul R
On Wed, May 10, 2023 at 01:56:59AM +0000, Wu, Jiaxin wrote:
> This happens at the hand-off of PEI control to DXE. The page table is created for the DXE phase,
Ah, ok. The code looks fine then.
> so here it means the CPU is running in IA32 PEI. I will refine the comments as below:
>
> If the CPU already ran in X64 PEI, the page table level in DXE must align with the previous level.
> If the CPU runs in IA32 PEI, the page table level in DXE is decided by the PCD and the feature capability.
Good idea.
thanks,
Gerd
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level.
2023-05-09 10:22 ` [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level Wu, Jiaxin
2023-05-09 14:44 ` [edk2-devel] " Gerd Hoffmann
@ 2023-05-10 8:01 ` Ni, Ray
1 sibling, 0 replies; 23+ messages in thread
From: Ni, Ray @ 2023-05-10 8:01 UTC (permalink / raw)
To: devel@edk2.groups.io, Wu, Jiaxin
Cc: Bi, Dandan, Gao, Liming, Dong, Eric, Zeng, Star, Gerd Hoffmann,
Kumar, Rahul R
Minor comments:
IA32 -> 32bit protected mode.
X64 -> 64bit long mode.
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Wu, Jiaxin
> Sent: Tuesday, May 9, 2023 6:23 PM
> To: devel@edk2.groups.io
> Cc: Bi, Dandan <dandan.bi@intel.com>; Gao, Liming
> <gaoliming@byosoft.com.cn>; Dong, Eric <eric.dong@intel.com>; Ni, Ray
> <ray.ni@intel.com>; Zeng, Star <star.zeng@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>; Kumar, Rahul R <rahul.r.kumar@intel.com>
> Subject: [edk2-devel] [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table
> Level setting with previous level.
>
> Whether 5-level paging is enabled can be checked via CR4.LA57; in X64 mode the
> preferred page table level (PcdUse5LevelPageTable) must align
> with the level already in use.
>
> This patch adds the following check:
> If X64, the page table level setting in PcdUse5LevelPageTable must align with
> the previous level.
> If IA32, the page table level is decided by PcdUse5LevelPageTable and the
> feature capability.
>
> Change-Id: Ia7f7e365c7354cc49f971209bfcbc5af5aded062
> Cc: Dandan Bi <dandan.bi@intel.com>
> Cc: Liming Gao <gaoliming@byosoft.com.cn>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Ray Ni <ray.ni@intel.com>
> Cc: Zeng Star <star.zeng@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Rahul Kumar <rahul1.kumar@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin.wu@intel.com>
> ---
> MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 39
> ++++++++++++++++--------
> 1 file changed, 27 insertions(+), 12 deletions(-)
>
> diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> index 18b121d768..301e200cd8 100644
> --- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> +++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
> @@ -737,22 +737,37 @@ CreateIdentityMappingPageTables (
> } else {
> PhysicalAddressBits = 36;
> }
> }
>
> - Page5LevelSupport = FALSE;
> - if (PcdGetBool (PcdUse5LevelPageTable)) {
> - AsmCpuidEx (
> - CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
> - CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
> - NULL,
> - NULL,
> - &EcxFlags.Uint32,
> - NULL
> - );
> - if (EcxFlags.Bits.FiveLevelPage != 0) {
> - Page5LevelSupport = TRUE;
> + //
> + // Check whether running in X64 or IA32
> + //
> + if (sizeof (UINTN) == sizeof (UINT64)) {
> + //
> + // If X64, Page table Level must align with previous level.
> + //
> + Cr4.UintN = AsmReadCr4 ();
> + Page5LevelSupport = Cr4.Bits.LA57 ? TRUE : FALSE;
> + ASSERT (PcdGetBool (PcdUse5LevelPageTable) == Page5LevelSupport);
> + } else {
> + //
> + // If IA32, Page table Level is decided by PCD and feature capability.
> + //
> + Page5LevelSupport = FALSE;
> + if (PcdGetBool (PcdUse5LevelPageTable)) {
> + AsmCpuidEx (
> + CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
> + CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
> + NULL,
> + NULL,
> + &EcxFlags.Uint32,
> + NULL
> + );
> + if (EcxFlags.Bits.FiveLevelPage != 0) {
> + Page5LevelSupport = TRUE;
> + }
> }
> }
>
> DEBUG ((DEBUG_INFO, "AddressBits=%u 5LevelPaging=%u 1GPage=%u\n",
> PhysicalAddressBits, Page5LevelSupport, Page1GSupport));
>
> --
> 2.16.2.windows.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 0/3] Target to enable paging from temporary RAM Done
2023-05-09 10:22 [PATCH v1 0/3] Target to enable paging from temporary RAM Done Wu, Jiaxin
` (2 preceding siblings ...)
2023-05-09 10:22 ` [PATCH v1 3/3] MdeModulePkg/DxeIpl: Align Page table Level setting with previous level Wu, Jiaxin
@ 2023-05-09 14:46 ` Gerd Hoffmann
2023-05-10 1:58 ` Wu, Jiaxin
3 siblings, 1 reply; 23+ messages in thread
From: Gerd Hoffmann @ 2023-05-09 14:46 UTC (permalink / raw)
To: devel, jiaxin.wu
On Tue, May 09, 2023 at 06:22:50PM +0800, Wu, Jiaxin wrote:
> For arch X64, the system enables the page table in SPI flash to cover the
> 0-512G range via the CR4.PAE, MSR.LME, CR0.PG, and CR3 settings. Existing
> code doesn't cover higher address access above 512G before the
> memory-discovered callback. This patch series provides the solution to
> enable paging from temporary RAM Done.
>
> Jiaxin Wu (3):
> UefiCpuPkg/SecCore: Migrate page table to permanent memory
> UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
> MdeModulePkg/DxeIpl: Align Page table Level setting with previous
> level.
Fails to build OvmfPkg/OvmfPkgX64.dsc
Please run this through CI before sending out the patches.
thanks,
Gerd
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [edk2-devel] [PATCH v1 0/3] Target to enable paging from temporary RAM Done
2023-05-09 14:46 ` [edk2-devel] [PATCH v1 0/3] Target to enable paging from temporary RAM Done Gerd Hoffmann
@ 2023-05-10 1:58 ` Wu, Jiaxin
0 siblings, 0 replies; 23+ messages in thread
From: Wu, Jiaxin @ 2023-05-10 1:58 UTC (permalink / raw)
To: devel@edk2.groups.io, kraxel@redhat.com
I will fix it in the next version. Thanks for the comments.
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Gerd
> Hoffmann
> Sent: Tuesday, May 9, 2023 10:47 PM
> To: devel@edk2.groups.io; Wu, Jiaxin <jiaxin.wu@intel.com>
> Subject: Re: [edk2-devel] [PATCH v1 0/3] Target to enable paging from
> temporary RAM Done
>
> On Tue, May 09, 2023 at 06:22:50PM +0800, Wu, Jiaxin wrote:
> > For arch X64, the system enables the page table in SPI flash to cover the
> > 0-512G range via the CR4.PAE, MSR.LME, CR0.PG, and CR3 settings. Existing
> > code doesn't cover higher address access above 512G before the
> > memory-discovered callback. This patch series provides the solution to
> > enable paging from temporary RAM Done.
> >
> > Jiaxin Wu (3):
> > UefiCpuPkg/SecCore: Migrate page table to permanent memory
> > UefiCpuPkg/CpuMpPei: Enable PAE page table if CR0.PG is not set
> > MdeModulePkg/DxeIpl: Align Page table Level setting with previous
> > level.
>
> Fails to build OvmfPkg/OvmfPkgX64.dsc
>
> Please run this through CI before sending out the patches.
>
> thanks,
> Gerd
^ permalink raw reply [flat|nested] 23+ messages in thread