From: "duntan" <dun.tan@intel.com>
To: devel@edk2.groups.io
Cc: Dandan Bi <dandan.bi@intel.com>,
Liming Gao <gaoliming@byosoft.com.cn>, Ray Ni <ray.ni@intel.com>,
Jian J Wang <jian.j.wang@intel.com>
Subject: [PATCH 7/9] MdeModulePkg/DxeIpl: Create page table by CpuPageTableLib
Date: Tue, 28 Mar 2023 10:43:00 +0800
Message-ID: <20230328024302.2085-8-dun.tan@intel.com>
In-Reply-To: <20230328024302.2085-1-dun.tan@intel.com>
Modify CreateIdentityMappingPageTables() in the DxeIpl module to
create page tables with CpuPageTableLib. The function can now build
both the IA32 PAE paging structure and the long-mode 4-level and
5-level paging structures. With the PageTableMap() API of
CpuPageTableLib, the complicated page-table manipulation code can be
removed. This commit doesn't change any functionality.
Signed-off-by: Dun Tan <dun.tan@intel.com>
Cc: Dandan Bi <dandan.bi@intel.com>
Cc: Liming Gao <gaoliming@byosoft.com.cn>
Cc: Ray Ni <ray.ni@intel.com>
Cc: Jian J Wang <jian.j.wang@intel.com>
---
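Note for reviewers: the CreateOrUpdatePageTable() helper added by this patch wraps the
two-call pattern of PageTableMap() from CpuPageTableLib. A minimal sketch of that pattern
is included below for reference only; it is not part of the patch. The paging mode
(Paging4Level), the mapped range (0..4GB) and the attribute values are example inputs,
while the real values are computed in CreateIdentityMappingPageTables().

    UINTN               PageTable;
    UINTN               BufferSize;
    VOID                *Buffer;
    RETURN_STATUS       Status;
    IA32_MAP_ATTRIBUTE  MapAttribute;
    IA32_MAP_ATTRIBUTE  MapMask;

    PageTable                   = 0;            // 0 asks the library to create a new root table
    BufferSize                  = 0;
    MapAttribute.Uint64         = 0;
    MapAttribute.Bits.Present   = 1;
    MapAttribute.Bits.ReadWrite = 1;
    MapMask.Uint64              = MAX_UINT64;   // apply every attribute field

    //
    // First call with a NULL buffer only queries the required page table memory size.
    //
    Status = PageTableMap (&PageTable, Paging4Level, NULL, &BufferSize, 0, SIZE_4GB, &MapAttribute, &MapMask, NULL);
    if (Status == RETURN_BUFFER_TOO_SMALL) {
      //
      // Allocate the page table memory and create the identity mapping for real.
      //
      Buffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (BufferSize));
      ASSERT (Buffer != NULL);
      Status = PageTableMap (&PageTable, Paging4Level, Buffer, &BufferSize, 0, SIZE_4GB, &MapAttribute, &MapMask, NULL);
    }
    ASSERT_RETURN_ERROR (Status);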
MdeModulePkg/Core/DxeIplPeim/DxeIpl.h | 3 ++-
MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf | 4 +++-
MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c | 109 ++++---------------------------------------------------------------------------------------------------------
MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c | 5 +++--
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c | 557 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h | 167 ++++++++++-------------------------------------------------------------------------------------------------------------------------------------------------------------
6 files changed, 166 insertions(+), 679 deletions(-)
diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
index 2f015befce..03e6f8cff7 100644
--- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
+++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.h
@@ -2,7 +2,7 @@
Master header file for DxeIpl PEIM. All source files in this module should
include this file for common definitions.
-Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -42,6 +42,7 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
#include <Library/DebugAgentLib.h>
#include <Library/PeiServicesTablePointerLib.h>
#include <Library/PerformanceLib.h>
+#include <Library/CpuPageTableLib.h>
#define STACK_SIZE 0x20000
#define BSP_STORE_SIZE 0x4000
diff --git a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
index 052ea0ec1a..60623b4f66 100644
--- a/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
+++ b/MdeModulePkg/Core/DxeIplPeim/DxeIpl.inf
@@ -5,7 +5,7 @@
# PPI to discover and dispatch the DXE Foundation and components that are
# needed to run the DXE Foundation.
#
-# Copyright (c) 2006 - 2019, Intel Corporation. All rights reserved.<BR>
+# Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
# Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
# Copyright (c) 2020, Hewlett Packard Enterprise Development LP. All rights reserved.<BR>
# Copyright (c) 2022, Loongson Technology Corporation Limited. All rights reserved.<BR>
@@ -60,6 +60,7 @@
[Packages]
MdePkg/MdePkg.dec
MdeModulePkg/MdeModulePkg.dec
+ UefiCpuPkg/UefiCpuPkg.dec
[Packages.ARM, Packages.AARCH64]
ArmPkg/ArmPkg.dec
@@ -79,6 +80,7 @@
DebugAgentLib
PeiServicesTablePointerLib
PerformanceLib
+ CpuPageTableLib
[LibraryClasses.ARM, LibraryClasses.AARCH64]
ArmMmuLib
diff --git a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
index fdeaaa39d8..e0e2601637 100644
--- a/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
+++ b/MdeModulePkg/Core/DxeIplPeim/Ia32/DxeLoadFunc.c
@@ -1,7 +1,7 @@
/** @file
Ia32-specific functionality for DxeLoad.
-Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -70,107 +70,6 @@ GLOBAL_REMOVE_IF_UNREFERENCED IA32_DESCRIPTOR gLidtDescriptor = {
0
};
-/**
- Allocates and fills in the Page Directory and Page Table Entries to
- establish a 4G page table.
-
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
-
- @return The address of page table.
-
-**/
-UINTN
-Create4GPageTablesIa32Pae (
- IN EFI_PHYSICAL_ADDRESS StackBase,
- IN UINTN StackSize
- )
-{
- UINT8 PhysicalAddressBits;
- EFI_PHYSICAL_ADDRESS PhysicalAddress;
- UINTN IndexOfPdpEntries;
- UINTN IndexOfPageDirectoryEntries;
- UINT32 NumberOfPdpEntriesNeeded;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageMap;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry;
- PAGE_TABLE_ENTRY *PageDirectoryEntry;
- UINTN TotalPagesNum;
- UINTN PageAddress;
- UINT64 AddressEncMask;
-
- //
- // Make sure AddressEncMask is contained to smallest supported address field
- //
- AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
-
- PhysicalAddressBits = 32;
-
- //
- // Calculate the table entries needed.
- //
- NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, (PhysicalAddressBits - 30));
-
- TotalPagesNum = NumberOfPdpEntriesNeeded + 1;
- PageAddress = (UINTN)AllocatePageTableMemory (TotalPagesNum);
- ASSERT (PageAddress != 0);
-
- PageMap = (VOID *)PageAddress;
- PageAddress += SIZE_4KB;
-
- PageDirectoryPointerEntry = PageMap;
- PhysicalAddress = 0;
-
- for (IndexOfPdpEntries = 0; IndexOfPdpEntries < NumberOfPdpEntriesNeeded; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
- //
- // Each Directory Pointer entries points to a page of Page Directory entires.
- // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
- //
- PageDirectoryEntry = (VOID *)PageAddress;
- PageAddress += SIZE_4KB;
-
- //
- // Fill in a Page Directory Pointer Entries
- //
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
- PageDirectoryPointerEntry->Bits.Present = 1;
-
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PhysicalAddress += SIZE_2MB) {
- if ( (IsNullDetectionEnabled () && (PhysicalAddress == 0))
- || ( (PhysicalAddress < StackBase + StackSize)
- && ((PhysicalAddress + SIZE_2MB) > StackBase)))
- {
- //
- // Need to split this 2M page that covers stack range.
- //
- Split2MPageTo4K (PhysicalAddress, (UINT64 *)PageDirectoryEntry, StackBase, StackSize, 0, 0);
- } else {
- //
- // Fill in the Page Directory entries
- //
- PageDirectoryEntry->Uint64 = (UINT64)PhysicalAddress | AddressEncMask;
- PageDirectoryEntry->Bits.ReadWrite = 1;
- PageDirectoryEntry->Bits.Present = 1;
- PageDirectoryEntry->Bits.MustBe1 = 1;
- }
- }
- }
-
- for ( ; IndexOfPdpEntries < 512; IndexOfPdpEntries++, PageDirectoryPointerEntry++) {
- ZeroMem (
- PageDirectoryPointerEntry,
- sizeof (PAGE_MAP_AND_DIRECTORY_POINTER)
- );
- }
-
- //
- // Protect the page table by marking the memory used for page table to be
- // read-only.
- //
- EnablePageTableProtection ((UINTN)PageMap, FALSE);
-
- return (UINTN)PageMap;
-}
-
/**
The function will check if IA32 PAE is supported.
@@ -299,9 +198,9 @@ HandOffToDxeCore (
//
AsmWriteGdtr (&gGdt);
//
- // Create page table and save PageMapLevel4 to CR3
+ // Create page table and save PageMapLevel4 or PageMapLevel5 to CR3
//
- PageTables = CreateIdentityMappingPageTables (BaseOfStack, STACK_SIZE, 0, 0);
+ PageTables = CreateIdentityMappingPageTables (FALSE, BaseOfStack, STACK_SIZE, 0, 0);
//
// End of PEI phase signal
@@ -422,7 +321,7 @@ HandOffToDxeCore (
PageTables = 0;
BuildPageTablesIa32Pae = ToBuildPageTable ();
if (BuildPageTablesIa32Pae) {
- PageTables = Create4GPageTablesIa32Pae (BaseOfStack, STACK_SIZE);
+ PageTables = CreateIdentityMappingPageTables (TRUE, BaseOfStack, STACK_SIZE, 0, 0);
if (IsEnableNonExecNeeded ()) {
EnableExecuteDisableBit ();
}
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c b/MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c
index fa2050cf02..36e32d05e3 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/DxeLoadFunc.c
@@ -1,7 +1,7 @@
/** @file
x64-specifc functionality for DxeLoad.
-Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
**/
@@ -91,9 +91,10 @@ HandOffToDxeCore (
PageTables = 0;
if (FeaturePcdGet (PcdDxeIplBuildPageTables)) {
//
- // Create page table and save PageMapLevel4 to CR3
+ // Create page table and save PageMapLevel4 or PageMapLevel5 to CR3
//
PageTables = CreateIdentityMappingPageTables (
+ FALSE,
(EFI_PHYSICAL_ADDRESS)(UINTN)BaseOfStack,
STACK_SIZE,
(EFI_PHYSICAL_ADDRESS)(UINTN)GhcbBase,
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
index 18b121d768..ac3a2b2dc4 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c
@@ -15,7 +15,7 @@
2) IA-32 Intel(R) Architecture Software Developer's Manual Volume 2:Instruction Set Reference, Intel
3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel
-Copyright (c) 2006 - 2022, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -186,55 +186,6 @@ EnableExecuteDisableBit (
}
}
-/**
- The function will check if page table entry should be splitted to smaller
- granularity.
-
- @param Address Physical memory address.
- @param Size Size of the given physical memory.
- @param StackBase Base address of stack.
- @param StackSize Size of stack.
- @param GhcbBase Base address of GHCB pages.
- @param GhcbSize Size of GHCB area.
-
- @retval TRUE Page table should be split.
- @retval FALSE Page table should not be split.
-**/
-BOOLEAN
-ToSplitPageTable (
- IN EFI_PHYSICAL_ADDRESS Address,
- IN UINTN Size,
- IN EFI_PHYSICAL_ADDRESS StackBase,
- IN UINTN StackSize,
- IN EFI_PHYSICAL_ADDRESS GhcbBase,
- IN UINTN GhcbSize
- )
-{
- if (IsNullDetectionEnabled () && (Address == 0)) {
- return TRUE;
- }
-
- if (PcdGetBool (PcdCpuStackGuard)) {
- if ((StackBase >= Address) && (StackBase < (Address + Size))) {
- return TRUE;
- }
- }
-
- if (PcdGetBool (PcdSetNxForStack)) {
- if ((Address < StackBase + StackSize) && ((Address + Size) > StackBase)) {
- return TRUE;
- }
- }
-
- if (GhcbBase != 0) {
- if ((Address < GhcbBase + GhcbSize) && ((Address + Size) > GhcbBase)) {
- return TRUE;
- }
- }
-
- return FALSE;
-}
-
/**
Initialize a buffer pool for page table use only.
@@ -341,143 +292,42 @@ AllocatePageTableMemory (
}
/**
- Split 2M page to 4K.
-
- @param[in] PhysicalAddress Start physical address the 2M page covered.
- @param[in, out] PageEntry2M Pointer to 2M page entry.
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
- @param[in] GhcbBase GHCB page area base address.
- @param[in] GhcbSize GHCB page area size.
-
+ This function creates a new page table or modifies the page attributes of the memory region
+ specified by BaseAddress and Length, changing them from their current attributes to the
+ attributes specified by MapAttribute and MapMask.
+
+ @param[in] PageTable Pointer to Page table address.
+ @param[in] PagingMode The paging mode.
+ @param[in] BaseAddress The start of the linear address range.
+ @param[in] Length The length of the linear address range.
+ @param[in] MapAttribute The attribute of the linear address range.
+ @param[in] MapMask The mask used for attribute.
**/
VOID
-Split2MPageTo4K (
- IN EFI_PHYSICAL_ADDRESS PhysicalAddress,
- IN OUT UINT64 *PageEntry2M,
- IN EFI_PHYSICAL_ADDRESS StackBase,
- IN UINTN StackSize,
- IN EFI_PHYSICAL_ADDRESS GhcbBase,
- IN UINTN GhcbSize
+CreateOrUpdatePageTable (
+ IN UINTN *PageTable,
+ IN PAGING_MODE PagingMode,
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN IA32_MAP_ATTRIBUTE *MapAttribute,
+ IN IA32_MAP_ATTRIBUTE *MapMask
)
{
- EFI_PHYSICAL_ADDRESS PhysicalAddress4K;
- UINTN IndexOfPageTableEntries;
- PAGE_TABLE_4K_ENTRY *PageTableEntry;
- UINT64 AddressEncMask;
-
- //
- // Make sure AddressEncMask is contained to smallest supported address field
- //
- AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
-
- PageTableEntry = AllocatePageTableMemory (1);
- ASSERT (PageTableEntry != NULL);
-
- //
- // Fill in 2M page entry.
- //
- *PageEntry2M = (UINT64)(UINTN)PageTableEntry | AddressEncMask | IA32_PG_P | IA32_PG_RW;
-
- PhysicalAddress4K = PhysicalAddress;
- for (IndexOfPageTableEntries = 0; IndexOfPageTableEntries < 512; IndexOfPageTableEntries++, PageTableEntry++, PhysicalAddress4K += SIZE_4KB) {
- //
- // Fill in the Page Table entries
- //
- PageTableEntry->Uint64 = (UINT64)PhysicalAddress4K;
-
- //
- // The GHCB range consists of two pages per CPU, the GHCB and a
- // per-CPU variable page. The GHCB page needs to be mapped as an
- // unencrypted page while the per-CPU variable page needs to be
- // mapped encrypted. These pages alternate in assignment.
- //
- if ( (GhcbBase == 0)
- || (PhysicalAddress4K < GhcbBase)
- || (PhysicalAddress4K >= GhcbBase + GhcbSize)
- || (((PhysicalAddress4K - GhcbBase) & SIZE_4KB) != 0))
- {
- PageTableEntry->Uint64 |= AddressEncMask;
- }
-
- PageTableEntry->Bits.ReadWrite = 1;
-
- if ((IsNullDetectionEnabled () && (PhysicalAddress4K == 0)) ||
- (PcdGetBool (PcdCpuStackGuard) && (PhysicalAddress4K == StackBase)))
- {
- PageTableEntry->Bits.Present = 0;
- } else {
- PageTableEntry->Bits.Present = 1;
- }
-
- if ( PcdGetBool (PcdSetNxForStack)
- && (PhysicalAddress4K >= StackBase)
- && (PhysicalAddress4K < StackBase + StackSize))
- {
- //
- // Set Nx bit for stack.
- //
- PageTableEntry->Bits.Nx = 1;
- }
+ RETURN_STATUS Status;
+ UINTN PageTableBufferSize;
+ VOID *PageTableBuffer;
+
+ PageTableBufferSize = 0;
+ Status = PageTableMap (PageTable, PagingMode, NULL, &PageTableBufferSize, BaseAddress, Length, MapAttribute, MapMask, NULL);
+ if (Status == RETURN_BUFFER_TOO_SMALL) {
+ PageTableBuffer = AllocatePageTableMemory (EFI_SIZE_TO_PAGES (PageTableBufferSize));
+ DEBUG ((DEBUG_INFO, "DxeIpl: 0x%x bytes needed for page table\n", PageTableBufferSize));
+ ASSERT (PageTableBuffer != NULL);
+ Status = PageTableMap (PageTable, PagingMode, PageTableBuffer, &PageTableBufferSize, BaseAddress, Length, MapAttribute, MapMask, NULL);
}
-}
-
-/**
- Split 1G page to 2M.
- @param[in] PhysicalAddress Start physical address the 1G page covered.
- @param[in, out] PageEntry1G Pointer to 1G page entry.
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
- @param[in] GhcbBase GHCB page area base address.
- @param[in] GhcbSize GHCB page area size.
-
-**/
-VOID
-Split1GPageTo2M (
- IN EFI_PHYSICAL_ADDRESS PhysicalAddress,
- IN OUT UINT64 *PageEntry1G,
- IN EFI_PHYSICAL_ADDRESS StackBase,
- IN UINTN StackSize,
- IN EFI_PHYSICAL_ADDRESS GhcbBase,
- IN UINTN GhcbSize
- )
-{
- EFI_PHYSICAL_ADDRESS PhysicalAddress2M;
- UINTN IndexOfPageDirectoryEntries;
- PAGE_TABLE_ENTRY *PageDirectoryEntry;
- UINT64 AddressEncMask;
-
- //
- // Make sure AddressEncMask is contained to smallest supported address field
- //
- AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
-
- PageDirectoryEntry = AllocatePageTableMemory (1);
- ASSERT (PageDirectoryEntry != NULL);
-
- //
- // Fill in 1G page entry.
- //
- *PageEntry1G = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask | IA32_PG_P | IA32_PG_RW;
-
- PhysicalAddress2M = PhysicalAddress;
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PhysicalAddress2M += SIZE_2MB) {
- if (ToSplitPageTable (PhysicalAddress2M, SIZE_2MB, StackBase, StackSize, GhcbBase, GhcbSize)) {
- //
- // Need to split this 2M page that covers NULL or stack range.
- //
- Split2MPageTo4K (PhysicalAddress2M, (UINT64 *)PageDirectoryEntry, StackBase, StackSize, GhcbBase, GhcbSize);
- } else {
- //
- // Fill in the Page Directory entries
- //
- PageDirectoryEntry->Uint64 = (UINT64)PhysicalAddress2M | AddressEncMask;
- PageDirectoryEntry->Bits.ReadWrite = 1;
- PageDirectoryEntry->Bits.Present = 1;
- PageDirectoryEntry->Bits.MustBe1 = 1;
- }
- }
+ ASSERT_RETURN_ERROR (Status);
+ ASSERT (PageTableBufferSize == 0);
}
/**
@@ -657,19 +507,20 @@ EnablePageTableProtection (
}
/**
- Allocates and fills in the Page Directory and Page Table Entries to
+ Create IA32 PAE paging or 4-level/5-level paging for long mode to
establish a 1:1 Virtual to Physical mapping.
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
- @param[in] GhcbBase GHCB base address.
- @param[in] GhcbSize GHCB size.
-
- @return The address of 4 level page map.
+ @param[in] Is32BitPageTable Whether to create a 32-bit PAE page table.
+ @param[in] StackBase Stack base address.
+ @param[in] StackSize Stack size.
+ @param[in] GhcbBase GHCB base address.
+ @param[in] GhcbSize GHCB size.
+ @return The address of page table.
**/
UINTN
CreateIdentityMappingPageTables (
+ IN BOOLEAN Is32BitPageTable,
IN EFI_PHYSICAL_ADDRESS StackBase,
IN UINTN StackSize,
IN EFI_PHYSICAL_ADDRESS GhcbBase,
@@ -680,274 +531,154 @@ CreateIdentityMappingPageTables (
CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_ECX EcxFlags;
UINT32 RegEdx;
UINT8 PhysicalAddressBits;
- EFI_PHYSICAL_ADDRESS PageAddress;
- UINTN IndexOfPml5Entries;
- UINTN IndexOfPml4Entries;
- UINTN IndexOfPdpEntries;
- UINTN IndexOfPageDirectoryEntries;
- UINT32 NumberOfPml5EntriesNeeded;
- UINT32 NumberOfPml4EntriesNeeded;
- UINT32 NumberOfPdpEntriesNeeded;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel5Entry;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageMapLevel4Entry;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageMap;
- PAGE_MAP_AND_DIRECTORY_POINTER *PageDirectoryPointerEntry;
- PAGE_TABLE_ENTRY *PageDirectoryEntry;
- UINTN TotalPagesNum;
- UINTN BigPageAddress;
VOID *Hob;
BOOLEAN Page5LevelSupport;
BOOLEAN Page1GSupport;
- PAGE_TABLE_1G_ENTRY *PageDirectory1GEntry;
UINT64 AddressEncMask;
IA32_CR4 Cr4;
-
- //
- // Set PageMapLevel5Entry to suppress incorrect compiler/analyzer warnings
- //
- PageMapLevel5Entry = NULL;
+ PAGING_MODE PagingMode;
+ UINTN PageTable;
+ IA32_MAP_ATTRIBUTE MapAttribute;
+ IA32_MAP_ATTRIBUTE MapMask;
+ EFI_PHYSICAL_ADDRESS GhcbBase4K;
//
// Make sure AddressEncMask is contained to smallest supported address field
//
- AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
-
- Page1GSupport = FALSE;
- if (PcdGetBool (PcdUse1GPageTable)) {
- AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
- if (RegEax >= 0x80000001) {
- AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
- if ((RegEdx & BIT26) != 0) {
- Page1GSupport = TRUE;
+ AddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+ Page5LevelSupport = FALSE;
+ Page1GSupport = FALSE;
+
+ if (Is32BitPageTable) {
+ PagingMode = PagingPae;
+ PhysicalAddressBits = 32;
+ } else {
+ if (PcdGetBool (PcdUse1GPageTable)) {
+ AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
+ if (RegEax >= 0x80000001) {
+ AsmCpuid (0x80000001, NULL, NULL, NULL, &RegEdx);
+ if ((RegEdx & BIT26) != 0) {
+ Page1GSupport = TRUE;
+ }
}
}
- }
- //
- // Get physical address bits supported.
- //
- Hob = GetFirstHob (EFI_HOB_TYPE_CPU);
- if (Hob != NULL) {
- PhysicalAddressBits = ((EFI_HOB_CPU *)Hob)->SizeOfMemorySpace;
- } else {
- AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
- if (RegEax >= 0x80000008) {
- AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
- PhysicalAddressBits = (UINT8)RegEax;
+ //
+ // Get physical address bits supported.
+ //
+ Hob = GetFirstHob (EFI_HOB_TYPE_CPU);
+ if (Hob != NULL) {
+ PhysicalAddressBits = ((EFI_HOB_CPU *)Hob)->SizeOfMemorySpace;
} else {
- PhysicalAddressBits = 36;
+ AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
+ if (RegEax >= 0x80000008) {
+ AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
+ PhysicalAddressBits = (UINT8)RegEax;
+ } else {
+ PhysicalAddressBits = 36;
+ }
}
- }
- Page5LevelSupport = FALSE;
- if (PcdGetBool (PcdUse5LevelPageTable)) {
- AsmCpuidEx (
- CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
- CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
- NULL,
- NULL,
- &EcxFlags.Uint32,
- NULL
- );
- if (EcxFlags.Bits.FiveLevelPage != 0) {
- Page5LevelSupport = TRUE;
+ if (PcdGetBool (PcdUse5LevelPageTable)) {
+ AsmCpuidEx (
+ CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS,
+ CPUID_STRUCTURED_EXTENDED_FEATURE_FLAGS_SUB_LEAF_INFO,
+ NULL,
+ NULL,
+ &EcxFlags.Uint32,
+ NULL
+ );
+ if (EcxFlags.Bits.FiveLevelPage != 0) {
+ Page5LevelSupport = TRUE;
+ }
}
- }
-
- DEBUG ((DEBUG_INFO, "AddressBits=%u 5LevelPaging=%u 1GPage=%u\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport));
- //
- // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
- // when 5-Level Paging is disabled,
- // due to either unsupported by HW, or disabled by PCD.
- //
- ASSERT (PhysicalAddressBits <= 52);
- if (!Page5LevelSupport && (PhysicalAddressBits > 48)) {
- PhysicalAddressBits = 48;
- }
-
- //
- // Calculate the table entries needed.
- //
- NumberOfPml5EntriesNeeded = 1;
- if (PhysicalAddressBits > 48) {
- NumberOfPml5EntriesNeeded = (UINT32)LShiftU64 (1, PhysicalAddressBits - 48);
- PhysicalAddressBits = 48;
- }
+ if (Page5LevelSupport) {
+ if (Page1GSupport) {
+ PagingMode = Paging5Level1GB;
+ } else {
+ PagingMode = Paging5Level;
+ }
+ } else {
+ if (Page1GSupport) {
+ PagingMode = Paging4Level1GB;
+ } else {
+ PagingMode = Paging4Level;
+ }
+ }
- NumberOfPml4EntriesNeeded = 1;
- if (PhysicalAddressBits > 39) {
- NumberOfPml4EntriesNeeded = (UINT32)LShiftU64 (1, PhysicalAddressBits - 39);
- PhysicalAddressBits = 39;
+ DEBUG ((DEBUG_INFO, "AddressBits=%u 5LevelPaging=%u 1GPage=%u\n", PhysicalAddressBits, Page5LevelSupport, Page1GSupport));
+ //
+ // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses
+ // when 5-Level Paging is disabled, due to either unsupported by HW, or disabled by PCD.
+ //
+ ASSERT (PhysicalAddressBits <= 52);
+ if (!Page5LevelSupport && (PhysicalAddressBits > 48)) {
+ PhysicalAddressBits = 48;
+ }
}
- NumberOfPdpEntriesNeeded = 1;
- ASSERT (PhysicalAddressBits > 30);
- NumberOfPdpEntriesNeeded = (UINT32)LShiftU64 (1, PhysicalAddressBits - 30);
+ PageTable = 0;
+ MapAttribute.Uint64 = AddressEncMask;
+ MapAttribute.Bits.Present = 1;
+ MapAttribute.Bits.ReadWrite = 1;
+ MapMask.Uint64 = MAX_UINT64;
+ CreateOrUpdatePageTable (&PageTable, PagingMode, 0, LShiftU64 (1, PhysicalAddressBits), &MapAttribute, &MapMask);
- //
- // Pre-allocate big pages to avoid later allocations.
- //
- if (!Page1GSupport) {
- TotalPagesNum = ((NumberOfPdpEntriesNeeded + 1) * NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
- } else {
- TotalPagesNum = (NumberOfPml4EntriesNeeded + 1) * NumberOfPml5EntriesNeeded + 1;
- }
-
- //
- // Substract the one page occupied by PML5 entries if 5-Level Paging is disabled.
- //
- if (!Page5LevelSupport) {
- TotalPagesNum--;
+ if ((GhcbBase > 0) && (GhcbSize > 0) && (AddressEncMask != 0)) {
+ //
+ // The GHCB range consists of two pages per CPU, the GHCB and a
+ // per-CPU variable page. The GHCB page needs to be mapped as an
+ // unencrypted page while the per-CPU variable page needs to be
+ // mapped encrypted. These pages alternate in assignment.
+ //
+ ASSERT (Is32BitPageTable == FALSE);
+ GhcbBase4K = ALIGN_VALUE (GhcbBase, SIZE_4KB);
+ MapAttribute.Uint64 = GhcbBase4K;
+ MapMask.Uint64 = 0;
+ MapMask.Bits.PageTableBaseAddressLow = 1;
+ CreateOrUpdatePageTable (&PageTable, PagingMode, GhcbBase4K, SIZE_4KB, &MapAttribute, &MapMask);
}
- DEBUG ((
- DEBUG_INFO,
- "Pml5=%u Pml4=%u Pdp=%u TotalPage=%Lu\n",
- NumberOfPml5EntriesNeeded,
- NumberOfPml4EntriesNeeded,
- NumberOfPdpEntriesNeeded,
- (UINT64)TotalPagesNum
- ));
-
- BigPageAddress = (UINTN)AllocatePageTableMemory (TotalPagesNum);
- ASSERT (BigPageAddress != 0);
-
- //
- // By architecture only one PageMapLevel4 exists - so lets allocate storage for it.
- //
- PageMap = (VOID *)BigPageAddress;
- if (Page5LevelSupport) {
+ if (PcdGetBool (PcdSetNxForStack)) {
//
- // By architecture only one PageMapLevel5 exists - so lets allocate storage for it.
+ // Set the stack as Nx in page table.
//
- PageMapLevel5Entry = PageMap;
- BigPageAddress += SIZE_4KB;
+ MapAttribute.Uint64 = 0;
+ MapAttribute.Bits.Nx = 1;
+ MapMask.Uint64 = 0;
+ MapMask.Bits.Nx = 1;
+ CreateOrUpdatePageTable (&PageTable, PagingMode, StackBase, StackSize, &MapAttribute, &MapMask);
}
- PageAddress = 0;
-
- for ( IndexOfPml5Entries = 0
- ; IndexOfPml5Entries < NumberOfPml5EntriesNeeded
- ; IndexOfPml5Entries++)
- {
+ MapAttribute.Uint64 = 0;
+ MapMask.Uint64 = 0;
+ MapMask.Bits.Present = 1;
+ if (IsNullDetectionEnabled ()) {
//
- // Each PML5 entry points to a page of PML4 entires.
- // So lets allocate space for them and fill them in in the IndexOfPml4Entries loop.
- // When 5-Level Paging is disabled, below allocation happens only once.
+ // Set [0, 4KB] as not-present in page table.
//
- PageMapLevel4Entry = (VOID *)BigPageAddress;
- BigPageAddress += SIZE_4KB;
-
- if (Page5LevelSupport) {
- //
- // Make a PML5 Entry
- //
- PageMapLevel5Entry->Uint64 = (UINT64)(UINTN)PageMapLevel4Entry | AddressEncMask;
- PageMapLevel5Entry->Bits.ReadWrite = 1;
- PageMapLevel5Entry->Bits.Present = 1;
- PageMapLevel5Entry++;
- }
-
- for ( IndexOfPml4Entries = 0
- ; IndexOfPml4Entries < (NumberOfPml5EntriesNeeded == 1 ? NumberOfPml4EntriesNeeded : 512)
- ; IndexOfPml4Entries++, PageMapLevel4Entry++)
- {
- //
- // Each PML4 entry points to a page of Page Directory Pointer entires.
- // So lets allocate space for them and fill them in in the IndexOfPdpEntries loop.
- //
- PageDirectoryPointerEntry = (VOID *)BigPageAddress;
- BigPageAddress += SIZE_4KB;
-
- //
- // Make a PML4 Entry
- //
- PageMapLevel4Entry->Uint64 = (UINT64)(UINTN)PageDirectoryPointerEntry | AddressEncMask;
- PageMapLevel4Entry->Bits.ReadWrite = 1;
- PageMapLevel4Entry->Bits.Present = 1;
-
- if (Page1GSupport) {
- PageDirectory1GEntry = (VOID *)PageDirectoryPointerEntry;
-
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectory1GEntry++, PageAddress += SIZE_1GB) {
- if (ToSplitPageTable (PageAddress, SIZE_1GB, StackBase, StackSize, GhcbBase, GhcbSize)) {
- Split1GPageTo2M (PageAddress, (UINT64 *)PageDirectory1GEntry, StackBase, StackSize, GhcbBase, GhcbSize);
- } else {
- //
- // Fill in the Page Directory entries
- //
- PageDirectory1GEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
- PageDirectory1GEntry->Bits.ReadWrite = 1;
- PageDirectory1GEntry->Bits.Present = 1;
- PageDirectory1GEntry->Bits.MustBe1 = 1;
- }
- }
- } else {
- for ( IndexOfPdpEntries = 0
- ; IndexOfPdpEntries < (NumberOfPml4EntriesNeeded == 1 ? NumberOfPdpEntriesNeeded : 512)
- ; IndexOfPdpEntries++, PageDirectoryPointerEntry++)
- {
- //
- // Each Directory Pointer entries points to a page of Page Directory entires.
- // So allocate space for them and fill them in in the IndexOfPageDirectoryEntries loop.
- //
- PageDirectoryEntry = (VOID *)BigPageAddress;
- BigPageAddress += SIZE_4KB;
-
- //
- // Fill in a Page Directory Pointer Entries
- //
- PageDirectoryPointerEntry->Uint64 = (UINT64)(UINTN)PageDirectoryEntry | AddressEncMask;
- PageDirectoryPointerEntry->Bits.ReadWrite = 1;
- PageDirectoryPointerEntry->Bits.Present = 1;
-
- for (IndexOfPageDirectoryEntries = 0; IndexOfPageDirectoryEntries < 512; IndexOfPageDirectoryEntries++, PageDirectoryEntry++, PageAddress += SIZE_2MB) {
- if (ToSplitPageTable (PageAddress, SIZE_2MB, StackBase, StackSize, GhcbBase, GhcbSize)) {
- //
- // Need to split this 2M page that covers NULL or stack range.
- //
- Split2MPageTo4K (PageAddress, (UINT64 *)PageDirectoryEntry, StackBase, StackSize, GhcbBase, GhcbSize);
- } else {
- //
- // Fill in the Page Directory entries
- //
- PageDirectoryEntry->Uint64 = (UINT64)PageAddress | AddressEncMask;
- PageDirectoryEntry->Bits.ReadWrite = 1;
- PageDirectoryEntry->Bits.Present = 1;
- PageDirectoryEntry->Bits.MustBe1 = 1;
- }
- }
- }
-
- //
- // Fill with null entry for unused PDPTE
- //
- ZeroMem (PageDirectoryPointerEntry, (512 - IndexOfPdpEntries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
- }
- }
+ CreateOrUpdatePageTable (&PageTable, PagingMode, 0, SIZE_4KB, &MapAttribute, &MapMask);
+ }
+ if (PcdGetBool (PcdCpuStackGuard)) {
//
- // For the PML4 entries we are not using fill in a null entry.
+ // Set the last 4KB of stack as not-present in page table.
//
- ZeroMem (PageMapLevel4Entry, (512 - IndexOfPml4Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
+ CreateOrUpdatePageTable (&PageTable, PagingMode, StackBase, SIZE_4KB, &MapAttribute, &MapMask);
}
if (Page5LevelSupport) {
Cr4.UintN = AsmReadCr4 ();
Cr4.Bits.LA57 = 1;
AsmWriteCr4 (Cr4.UintN);
- //
- // For the PML5 entries we are not using fill in a null entry.
- //
- ZeroMem (PageMapLevel5Entry, (512 - IndexOfPml5Entries) * sizeof (PAGE_MAP_AND_DIRECTORY_POINTER));
}
//
// Protect the page table by marking the memory used for page table to be
// read-only.
//
- EnablePageTableProtection ((UINTN)PageMap, TRUE);
+ EnablePageTableProtection ((UINTN)PageTable, TRUE);
//
// Set IA32_EFER.NXE if necessary.
@@ -956,5 +687,5 @@ CreateIdentityMappingPageTables (
EnableExecuteDisableBit ();
}
- return (UINTN)PageMap;
+ return PageTable;
}
diff --git a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
index 616ebe42b0..7d4bc4e4ba 100644
--- a/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
+++ b/MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.h
@@ -7,7 +7,7 @@
3) IA-32 Intel(R) Architecture Software Developer's Manual Volume 3:System Programmer's Guide, Intel
4) AMD64 Architecture Programmer's Manual Volume 2: System Programming
-Copyright (c) 2006 - 2018, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2023, Intel Corporation. All rights reserved.<BR>
Copyright (c) 2017, AMD Incorporated. All rights reserved.<BR>
SPDX-License-Identifier: BSD-2-Clause-Patent
@@ -46,99 +46,6 @@ typedef struct {
UINT32 Reserved;
} X64_IDT_GATE_DESCRIPTOR;
-//
-// Page-Map Level-4 Offset (PML4) and
-// Page-Directory-Pointer Offset (PDPE) entries 4K & 2MB
-//
-
-typedef union {
- struct {
- UINT64 Present : 1; // 0 = Not present in memory, 1 = Present in memory
- UINT64 ReadWrite : 1; // 0 = Read-Only, 1= Read/Write
- UINT64 UserSupervisor : 1; // 0 = Supervisor, 1=User
- UINT64 WriteThrough : 1; // 0 = Write-Back caching, 1=Write-Through caching
- UINT64 CacheDisabled : 1; // 0 = Cached, 1=Non-Cached
- UINT64 Accessed : 1; // 0 = Not accessed, 1 = Accessed (set by CPU)
- UINT64 Reserved : 1; // Reserved
- UINT64 MustBeZero : 2; // Must Be Zero
- UINT64 Available : 3; // Available for use by system software
- UINT64 PageTableBaseAddress : 40; // Page Table Base Address
- UINT64 AvabilableHigh : 11; // Available for use by system software
- UINT64 Nx : 1; // No Execute bit
- } Bits;
- UINT64 Uint64;
-} PAGE_MAP_AND_DIRECTORY_POINTER;
-
-//
-// Page Table Entry 4KB
-//
-typedef union {
- struct {
- UINT64 Present : 1; // 0 = Not present in memory, 1 = Present in memory
- UINT64 ReadWrite : 1; // 0 = Read-Only, 1= Read/Write
- UINT64 UserSupervisor : 1; // 0 = Supervisor, 1=User
- UINT64 WriteThrough : 1; // 0 = Write-Back caching, 1=Write-Through caching
- UINT64 CacheDisabled : 1; // 0 = Cached, 1=Non-Cached
- UINT64 Accessed : 1; // 0 = Not accessed, 1 = Accessed (set by CPU)
- UINT64 Dirty : 1; // 0 = Not Dirty, 1 = written by processor on access to page
- UINT64 PAT : 1; //
- UINT64 Global : 1; // 0 = Not global page, 1 = global page TLB not cleared on CR3 write
- UINT64 Available : 3; // Available for use by system software
- UINT64 PageTableBaseAddress : 40; // Page Table Base Address
- UINT64 AvabilableHigh : 11; // Available for use by system software
- UINT64 Nx : 1; // 0 = Execute Code, 1 = No Code Execution
- } Bits;
- UINT64 Uint64;
-} PAGE_TABLE_4K_ENTRY;
-
-//
-// Page Table Entry 2MB
-//
-typedef union {
- struct {
- UINT64 Present : 1; // 0 = Not present in memory, 1 = Present in memory
- UINT64 ReadWrite : 1; // 0 = Read-Only, 1= Read/Write
- UINT64 UserSupervisor : 1; // 0 = Supervisor, 1=User
- UINT64 WriteThrough : 1; // 0 = Write-Back caching, 1=Write-Through caching
- UINT64 CacheDisabled : 1; // 0 = Cached, 1=Non-Cached
- UINT64 Accessed : 1; // 0 = Not accessed, 1 = Accessed (set by CPU)
- UINT64 Dirty : 1; // 0 = Not Dirty, 1 = written by processor on access to page
- UINT64 MustBe1 : 1; // Must be 1
- UINT64 Global : 1; // 0 = Not global page, 1 = global page TLB not cleared on CR3 write
- UINT64 Available : 3; // Available for use by system software
- UINT64 PAT : 1; //
- UINT64 MustBeZero : 8; // Must be zero;
- UINT64 PageTableBaseAddress : 31; // Page Table Base Address
- UINT64 AvabilableHigh : 11; // Available for use by system software
- UINT64 Nx : 1; // 0 = Execute Code, 1 = No Code Execution
- } Bits;
- UINT64 Uint64;
-} PAGE_TABLE_ENTRY;
-
-//
-// Page Table Entry 1GB
-//
-typedef union {
- struct {
- UINT64 Present : 1; // 0 = Not present in memory, 1 = Present in memory
- UINT64 ReadWrite : 1; // 0 = Read-Only, 1= Read/Write
- UINT64 UserSupervisor : 1; // 0 = Supervisor, 1=User
- UINT64 WriteThrough : 1; // 0 = Write-Back caching, 1=Write-Through caching
- UINT64 CacheDisabled : 1; // 0 = Cached, 1=Non-Cached
- UINT64 Accessed : 1; // 0 = Not accessed, 1 = Accessed (set by CPU)
- UINT64 Dirty : 1; // 0 = Not Dirty, 1 = written by processor on access to page
- UINT64 MustBe1 : 1; // Must be 1
- UINT64 Global : 1; // 0 = Not global page, 1 = global page TLB not cleared on CR3 write
- UINT64 Available : 3; // Available for use by system software
- UINT64 PAT : 1; //
- UINT64 MustBeZero : 17; // Must be zero;
- UINT64 PageTableBaseAddress : 22; // Page Table Base Address
- UINT64 AvabilableHigh : 11; // Available for use by system software
- UINT64 Nx : 1; // 0 = Execute Code, 1 = No Code Execution
- } Bits;
- UINT64 Uint64;
-} PAGE_TABLE_1G_ENTRY;
-
#pragma pack()
#define CR0_WP BIT16
@@ -194,44 +101,25 @@ EnableExecuteDisableBit (
);
/**
- Split 2M page to 4K.
-
- @param[in] PhysicalAddress Start physical address the 2M page covered.
- @param[in, out] PageEntry2M Pointer to 2M page entry.
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
- @param[in] GhcbBase GHCB page area base address.
- @param[in] GhcbSize GHCB page area size.
-
-**/
-VOID
-Split2MPageTo4K (
- IN EFI_PHYSICAL_ADDRESS PhysicalAddress,
- IN OUT UINT64 *PageEntry2M,
- IN EFI_PHYSICAL_ADDRESS StackBase,
- IN UINTN StackSize,
- IN EFI_PHYSICAL_ADDRESS GhcbBase,
- IN UINTN GhcbSize
- );
-
-/**
- Allocates and fills in the Page Directory and Page Table Entries to
+ Create IA32 PAE paging or 4-level/5-level paging for long mode to
establish a 1:1 Virtual to Physical mapping.
- @param[in] StackBase Stack base address.
- @param[in] StackSize Stack size.
- @param[in] GhcbBase GHCB page area base address.
- @param[in] GhcbSize GHCB page area size.
+ @param[in] Is32BitPageTable Whether to create a 32-bit PAE page table.
+ @param[in] StackBase Stack base address.
+ @param[in] StackSize Stack size.
+ @param[in] GhcbBase GHCB page area base address.
+ @param[in] GhcbSize GHCB page area size.
- @return The address of 4 level page map.
+ @return The address of page table.
**/
UINTN
CreateIdentityMappingPageTables (
+ IN BOOLEAN Is32BitPageTable,
IN EFI_PHYSICAL_ADDRESS StackBase,
IN UINTN StackSize,
IN EFI_PHYSICAL_ADDRESS GhcbBase,
- IN UINTN GhcbkSize
+ IN UINTN GhcbSize
);
/**
@@ -289,39 +177,4 @@ IsNullDetectionEnabled (
VOID
);
-/**
- Prevent the memory pages used for page table from been overwritten.
-
- @param[in] PageTableBase Base address of page table (CR3).
- @param[in] Level4Paging Level 4 paging flag.
-
-**/
-VOID
-EnablePageTableProtection (
- IN UINTN PageTableBase,
- IN BOOLEAN Level4Paging
- );
-
-/**
- This API provides a way to allocate memory for page table.
-
- This API can be called more than once to allocate memory for page tables.
-
- Allocates the number of 4KB pages and returns a pointer to the allocated
- buffer. The buffer returned is aligned on a 4KB boundary.
-
- If Pages is 0, then NULL is returned.
- If there is not enough memory remaining to satisfy the request, then NULL is
- returned.
-
- @param Pages The number of 4 KB pages to allocate.
-
- @return A pointer to the allocated buffer or NULL if allocation fails.
-
-**/
-VOID *
-AllocatePageTableMemory (
- IN UINTN Pages
- );
-
#endif
--
2.31.1.windows.1