* [PATCH 0/5] Implement heap guard feature
@ 2017-10-11 3:18 Jian J Wang
2017-10-11 3:18 ` [PATCH 1/5] MdeModulePkg/DxeCore: Implement heap guard feature for UEFI Jian J Wang
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel
Cc: Star Zeng, Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
This feature makes use of the paging mechanism to add a hidden (not-present)
page just before and after each allocated memory block. If code tries to
access memory outside of the allocated part, a page fault exception is
triggered.
This feature is disabled by default, and enabling it in a production build of
BIOS is not recommended.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
Jian J Wang (5):
MdeModulePkg/DxeCore: Implement heap guard feature for UEFI
MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode
MdeModulePkg/MdeModulePkg.dec,.uni: Add heap guard related PCDs and
string tokens
UefiCpuPkg/CpuDxe: Reduce debug message
UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection
MdeModulePkg/Core/Dxe/DxeMain.inf | 4 +
MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 1171 +++++++++++++++++++++
MdeModulePkg/Core/Dxe/Mem/HeapGuard.h | 391 +++++++
MdeModulePkg/Core/Dxe/Mem/Imem.h | 38 +-
MdeModulePkg/Core/Dxe/Mem/Page.c | 129 ++-
MdeModulePkg/Core/Dxe/Mem/Pool.c | 154 ++-
MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c | 1438 ++++++++++++++++++++++++++
MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h | 395 +++++++
MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c | 704 +++++++++++++
MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h | 174 ++++
MdeModulePkg/Core/PiSmmCore/Page.c | 51 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.c | 12 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.h | 80 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf | 8 +
MdeModulePkg/Core/PiSmmCore/Pool.c | 77 +-
MdeModulePkg/MdeModulePkg.dec | 57 +
MdeModulePkg/MdeModulePkg.uni | 58 ++
UefiCpuPkg/CpuDxe/CpuPageTable.c | 5 +-
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 +-
20 files changed, 4854 insertions(+), 95 deletions(-)
create mode 100644 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
create mode 100644 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
--
2.14.1.windows.1
* [PATCH 1/5] MdeModulePkg/DxeCore: Implement heap guard feature for UEFI
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
@ 2017-10-11 3:18 ` Jian J Wang
2017-10-11 3:18 ` [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode Jian J Wang
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel
Cc: Star Zeng, Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
This feature makes use of the paging mechanism to add a hidden (not-present)
page just before and after each allocated memory block. If code tries to
access memory outside of the allocated part, a page fault exception is
triggered.
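To make the effect concrete, here is a hypothetical caller fragment (not part
of this patch) showing the kind of off-by-one access the guard turns into an
immediate, precisely located fault; it assumes page guard is enabled for
EfiBootServicesData:

  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  Memory;
  UINT8                 *Buffer;

  //
  // One page is requested; with the guard enabled, a not-present Guard page
  // is placed immediately before and after the returned page.
  //
  Status = gBS->AllocatePages (AllocateAnyPages, EfiBootServicesData, 1, &Memory);
  ASSERT_EFI_ERROR (Status);

  Buffer = (UINT8 *)(UINTN)Memory;
  Buffer[EFI_PAGE_SIZE] = 0;  // first byte past the allocation lands on the
                              // tail Guard page and raises #PF right here

Without the guard, the same write would silently corrupt whatever happens to
be allocated next, and the damage would only show up much later.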
This feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT0 and BIT1 of PcdHeapGuardPropertyMask enable or disable the memory guard
for page and pool allocations respectively. PcdHeapGuardPageType and
PcdHeapGuardPoolType enable or disable the guard for specific memory types.
For example, the guard can be turned on only for EfiBootServicesData and
EfiRuntimeServicesData by setting the corresponding PCD to 0x50, i.e.
BIT4|BIT6, the indices of those two types in the EFI_MEMORY_TYPE enumeration.
A pool allocation is usually not an integer multiple of one page, and is more
often smaller than a page, so overflow cannot be monitored at both the top and
the bottom of the pool memory at the same time. BIT7 of
PcdHeapGuardPropertyMask controls where the pool head is positioned, so that
it is easier to catch overflow either in the address-increasing direction or
in the decreasing direction.
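For reference, a platform could opt in from its DSC along the following lines.
This is only a sketch: the PCD declarations, types and defaults are added by
patch 3/5 of this series, so the exact values below are an assumed example
rather than required settings.

  [PcdsFixedAtBuild]
    # Enable both the page guard and the pool guard (BIT0|BIT1). BIT7 would
    # additionally move the pool head next to the head Guard instead of the
    # tail Guard.
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x03
    # Guard only EfiBootServicesData (BIT4) and EfiRuntimeServicesData (BIT6).
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x50
    gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x50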
Note: Turning on heap guard, especially pool guard, introduces a large number
of memory map fragments. Windows 10 has a limitation in its boot loader, which
accepts at most 512 memory descriptors passed from the BIOS; this will prevent
Windows 10 from booting if heap guard is enabled. Recent Linux distributions
using the GRUB boot loader have no such issue. In general it is not
recommended to enable this feature in a production build of BIOS.
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
MdeModulePkg/Core/Dxe/DxeMain.inf | 4 +
MdeModulePkg/Core/Dxe/Mem/HeapGuard.c | 1171 +++++++++++++++++++++++++++++++++
MdeModulePkg/Core/Dxe/Mem/HeapGuard.h | 391 +++++++++++
MdeModulePkg/Core/Dxe/Mem/Imem.h | 38 +-
MdeModulePkg/Core/Dxe/Mem/Page.c | 129 +++-
MdeModulePkg/Core/Dxe/Mem/Pool.c | 154 ++++-
6 files changed, 1823 insertions(+), 64 deletions(-)
create mode 100644 MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
create mode 100644 MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
index e29d6c83ae..6b27714a79 100644
--- a/MdeModulePkg/Core/Dxe/DxeMain.inf
+++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
@@ -56,6 +56,7 @@
Mem/MemData.c
Mem/Imem.h
Mem/MemoryProfileRecord.c
+ Mem/HeapGuard.c
FwVolBlock/FwVolBlock.c
FwVolBlock/FwVolBlock.h
FwVol/FwVolWrite.c
@@ -192,6 +193,9 @@
gEfiMdeModulePkgTokenSpaceGuid.PcdPropertiesTableEnable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdImageProtectionPolicy ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdDxeNxMemoryProtectionPolicy ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES
# [Hob]
# RESOURCE_DESCRIPTOR ## CONSUMES
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
new file mode 100644
index 0000000000..36e41d9a87
--- /dev/null
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -0,0 +1,1171 @@
+/** @file
+ UEFI Heap Guard functions.
+
+Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+This program and the accompanying materials
+are licensed and made available under the terms and conditions of the BSD License
+which accompanies this distribution. The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.php
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#include "DxeMain.h"
+#include "Imem.h"
+#include "HeapGuard.h"
+
+//
+// Global to avoid infinite reentrance of memory allocation when updating
+// page table attributes, which may need to allocate pages for new PDE/PTE.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED BOOLEAN mOnGuarding = FALSE;
+
+//
+// Pointer to table tracking the Guarded memory with a bitmap, in which '1'
+// indicates the memory is guarded. '0' might be free memory or a Guard
+// page itself, depending on the status of the memory adjacent to it.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED UINT64 *mGuardedMemoryMap = NULL;
+
+//
+// Current depth level of the map table pointed to by mGuardedMemoryMap.
+// mMapLevel must be initialized to at least 1. It will be automatically
+// updated according to the address of the memory just tracked.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED UINTN mMapLevel = 1;
+
+/**
+ Set corresponding bits in bitmap table to 1 according to the address
+
+ @param[in] Address Start address to set for
+ @param[in] BitNumber Number of bits to set
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return VOID
+**/
+STATIC
+VOID
+SetBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN Lsbs;
+ UINTN Qwords;
+ UINTN Msbs;
+ UINTN StartBit;
+ UINTN EndBit;
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
+ GUARDED_HEAP_MAP_ENTRY_BITS;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ Qwords = 0;
+ }
+
+ if (Msbs > 0) {
+ *BitMap |= LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
+ BitMap += 1;
+ }
+
+ if (Qwords > 0) {
+ SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES,
+ (UINT64)-1);
+ BitMap += Qwords;
+ }
+
+ if (Lsbs > 0) {
+ *BitMap |= (LShiftU64 (1, Lsbs) - 1);
+ }
+}
+
+/**
+ Set corresponding bits in bitmap table to 0 according to the address
+
+ @param[in] Address Start address to set for
+ @param[in] BitNumber Number of bits to set
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return VOID
+**/
+STATIC
+VOID
+ClearBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN Lsbs;
+ UINTN Qwords;
+ UINTN Msbs;
+ UINTN StartBit;
+ UINTN EndBit;
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
+ GUARDED_HEAP_MAP_ENTRY_BITS;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ Qwords = 0;
+ }
+
+ if (Msbs > 0) {
+ *BitMap &= ~LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
+ BitMap += 1;
+ }
+
+ if (Qwords > 0) {
+ SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES, 0);
+ BitMap += Qwords;
+ }
+
+ if (Lsbs > 0) {
+ *BitMap &= ~(LShiftU64 (1, Lsbs) - 1);
+ }
+}
+
+/**
+ Get corresponding bits in bitmap table according to the address
+
+ The value of bit 0 corresponds to the status of memory at given Address.
+ No more than 64 bits can be retrieved in one call.
+
+ @param[in] Address Start address to retrieve bits for
+ @param[in] BitNumber Number of bits to get
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return An integer containing the bits information
+**/
+STATIC
+UINT64
+GetBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN StartBit;
+ UINTN EndBit;
+ UINTN Lsbs;
+ UINTN Msbs;
+ UINT64 Result;
+
+ ASSERT (BitNumber <= GUARDED_HEAP_MAP_ENTRY_BITS);
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = GUARDED_HEAP_MAP_ENTRY_BITS - StartBit;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ }
+
+ Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
+ if (Lsbs > 0) {
+ BitMap += 1;
+ Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+ }
+
+ return Result;
+}
+
+/**
+ Locate the pointer of bitmap from the guarded memory bitmap tables, which
+ covers the given Address.
+
+ @param[in] Address Start address to search the bitmap for
+ @param[in] AllocMapUnit Flag to indicate memory allocation for the table
+ @param[out] BitMap Pointer to bitmap which covers the Address
+
+ @return The bit number from given Address to the end of current map table
+**/
+UINTN
+FindGuardedMemoryMap (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN BOOLEAN AllocMapUnit,
+ OUT UINT64 **BitMap
+ )
+{
+ UINTN Level;
+ UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+ UINT64 **GuardMap;
+ UINT64 *MapMemory;
+ UINTN Index;
+ UINTN Size;
+ UINTN BitsToUnitEnd;
+ EFI_STATUS Status;
+
+ //
+ // Adjust current map table depth according to the address to access
+ //
+ while (mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH
+ &&
+ RShiftU64 (
+ Address,
+ LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1]
+ ) != 0) {
+
+ if (mGuardedMemoryMap != NULL) {
+ Size = (LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1] + 1)
+ * GUARDED_HEAP_MAP_ENTRY_BYTES;
+ Status = CoreInternalAllocatePages (
+ AllocateAnyPages,
+ EfiBootServicesData,
+ EFI_SIZE_TO_PAGES (Size),
+ (EFI_PHYSICAL_ADDRESS *)&MapMemory,
+ FALSE
+ );
+ ASSERT_EFI_ERROR (Status);
+ ASSERT (MapMemory != NULL);
+
+ SetMem ((VOID *)MapMemory, Size, 0);
+
+ *(UINT64 **)MapMemory = mGuardedMemoryMap;
+ mGuardedMemoryMap = MapMemory;
+ }
+
+ mMapLevel++;
+
+ }
+
+ GuardMap = &mGuardedMemoryMap;
+ for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+ ++Level) {
+
+ if (*GuardMap == NULL) {
+ if (!AllocMapUnit) {
+ GuardMap = NULL;
+ break;
+ }
+
+ Size = (LevelMask[Level] + 1) * GUARDED_HEAP_MAP_ENTRY_BYTES;
+ Status = CoreInternalAllocatePages (
+ AllocateAnyPages,
+ EfiBootServicesData,
+ EFI_SIZE_TO_PAGES (Size),
+ (EFI_PHYSICAL_ADDRESS *)&MapMemory,
+ FALSE
+ );
+ ASSERT_EFI_ERROR (Status);
+ ASSERT (MapMemory != NULL);
+
+ SetMem ((VOID *)MapMemory, Size, 0);
+ *GuardMap = (UINT64 *)MapMemory;
+ }
+
+ Index = (UINTN)RShiftU64 (Address, LevelShift[Level]);
+ Index &= LevelMask[Level];
+ GuardMap = (UINT64 **)((*GuardMap) + Index);
+
+ }
+
+ BitsToUnitEnd = GUARDED_HEAP_MAP_BITS - GUARDED_HEAP_MAP_BIT_INDEX (Address);
+ *BitMap = (UINT64 *)GuardMap;
+
+ return BitsToUnitEnd;
+}
+
+/**
+ Set corresponding bits in bitmap table to 1 according to given memory range
+
+ @param[in] Address Memory address to guard from
+ @param[in] NumberOfPages Number of pages to guard
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN BitsToUnitEnd;
+
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
+ ASSERT (BitMap != NULL);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ SetBits (Address, Bits, BitMap);
+
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+}
+
+/**
+ Clear corresponding bits in bitmap table according to given memory range
+
+ @param[in] Address Memory address to unset from
+ @param[in] NumberOfPages Number of pages to unset guard
+
+ @return VOID
+**/
+VOID
+EFIAPI
+ClearGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN BitsToUnitEnd;
+
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
+ ASSERT (BitMap != NULL);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ ClearBits (Address, Bits, BitMap);
+
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+}
+
+/**
+ Retrieve corresponding bits in bitmap table according to given memory range
+
+ @param[in] Address Memory address to retrieve from
+ @param[in] NumberOfPages Number of pages to retrieve
+
+ @return An integer containing the guard bit values of the given pages
+**/
+UINTN
+GetGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN Result;
+ UINTN Shift;
+ UINTN BitsToUnitEnd;
+
+ ASSERT (NumberOfPages <= GUARDED_HEAP_MAP_ENTRY_BITS);
+
+ Result = 0;
+ Shift = 0;
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, FALSE, &BitMap);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ if (BitMap != NULL) {
+ Result |= LShiftU64 (GetBits (Address, Bits, BitMap), Shift);
+ }
+
+ Shift += Bits;
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+
+ return Result;
+}
+
+/**
+ Get bit value in bitmap table for the given address
+
+ @param[in] Address The address to retrieve for
+
+ @return 1 or 0
+**/
+UINTN
+EFIAPI
+GetGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+
+ FindGuardedMemoryMap (Address, FALSE, &GuardMap);
+ if (GuardMap != NULL) {
+ if (RShiftU64 (*GuardMap,
+ GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address)) & 1) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ Set the bit in bitmap table for the given address
+
+ @param[in] Address The address to set for
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+ UINT64 BitMask;
+
+ FindGuardedMemoryMap (Address, TRUE, &GuardMap);
+ if (GuardMap != NULL) {
+ BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
+ *GuardMap |= BitMask;
+ }
+}
+
+/**
+ Clear the bit in bitmap table for the given address
+
+ @param[in] Address The address to clear for
+
+ @return VOID
+**/
+VOID
+EFIAPI
+ClearGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+ UINTN BitMask;
+
+ FindGuardedMemoryMap (Address, TRUE, &GuardMap);
+ if (GuardMap != NULL) {
+ BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
+ *GuardMap &= ~BitMask;
+ }
+}
+
+/**
+ Check to see if the page at the given address is a Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a Guard page
+ @return FALSE The page at Address is not a Guard page
+**/
+BOOLEAN
+EFIAPI
+IsGuardPage (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINTN BitMap;
+
+ BitMap = GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 3);
+ return ((BitMap == BIT0) || (BitMap == BIT2) || (BitMap == (BIT2 | BIT0)));
+}
+
+/**
+ Check to see if the page at the given address is a head Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a head Guard page
+ @return FALSE The page at Address is not a head Guard page
+**/
+BOOLEAN
+EFIAPI
+IsHeadGuard (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardedMemoryBits (Address, 2) == BIT1);
+}
+
+/**
+ Check to see if the page at the given address is a tail Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a tail Guard page
+ @return FALSE The page at Address is not a tail Guard page
+**/
+BOOLEAN
+EFIAPI
+IsTailGuard (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 2) == BIT0);
+}
+
+/**
+ Check to see if the page at the given address is guarded or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is guarded
+ @return FALSE The page at Address is not guarded
+**/
+BOOLEAN
+EFIAPI
+IsMemoryGuarded (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardMapBit (Address) == 1);
+}
+
+/**
+ Set the page at the given address to be a Guard page.
+
+ This is done by changing the page table attribute to be NOT PRESENT.
+
+ @param[in] BaseAddress Page address to Guard at
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardPage (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress
+ )
+{
+ mOnGuarding = TRUE;
+ gCpu->SetMemoryAttributes (gCpu, BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
+ mOnGuarding = FALSE;
+}
+
+/**
+ Unset the Guard page at the given address back to normal memory.
+
+ This is done by changing the page table attribute to be PRESENT.
+
+ @param[in] BaseAddress Page address to unset Guard at
+
+ @return VOID
+**/
+VOID
+EFIAPI
+UnsetGuardPage (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress
+ )
+{
+ mOnGuarding = TRUE;
+ gCpu->SetMemoryAttributes (gCpu, BaseAddress, EFI_PAGE_SIZE, 0);
+ mOnGuarding = FALSE;
+}
+
+/**
+ Check to see if the memory at the given address should be guarded or not
+
+ @param[in] MemoryType Memory type to check
+ @param[in] AllocateType Allocation type to check
+ @param[in] PageOrPool Indicate a page allocation or pool allocation
+
+
+ @return TRUE The given type of memory should be guarded
+ @return FALSE The given type of memory should not be guarded
+**/
+BOOLEAN
+IsMemoryTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType,
+ IN UINT8 PageOrPool
+ )
+{
+ UINT64 TestBit;
+ UINT64 ConfigBit;
+ BOOLEAN InSmm;
+
+ if (gCpu == NULL || AllocateType == AllocateAddress) {
+ return FALSE;
+ }
+
+ InSmm = FALSE;
+ if (gSmmBase2 != NULL) {
+ gSmmBase2->InSmm (gSmmBase2, &InSmm);
+ }
+
+ if (InSmm) {
+ return FALSE;
+ }
+
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & PageOrPool) == 0) {
+ return FALSE;
+ }
+
+ if (PageOrPool == GUARD_HEAP_TYPE_POOL) {
+ ConfigBit = PcdGet64 (PcdHeapGuardPoolType);
+ } else if (PageOrPool == GUARD_HEAP_TYPE_PAGE) {
+ ConfigBit = PcdGet64 (PcdHeapGuardPageType);
+ } else {
+ ConfigBit = (UINT64)-1;
+ }
+
+ if ((UINT32)MemoryType >= MEMORY_TYPE_OS_RESERVED_MIN) {
+ TestBit = BIT63;
+ } else if ((UINT32) MemoryType >= MEMORY_TYPE_OEM_RESERVED_MIN) {
+ TestBit = BIT62;
+ } else if (MemoryType < EfiMaxMemoryType) {
+ TestBit = LShiftU64 (1, MemoryType);
+ } else if (MemoryType == EfiMaxMemoryType) {
+ TestBit = (UINT64)-1;
+ } else {
+ TestBit = 0;
+ }
+
+ return ((ConfigBit & TestBit) != 0);
+}
+
+/**
+ Check to see if the pool at the given address should be guarded or not
+
+ @param[in] MemoryType Pool type to check
+
+
+ @return TRUE The given type of pool should be guarded
+ @return FALSE The given type of pool should not be guarded
+**/
+BOOLEAN
+IsPoolTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType
+ )
+{
+ return IsMemoryTypeToGuard (MemoryType, AllocateAnyPages,
+ GUARD_HEAP_TYPE_POOL);
+}
+
+/**
+ Check to see if the page at the given address should be guarded or not
+
+ @param[in] MemoryType Page type to check
+ @param[in] AllocateType Allocation type to check
+
+ @return TRUE The given type of page should be guarded
+ @return FALSE The given type of page should not be guarded
+**/
+BOOLEAN
+IsPageTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType
+ )
+{
+ return IsMemoryTypeToGuard (MemoryType, AllocateType, GUARD_HEAP_TYPE_PAGE);
+}
+
+/**
+ Set head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to set guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+SetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS GuardPage;
+
+ //
+ // Set tail Guard
+ //
+ GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
+ if (!IsGuardPage (GuardPage)) {
+ SetGuardPage (GuardPage);
+ }
+
+ // Set head Guard
+ GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
+ if (!IsGuardPage (GuardPage)) {
+ SetGuardPage (GuardPage);
+ }
+
+ //
+ // Mark the memory range as Guarded
+ //
+ SetGuardedMemoryBits (Memory, NumberOfPages);
+}
+
+/**
+ Unset head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to unset guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+UnsetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS GuardPage;
+
+ if (NumberOfPages == 0) {
+ return;
+ }
+
+ //
+ // Head Guard must be one page before, if any.
+ //
+ GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
+ if (IsHeadGuard (GuardPage)) {
+ if (!IsMemoryGuarded (GuardPage - EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the head Guard is not a tail Guard of adjacent memory block,
+ // unset it.
+ //
+ UnsetGuardPage (GuardPage);
+ }
+ } else if (IsMemoryGuarded (GuardPage)) {
+ //
+ // Pages before memory to free are still in Guard. It's a partial free
+ // case. Turn first page of memory block to free into a new Guard.
+ //
+ SetGuardPage (Memory);
+ }
+
+ //
+ // Tail Guard must be the page after this memory block to free, if any.
+ //
+ GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
+ if (IsTailGuard (GuardPage)) {
+ if (!IsMemoryGuarded (GuardPage + EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the tail Guard is not a head Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ UnsetGuardPage (GuardPage);
+ }
+ } else if (IsMemoryGuarded (GuardPage)) {
+ //
+ // Pages after memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a head Guard.
+ //
+ SetGuardPage (GuardPage - EFI_PAGES_TO_SIZE (1));
+ }
+
+ //
+ // No matter what, we just clear the mark of the Guarded memory.
+ //
+ ClearGuardedMemoryBits(Memory, NumberOfPages);
+}
+
+/**
+ Adjust address of free memory according to existing and/or required Guard
+
+ This function checks whether there are existing Guard pages of adjacent
+ memory blocks, and tries to reuse them as the Guard pages of the memory to
+ be allocated.
+
+ @param[in] Start Start address of free memory block
+ @param[in] Size Size of free memory block
+ @param[in] SizeRequested Size of memory to allocate
+
+ @return The end address of memory block found
+ @return 0 if not enough space for the required size of memory and its Guard
+**/
+UINT64
+AdjustMemoryS (
+ IN UINT64 Start,
+ IN UINT64 Size,
+ IN UINT64 SizeRequested
+ )
+{
+ UINT64 Target;
+
+ Target = Start + Size - SizeRequested;
+
+ //
+ // At least one more page needed for Guard page.
+ //
+ if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
+ return 0;
+ }
+
+ if (!IsGuardPage (Start + Size)) {
+ // No Guard at tail to share. One more page is needed.
+ Target -= EFI_PAGES_TO_SIZE (1);
+ }
+
+ // Out of range?
+ if (Target < Start) {
+ return 0;
+ }
+
+ // At the edge?
+ if (Target == Start) {
+ if (!IsGuardPage (Target - EFI_PAGES_TO_SIZE (1))) {
+ // Not enough space for a new head Guard if there is no Guard at head to share.
+ return 0;
+ }
+ }
+
+ // OK, we have enough pages for memory and its Guards. Return the End of the
+ // free space.
+ return Target + SizeRequested - 1;
+}
+
+/**
+ Adjust the start address and number of pages to free according to Guard
+
+ The purpose of this function is to keep the shared Guard page with adjacent
+ memory block if it's still guarded, or free it if no longer shared. Another
+ purpose is to reserve pages as Guard pages in a partial page free situation.
+
+ @param[in/out] Memory Base address of memory to free
+ @param[in/out] NumberOfPages Size of memory to free
+
+ @return VOID
+**/
+VOID
+AdjustMemoryF (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS Start;
+ EFI_PHYSICAL_ADDRESS MemoryToTest;
+ UINTN PagesToFree;
+
+ if (Memory == NULL || NumberOfPages == NULL || *NumberOfPages == 0) {
+ return;
+ }
+
+ Start = *Memory;
+ PagesToFree = *NumberOfPages;
+
+ //
+ // Head Guard must be one page before, if any.
+ //
+ MemoryToTest = Start - EFI_PAGES_TO_SIZE (1);
+ if (IsHeadGuard (MemoryToTest)) {
+ if (!IsMemoryGuarded (MemoryToTest - EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the head Guard is not a tail Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ Start -= EFI_PAGES_TO_SIZE (1);
+ PagesToFree += 1;
+ }
+ } else if (IsMemoryGuarded (MemoryToTest)) {
+ //
+ // Pages before memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a tail Guard.
+ //
+ Start += EFI_PAGES_TO_SIZE (1);
+ PagesToFree -= 1;
+ }
+
+ //
+ // Tail Guard must be the page after this memory block to free, if any.
+ //
+ MemoryToTest = Start + EFI_PAGES_TO_SIZE (PagesToFree);
+ if (IsTailGuard (MemoryToTest)) {
+ if (!IsMemoryGuarded (MemoryToTest + EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the tail Guard is not a head Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ PagesToFree += 1;
+ }
+ } else if (IsMemoryGuarded (MemoryToTest)) {
+ //
+ // Pages after memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a head Guard.
+ //
+ PagesToFree -= 1;
+ }
+
+ *Memory = Start;
+ *NumberOfPages = PagesToFree;
+}
+
+/**
+ Adjust the base and number of pages to really allocate according to Guard
+
+ @param[in/out] Memory Base address of free memory
+ @param[in/out] NumberOfPages Size of memory to allocate
+
+ @return VOID
+**/
+VOID
+AdjustMemoryA (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ )
+{
+ //
+ // FindFreePages() has already taken the Guard into account. It's safe to
+ // adjust the start address and/or number of pages here, to make sure that
+ // the Guards are also "allocated".
+ //
+ if (!IsGuardPage (*Memory + EFI_PAGES_TO_SIZE (*NumberOfPages))) {
+ // No tail Guard, add one.
+ *NumberOfPages += 1;
+ }
+
+ if (!IsGuardPage (*Memory - EFI_PAGE_SIZE)) {
+ // No head Guard, add one.
+ *Memory -= EFI_PAGE_SIZE;
+ *NumberOfPages += 1;
+ }
+}
+
+/**
+ Adjust the pool head position to make sure the Guard page is adjacent to
+ the pool tail or pool head.
+
+ @param[in] Memory Base address of memory allocated
+ @param[in] NoPages Number of pages actually allocated
+ @param[in] Size Size of memory requested
+ (plus pool head/tail overhead)
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadA (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NoPages,
+ IN UINTN Size
+ )
+{
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
+ //
+ // Pool head is put near the head Guard
+ //
+ return (VOID *)(UINTN)Memory;
+ }
+
+ //
+ // Pool head is put near the tail Guard
+ //
+ return (VOID *)(UINTN)(Memory + EFI_PAGES_TO_SIZE (NoPages) - Size);
+}
+
+/**
+ Get the page base address according to pool head address
+
+ @param[in] Memory Head address of pool to free
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadF (
+ IN EFI_PHYSICAL_ADDRESS Memory
+ )
+{
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
+ //
+ // Pool head is put near the head Guard
+ //
+ return (VOID *)(UINTN)Memory;
+ }
+
+ //
+ // Pool head is put near the tail Guard
+ //
+ return (VOID *)(UINTN)(Memory & ~EFI_PAGE_MASK);
+}
+
+/**
+ Allocate or free guarded memory
+
+ @param[in] Start Start address of memory to allocate or free
+ @param[in] NumberOfPages Memory size in pages
+ @param[in] NewType Memory type to convert to
+
+ @return Status returned by CoreConvertPages()
+**/
+EFI_STATUS
+CoreConvertPagesWithGuard (
+ IN UINT64 Start,
+ IN UINT64 NumberOfPages,
+ IN EFI_MEMORY_TYPE NewType
+ )
+{
+ if (NewType == EfiConventionalMemory) {
+ AdjustMemoryF (&Start, &NumberOfPages);
+ } else {
+ AdjustMemoryA (&Start, &NumberOfPages);
+ }
+
+ return CoreConvertPages(Start, NumberOfPages, NewType);
+}
+
+/**
+ Helper function to convert a UINT64 value in binary to a string
+
+ @param[in] Value Value of a UINT64 integer
+ @param[in] BinString String buffer to contain the conversion result
+
+ @return VOID
+**/
+VOID
+Uint64ToBinString (
+ IN UINT64 Value,
+ OUT CHAR8 *BinString
+ )
+{
+ UINTN Index;
+
+ if (BinString == NULL) {
+ return;
+ }
+
+ for (Index = 64; Index > 0; --Index) {
+ BinString[Index - 1] = '0' + (Value & 1);
+ Value = RShiftU64 (Value, 1);
+ }
+ BinString[64] = '\0';
+}
+
+/**
+ Dump the guarded memory bit map
+
+ @return VOID
+**/
+VOID
+EFIAPI
+DumpGuardedMemoryBitmap (
+ VOID
+ )
+{
+ UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+ UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 TableEntry;
+ UINT64 Address;
+ INTN Level;
+ UINTN RepeatZero;
+ CHAR8 String[GUARDED_HEAP_MAP_ENTRY_BITS + 1];
+ CHAR8 *Ruler1 = " 3 2"
+ " 1 0";
+ CHAR8 *Ruler2 = "FEDCBA9876543210FEDCBA9876543210"
+ "FEDCBA9876543210FEDCBA9876543210";
+
+ if (mGuardedMemoryMap == NULL) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO, "============================="
+ " Guarded Memory Bitmap "
+ "==============================\r\n"));
+ DEBUG ((DEBUG_INFO, " %a\r\n", Ruler1));
+ DEBUG ((DEBUG_INFO, " %a\r\n", Ruler2));
+
+
+ SetMem64 (Tables, sizeof(Tables), 0);
+ SetMem64 (Addresses, sizeof(Addresses), 0);
+ SetMem64 (Indices, sizeof(Indices), 0);
+
+ Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Tables[Level] = mGuardedMemoryMap;
+ Address = 0;
+ RepeatZero = 0;
+
+ while (TRUE) {
+ if (Indices[Level] > Entries[Level]) {
+
+ Tables[Level] = 0;
+ Level -= 1;
+ RepeatZero = 0;
+
+ DEBUG ((
+ DEBUG_INFO,
+ "========================================="
+ "=========================================\r\n"
+ ));
+
+ } else {
+
+ TableEntry = Tables[Level][Indices[Level]];
+ Address = Addresses[Level];
+
+ if (TableEntry == 0) {
+
+ if (Level == GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+ if (RepeatZero == 0) {
+ Uint64ToBinString(TableEntry, String);
+ DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
+ } else if (RepeatZero == 1) {
+ DEBUG ((DEBUG_INFO, "... : ...\r\n"));
+ }
+ RepeatZero += 1;
+ }
+
+ } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+
+ Level += 1;
+ Tables[Level] = (UINT64 *)TableEntry;
+ Addresses[Level] = Address;
+ Indices[Level] = 0;
+ RepeatZero = 0;
+
+ continue;
+
+ } else {
+
+ RepeatZero = 0;
+ Uint64ToBinString(TableEntry, String);
+ DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
+
+ }
+ }
+
+ if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+ break;
+ }
+
+ Indices[Level] += 1;
+ Address = (Level == 0) ? 0 : Addresses[Level - 1];
+ Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+ }
+}
+
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
new file mode 100644
index 0000000000..26712ca93f
--- /dev/null
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
@@ -0,0 +1,391 @@
+/** @file
+ Data type, macros and function prototypes of heap guard feature.
+
+Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+This program and the accompanying materials
+are licensed and made available under the terms and conditions of the BSD License
+which accompanies this distribution. The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.php
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#ifndef _HEAPGUARD_H_
+#define _HEAPGUARD_H_
+
+//
+// Following macros are used to define and access the guarded memory bitmap
+// table.
+//
+// To simplify the access and reduce the memory used for this table, the
+// table is constructed in a similar way to the page table structure but in
+// reverse direction, i.e. from bottom growing up to top.
+//
+// - 1-bit tracks 1 page (4KB)
+// - 1-UINT64 map entry tracks 256KB memory
+// - 1K-UINT64 map table tracks 256MB memory
+// - Five levels of tables can track any address of memory of 64-bit
+// system, like below.
+//
+// 512 * 512 * 512 * 512 * 1K * 64b * 4K
+// 111111111 111111111 111111111 111111111 1111111111 111111 111111111111
+// 63 54 45 36 27 17 11 0
+// 9b 9b 9b 9b 10b 6b 12b
+// L0 -> L1 -> L2 -> L3 -> L4 -> bits -> page
+// 1FF 1FF 1FF 1FF 3FF 3F FFF
+//
+// The L4 table has 1K * sizeof(UINT64) = 8K (2 pages), which can track 256MB
+// of memory. Each table of L0-L3 will be allocated when its memory address
+// range is to be tracked. Only 1 page will be allocated each time. This
+// saves the memory used to establish this map table.
+//
+// For a normal configuration of a system with 4GB memory, two levels of
+// tables can track the whole memory, because two levels (L3+L4) of map
+// tables already cover a 37-bit memory address range. And for a normal
+// UEFI BIOS, less than 128MB of memory would be consumed during boot.
+// That means we just need
+//
+// 1 page (L3) + 2 pages (L4)
+//
+// of memory (3 pages) to track the memory allocations. In this case,
+// there's no need to set up the L0-L2 tables.
+//
+
+//
+// Each entry occupies 8B/64b. 1-page can hold 512 entries, which spans 9
+// bits in address. (512 = 1 << 9)
+//
+#define BYTE_LENGTH_SHIFT 3 // (8 = 1 << 3)
+
+#define GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT \
+ (EFI_PAGE_SHIFT - BYTE_LENGTH_SHIFT)
+
+#define GUARDED_HEAP_MAP_TABLE_DEPTH 5
+
+// Use UINT64_index + bit_index_of_UINT64 to locate the bit in the map
+#define GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT 6 // (64 = 1 << 6)
+
+#define GUARDED_HEAP_MAP_ENTRY_BITS \
+ (1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)
+
+#define GUARDED_HEAP_MAP_ENTRY_BYTES \
+ (GUARDED_HEAP_MAP_ENTRY_BITS / 8)
+
+// L4 table address width: 64 - 9 * 4 - 6 - 12 = 10b
+#define GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ (GUARDED_HEAP_MAP_ENTRY_BITS \
+ - GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 4 \
+ - GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ - EFI_PAGE_SHIFT)
+
+// L4 table address mask: (1 << 10 - 1) = 0x3FF
+#define GUARDED_HEAP_MAP_ENTRY_MASK \
+ ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1)
+
+// Size of each L4 table: (1 << 10) * 8 = 8KB = 2-page
+#define GUARDED_HEAP_MAP_SIZE \
+ ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) * GUARDED_HEAP_MAP_ENTRY_BYTES)
+
+// Memory size tracked by one L4 table: 8KB * 8 * 4KB = 256MB
+#define GUARDED_HEAP_MAP_UNIT_SIZE \
+ (GUARDED_HEAP_MAP_SIZE * 8 * EFI_PAGE_SIZE)
+
+// L4 table entry number: 8KB / 8 = 1024
+#define GUARDED_HEAP_MAP_ENTRIES_PER_UNIT \
+ (GUARDED_HEAP_MAP_SIZE / GUARDED_HEAP_MAP_ENTRY_BYTES)
+
+// L4 table entry indexing
+#define GUARDED_HEAP_MAP_ENTRY_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) \
+ & GUARDED_HEAP_MAP_ENTRY_MASK)
+
+// L4 table entry bit indexing
+#define GUARDED_HEAP_MAP_ENTRY_BIT_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT) \
+ & ((1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) - 1))
+
+//
+// Total bits (pages) tracked by one L4 table (65536-bit)
+//
+#define GUARDED_HEAP_MAP_BITS \
+ (1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT))
+
+//
+// Bit indexing inside the whole L4 table (0 - 65535)
+//
+#define GUARDED_HEAP_MAP_BIT_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT) \
+ & ((1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)) - 1))
+
+//
+// Memory address bit width tracked by L4 table: 10 + 6 + 12 = 28
+//
+#define GUARDED_HEAP_MAP_TABLE_SHIFT \
+ (GUARDED_HEAP_MAP_ENTRY_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ + EFI_PAGE_SHIFT)
+
+//
+// Macro used to initialize the local array variable for map table traversing
+// {55, 46, 37, 28, 18}
+//
+#define GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS \
+ { \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 3, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 2, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT, \
+ EFI_PAGE_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ }
+
+//
+// Masks used to extract address range of each level of table
+// {0x1FF, 0x1FF, 0x1FF, 0x1FF, 0x3FF}
+//
+#define GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS \
+ { \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1 \
+ }
+
+//
+// Memory type to guard (matching the related PCD definition)
+//
+#define GUARD_HEAP_TYPE_POOL BIT0
+#define GUARD_HEAP_TYPE_PAGE BIT1
+
+typedef struct {
+ UINT32 TailMark;
+ UINT32 HeadMark;
+ EFI_PHYSICAL_ADDRESS Address;
+ LIST_ENTRY Link;
+} HEAP_GUARD_NODE;
+
+EFI_STATUS
+CoreConvertPages (
+ IN UINT64 Start,
+ IN UINT64 NumberOfPages,
+ IN EFI_MEMORY_TYPE NewType
+ );
+
+/**
+ Allocate or free guarded memory
+
+ @param[in] Start Start address of memory to allocate or free
+ @param[in] NumberOfPages Memory size in pages
+ @param[in] NewType Memory type to convert to
+
+ @return Status returned by CoreConvertPages()
+**/
+EFI_STATUS
+CoreConvertPagesWithGuard (
+ IN UINT64 Start,
+ IN UINT64 NumberOfPages,
+ IN EFI_MEMORY_TYPE NewType
+ );
+
+/**
+ Set head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to set guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+SetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ );
+
+/**
+ Unset head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to unset guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+UnsetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ );
+
+/**
+ Adjust the base and number of pages to really allocate according to Guard
+
+ @param[in/out] Memory Base address of free memory
+ @param[in/out] NumberOfPages Size of memory to allocate
+
+ @return VOID
+**/
+VOID
+AdjustMemoryA (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ );
+
+/**
+ Adjust the start address and number of pages to free according to Guard
+
+ The purpose of this function is to keep the shared Guard page with adjacent
+ memory block if it's still guarded, or free it if no longer shared. Another
+ purpose is to reserve pages as Guard pages in a partial page free situation.
+
+ @param[in/out] Memory Base address of memory to free
+ @param[in/out] NumberOfPages Size of memory to free
+
+ @return VOID
+**/
+VOID
+AdjustMemoryF (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ );
+
+/**
+ Adjust address of free memory according to existing and/or required Guard
+
+ This function checks whether there are existing Guard pages of adjacent
+ memory blocks, and tries to reuse them as the Guard pages of the memory to
+ be allocated.
+
+ @param[in] Start Start address of free memory block
+ @param[in] Size Size of free memory block
+ @param[in] SizeRequested Size of memory to allocate
+
+ @return The end address of memory block found
+ @return 0 if not enough space for the required size of memory and its Guard
+**/
+UINT64
+AdjustMemoryS (
+ IN UINT64 Start,
+ IN UINT64 Size,
+ IN UINT64 SizeRequested
+ );
+
+/**
+ Allocate or free guarded memory
+
+ @param[in] Start Start address of memory to allocate or free
+ @param[in] NumberOfPages Memory size in pages
+ @param[in] NewType Memory type to convert to
+
+ @return Status returned by CoreConvertPages()
+**/
+EFI_STATUS
+CoreConvertPagesWithGuard (
+ IN UINT64 Start,
+ IN UINT64 NumberOfPages,
+ IN EFI_MEMORY_TYPE NewType
+ );
+
+/**
+ Check to see if the pool at the given address should be guarded or not
+
+ @param[in] MemoryType Pool type to check
+
+
+ @return TRUE The given type of pool should be guarded
+ @return FALSE The given type of pool should not be guarded
+**/
+BOOLEAN
+IsPoolTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType
+ );
+
+/**
+ Check to see if the page at the given address should be guarded or not
+
+ @param[in] MemoryType Page type to check
+ @param[in] AllocateType Allocation type to check
+
+ @return TRUE The given type of page should be guarded
+ @return FALSE The given type of page should not be guarded
+**/
+BOOLEAN
+IsPageTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType
+ );
+
+/**
+ Check to see if the page at the given address is guarded or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is guarded
+ @return FALSE The page at Address is not guarded
+**/
+BOOLEAN
+EFIAPI
+IsMemoryGuarded (
+ IN EFI_PHYSICAL_ADDRESS Address
+ );
+
+/**
+ Check to see if the page at the given address is a Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a Guard page
+ @return FALSE The page at Address is not a Guard page
+**/
+BOOLEAN
+EFIAPI
+IsGuardPage (
+ IN EFI_PHYSICAL_ADDRESS Address
+ );
+
+/**
+ Dump the guarded memory bit map
+
+ @return VOID
+**/
+VOID
+EFIAPI
+DumpGuardedMemoryBitmap (
+ VOID
+ );
+
+/**
+ Adjust the pool head position to make sure the Guard page is adjacent to
+ the pool tail or pool head.
+
+ @param[in] Memory Base address of memory allocated
+ @param[in] NoPages Number of pages actually allocated
+ @param[in] Size Size of memory requested
+ (plus pool head/tail overhead)
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadA (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NoPages,
+ IN UINTN Size
+ );
+
+/**
+ Get the page base address according to pool head address
+
+ @param[in] Memory Head address of pool to free
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadF (
+ IN EFI_PHYSICAL_ADDRESS Memory
+ );
+
+extern BOOLEAN mOnGuarding;
+
+#endif
diff --git a/MdeModulePkg/Core/Dxe/Mem/Imem.h b/MdeModulePkg/Core/Dxe/Mem/Imem.h
index fb53f95575..e58a5d62ba 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Imem.h
+++ b/MdeModulePkg/Core/Dxe/Mem/Imem.h
@@ -1,7 +1,7 @@
/** @file
Data structure and functions to allocate and free memory space.
-Copyright (c) 2006 - 2016, Intel Corporation. All rights reserved.<BR>
+Copyright (c) 2006 - 2017, Intel Corporation. All rights reserved.<BR>
This program and the accompanying materials
are licensed and made available under the terms and conditions of the BSD License
which accompanies this distribution. The full text of the license may be found at
@@ -61,6 +61,7 @@ typedef struct {
@param PoolType The type of memory for the new pool pages
@param NumberOfPages No of pages to allocate
@param Alignment Bits to align.
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The allocated memory, or NULL
@@ -69,7 +70,8 @@ VOID *
CoreAllocatePoolPages (
IN EFI_MEMORY_TYPE PoolType,
IN UINTN NumberOfPages,
- IN UINTN Alignment
+ IN UINTN Alignment,
+ IN BOOLEAN NeedGuard
);
@@ -95,6 +97,7 @@ CoreFreePoolPages (
@param PoolType Type of pool to allocate
@param Size The amount of pool to allocate
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The allocate pool, or NULL
@@ -102,7 +105,8 @@ CoreFreePoolPages (
VOID *
CoreAllocatePoolI (
IN EFI_MEMORY_TYPE PoolType,
- IN UINTN Size
+ IN UINTN Size,
+ IN BOOLEAN NeedGuard
);
@@ -145,6 +149,34 @@ CoreReleaseMemoryLock (
VOID
);
+/**
+ Allocates pages from the memory map.
+
+ @param Type The type of allocation to perform
+ @param MemoryType The type of memory to turn the allocated pages
+ into
+ @param NumberOfPages The number of pages to allocate
+ @param Memory A pointer to receive the base allocated memory
+ address
+ @param NeedGuard Flag to indicate Guard page is needed or not
+
+ @return Status. On success, Memory is filled in with the base address allocated
+ @retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in
+ spec.
+ @retval EFI_NOT_FOUND Could not find pages matching the requirement.
+ @retval EFI_OUT_OF_RESOURCES Not enough pages to allocate.
+ @retval EFI_SUCCESS Pages successfully allocated.
+
+**/
+EFI_STATUS
+EFIAPI
+CoreInternalAllocatePages (
+ IN EFI_ALLOCATE_TYPE Type,
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN UINTN NumberOfPages,
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN BOOLEAN NeedGuard
+ );
//
// Internal Global data
diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
index 3dd6d1b4a0..648b21d429 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Page.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
@@ -14,6 +14,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#include "DxeMain.h"
#include "Imem.h"
+#include "HeapGuard.h"
//
// Entry for tracking the memory regions for each memory type to coalesce similar memory types
@@ -285,9 +286,12 @@ AllocateMemoryMapEntry (
//
// The list is empty, to allocate one page to refuel the list
//
- FreeDescriptorEntries = CoreAllocatePoolPages (EfiBootServicesData,
+ FreeDescriptorEntries = CoreAllocatePoolPages (
+ EfiBootServicesData,
EFI_SIZE_TO_PAGES (DEFAULT_PAGE_ALLOCATION_GRANULARITY),
- DEFAULT_PAGE_ALLOCATION_GRANULARITY);
+ DEFAULT_PAGE_ALLOCATION_GRANULARITY,
+ FALSE
+ );
if (FreeDescriptorEntries != NULL) {
//
// Enque the free memmory map entries into the list
@@ -894,17 +898,41 @@ CoreConvertPagesEx (
//
CoreAddRange (MemType, Start, RangeEnd, Attribute);
if (ChangingType && (MemType == EfiConventionalMemory)) {
- //
- // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
- // macro will ASSERT() if address is 0. Instead, CoreAddRange() guarantees
- // that the page starting at address 0 is always filled with zeros.
- //
if (Start == 0) {
+ //
+ // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
+ // macro will ASSERT() if address is 0. Instead, CoreAddRange()
+ // guarantees that the page starting at address 0 is always filled
+ // with zeros.
+ //
if (RangeEnd > EFI_PAGE_SIZE) {
DEBUG_CLEAR_MEMORY ((VOID *)(UINTN) EFI_PAGE_SIZE, (UINTN) (RangeEnd - EFI_PAGE_SIZE + 1));
}
} else {
- DEBUG_CLEAR_MEMORY ((VOID *)(UINTN) Start, (UINTN) (RangeEnd - Start + 1));
+ //
+ // If Heap Guard is enabled, the page at the top and/or bottom of
+ // this memory block being freed might be inaccessible. Skip them
+ // to avoid a page fault exception.
+ //
+ UINT64 StartToClear;
+ UINT64 EndToClear;
+
+ StartToClear = Start;
+ EndToClear = RangeEnd;
+ if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
+ if (IsGuardPage(StartToClear)) {
+ StartToClear += EFI_PAGE_SIZE;
+ }
+ if (IsGuardPage (EndToClear)) {
+ EndToClear -= EFI_PAGE_SIZE;
+ }
+ ASSERT (EndToClear > StartToClear);
+ }
+
+ DEBUG_CLEAR_MEMORY(
+ (VOID *)(UINTN)StartToClear,
+ (UINTN)(EndToClear - StartToClear + 1)
+ );
}
}
@@ -991,6 +1019,7 @@ CoreUpdateMemoryAttributes (
@param NewType The type of memory the range is going to be
turned into
@param Alignment Bits to align with
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The base address of the range, or 0 if the range was not found
@@ -1001,7 +1030,8 @@ CoreFindFreePagesI (
IN UINT64 MinAddress,
IN UINT64 NumberOfPages,
IN EFI_MEMORY_TYPE NewType,
- IN UINTN Alignment
+ IN UINTN Alignment,
+ IN BOOLEAN NeedGuard
)
{
UINT64 NumberOfBytes;
@@ -1093,6 +1123,17 @@ CoreFindFreePagesI (
// If this is the best match so far remember it
//
if (DescEnd > Target) {
+ if (NeedGuard) {
+ DescEnd = AdjustMemoryS (
+ DescEnd + 1 - DescNumberOfBytes,
+ DescNumberOfBytes,
+ NumberOfBytes
+ );
+ if (DescEnd == 0) {
+ continue;
+ }
+ }
+
Target = DescEnd;
}
}
@@ -1123,6 +1164,7 @@ CoreFindFreePagesI (
@param NewType The type of memory the range is going to be
turned into
@param Alignment Bits to align with
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The base address of the range, or 0 if the range was not found.
@@ -1132,7 +1174,8 @@ FindFreePages (
IN UINT64 MaxAddress,
IN UINT64 NoPages,
IN EFI_MEMORY_TYPE NewType,
- IN UINTN Alignment
+ IN UINTN Alignment,
+ IN BOOLEAN NeedGuard
)
{
UINT64 Start;
@@ -1146,7 +1189,8 @@ FindFreePages (
mMemoryTypeStatistics[NewType].BaseAddress,
NoPages,
NewType,
- Alignment
+ Alignment,
+ NeedGuard
);
if (Start != 0) {
return Start;
@@ -1157,7 +1201,8 @@ FindFreePages (
// Attempt to find free pages in the default allocation bin
//
if (MaxAddress >= mDefaultMaximumAddress) {
- Start = CoreFindFreePagesI (mDefaultMaximumAddress, 0, NoPages, NewType, Alignment);
+ Start = CoreFindFreePagesI (mDefaultMaximumAddress, 0, NoPages, NewType,
+ Alignment, NeedGuard);
if (Start != 0) {
if (Start < mDefaultBaseAddress) {
mDefaultBaseAddress = Start;
@@ -1172,7 +1217,8 @@ FindFreePages (
// address range. If this allocation fails, then there are not enough
// resources anywhere to satisfy the request.
//
- Start = CoreFindFreePagesI (MaxAddress, 0, NoPages, NewType, Alignment);
+ Start = CoreFindFreePagesI (MaxAddress, 0, NoPages, NewType, Alignment,
+ NeedGuard);
if (Start != 0) {
return Start;
}
@@ -1187,7 +1233,7 @@ FindFreePages (
//
// If any memory resources were promoted, then re-attempt the allocation
//
- return FindFreePages (MaxAddress, NoPages, NewType, Alignment);
+ return FindFreePages (MaxAddress, NoPages, NewType, Alignment, NeedGuard);
}
@@ -1200,6 +1246,7 @@ FindFreePages (
@param NumberOfPages The number of pages to allocate
@param Memory A pointer to receive the base allocated memory
address
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return Status. On success, Memory is filled in with the base address allocated
@retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in
@@ -1215,7 +1262,8 @@ CoreInternalAllocatePages (
IN EFI_ALLOCATE_TYPE Type,
IN EFI_MEMORY_TYPE MemoryType,
IN UINTN NumberOfPages,
- IN OUT EFI_PHYSICAL_ADDRESS *Memory
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN BOOLEAN NeedGuard
)
{
EFI_STATUS Status;
@@ -1301,7 +1349,8 @@ CoreInternalAllocatePages (
// If not a specific address, then find an address to allocate
//
if (Type != AllocateAddress) {
- Start = FindFreePages (MaxAddress, NumberOfPages, MemoryType, Alignment);
+ Start = FindFreePages (MaxAddress, NumberOfPages, MemoryType, Alignment,
+ NeedGuard);
if (Start == 0) {
Status = EFI_OUT_OF_RESOURCES;
goto Done;
@@ -1311,12 +1360,19 @@ CoreInternalAllocatePages (
//
// Convert pages from FreeMemory to the requested type
//
- Status = CoreConvertPages (Start, NumberOfPages, MemoryType);
+ if (NeedGuard) {
+ Status = CoreConvertPagesWithGuard(Start, NumberOfPages, MemoryType);
+ } else {
+ Status = CoreConvertPages(Start, NumberOfPages, MemoryType);
+ }
Done:
CoreReleaseMemoryLock ();
if (!EFI_ERROR (Status)) {
+ if (NeedGuard) {
+ SetGuardForMemory (Start, NumberOfPages);
+ }
*Memory = Start;
}
@@ -1351,8 +1407,11 @@ CoreAllocatePages (
)
{
EFI_STATUS Status;
+ BOOLEAN NeedGuard;
- Status = CoreInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory);
+ NeedGuard = IsPageTypeToGuard (MemoryType, Type) && !mOnGuarding;
+ Status = CoreInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory,
+ NeedGuard);
if (!EFI_ERROR (Status)) {
CoreUpdateProfile (
(EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
@@ -1393,6 +1452,7 @@ CoreInternalFreePages (
LIST_ENTRY *Link;
MEMORY_MAP *Entry;
UINTN Alignment;
+ BOOLEAN IsGuarded;
//
// Free the range
@@ -1438,14 +1498,20 @@ CoreInternalFreePages (
*MemoryType = Entry->Type;
}
- Status = CoreConvertPages (Memory, NumberOfPages, EfiConventionalMemory);
-
- if (EFI_ERROR (Status)) {
- goto Done;
+ IsGuarded = IsPageTypeToGuard (Entry->Type, AllocateAnyPages) &&
+ IsMemoryGuarded (Memory);
+ if (IsGuarded) {
+ Status = CoreConvertPagesWithGuard (Memory, NumberOfPages,
+ EfiConventionalMemory);
+ } else {
+ Status = CoreConvertPages (Memory, NumberOfPages, EfiConventionalMemory);
}
Done:
CoreReleaseMemoryLock ();
+ if (IsGuarded) {
+ UnsetGuardForMemory(Memory, NumberOfPages);
+ }
return Status;
}
@@ -1843,6 +1909,12 @@ Done:
*MemoryMapSize = BufferSize;
+ DEBUG_CODE (
+ if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
+ DumpGuardedMemoryBitmap ();
+ }
+ );
+
return Status;
}
@@ -1854,6 +1926,7 @@ Done:
@param PoolType The type of memory for the new pool pages
@param NumberOfPages No of pages to allocate
@param Alignment Bits to align.
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The allocated memory, or NULL
@@ -1862,7 +1935,8 @@ VOID *
CoreAllocatePoolPages (
IN EFI_MEMORY_TYPE PoolType,
IN UINTN NumberOfPages,
- IN UINTN Alignment
+ IN UINTN Alignment,
+ IN BOOLEAN NeedGuard
)
{
UINT64 Start;
@@ -1870,7 +1944,8 @@ CoreAllocatePoolPages (
//
// Find the pages to convert
//
- Start = FindFreePages (MAX_ADDRESS, NumberOfPages, PoolType, Alignment);
+ Start = FindFreePages (MAX_ADDRESS, NumberOfPages, PoolType, Alignment,
+ NeedGuard);
//
// Convert it to boot services data
@@ -1878,7 +1953,11 @@ CoreAllocatePoolPages (
if (Start == 0) {
DEBUG ((DEBUG_ERROR | DEBUG_PAGE, "AllocatePoolPages: failed to allocate %d pages\n", (UINT32)NumberOfPages));
} else {
- CoreConvertPages (Start, NumberOfPages, PoolType);
+ if (NeedGuard) {
+ CoreConvertPagesWithGuard (Start, NumberOfPages, PoolType);
+ } else {
+ CoreConvertPages (Start, NumberOfPages, PoolType);
+ }
}
return (VOID *)(UINTN) Start;
diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
index dd165fea75..14aaef8250 100644
--- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
+++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
@@ -14,6 +14,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#include "DxeMain.h"
#include "Imem.h"
+#include "HeapGuard.h"
STATIC EFI_LOCK mPoolMemoryLock = EFI_INITIALIZE_LOCK_VARIABLE (TPL_NOTIFY);
@@ -169,7 +170,7 @@ LookupPoolHead (
}
}
- Pool = CoreAllocatePoolI (EfiBootServicesData, sizeof (POOL));
+ Pool = CoreAllocatePoolI (EfiBootServicesData, sizeof (POOL), FALSE);
if (Pool == NULL) {
return NULL;
}
@@ -214,7 +215,8 @@ CoreInternalAllocatePool (
OUT VOID **Buffer
)
{
- EFI_STATUS Status;
+ EFI_STATUS Status;
+ BOOLEAN NeedGuard;
//
// If it's not a valid type, fail it
@@ -238,6 +240,8 @@ CoreInternalAllocatePool (
return EFI_OUT_OF_RESOURCES;
}
+ NeedGuard = IsPoolTypeToGuard (PoolType) && !mOnGuarding;
+
//
// Acquire the memory lock and make the allocation
//
@@ -246,7 +250,7 @@ CoreInternalAllocatePool (
return EFI_OUT_OF_RESOURCES;
}
- *Buffer = CoreAllocatePoolI (PoolType, Size);
+ *Buffer = CoreAllocatePoolI (PoolType, Size, NeedGuard);
CoreReleaseLock (&mPoolMemoryLock);
return (*Buffer != NULL) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES;
}
@@ -298,6 +302,7 @@ CoreAllocatePool (
@param PoolType The type of memory for the new pool pages
@param NoPages No of pages to allocate
@param Granularity Bits to align.
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The allocated memory, or NULL
@@ -307,7 +312,8 @@ VOID *
CoreAllocatePoolPagesI (
IN EFI_MEMORY_TYPE PoolType,
IN UINTN NoPages,
- IN UINTN Granularity
+ IN UINTN Granularity,
+ IN BOOLEAN NeedGuard
)
{
VOID *Buffer;
@@ -318,11 +324,14 @@ CoreAllocatePoolPagesI (
return NULL;
}
- Buffer = CoreAllocatePoolPages (PoolType, NoPages, Granularity);
+ Buffer = CoreAllocatePoolPages (PoolType, NoPages, Granularity, NeedGuard);
CoreReleaseMemoryLock ();
if (Buffer != NULL) {
- ApplyMemoryProtectionPolicy (EfiConventionalMemory, PoolType,
+ if (NeedGuard) {
+ SetGuardForMemory ((EFI_PHYSICAL_ADDRESS)Buffer, NoPages);
+ }
+ ApplyMemoryProtectionPolicy(EfiConventionalMemory, PoolType,
(EFI_PHYSICAL_ADDRESS)(UINTN)Buffer, EFI_PAGES_TO_SIZE (NoPages));
}
return Buffer;
@@ -334,6 +343,7 @@ CoreAllocatePoolPagesI (
@param PoolType Type of pool to allocate
@param Size The amount of pool to allocate
+ @param NeedGuard Flag to indicate Guard page is needed or not
@return The allocate pool, or NULL
@@ -341,7 +351,8 @@ CoreAllocatePoolPagesI (
VOID *
CoreAllocatePoolI (
IN EFI_MEMORY_TYPE PoolType,
- IN UINTN Size
+ IN UINTN Size,
+ IN BOOLEAN NeedGuard
)
{
POOL *Pool;
@@ -355,6 +366,7 @@ CoreAllocatePoolI (
UINTN Offset, MaxOffset;
UINTN NoPages;
UINTN Granularity;
+ BOOLEAN HasPoolTail;
ASSERT_LOCKED (&mPoolMemoryLock);
@@ -372,6 +384,9 @@ CoreAllocatePoolI (
// Adjust the size by the pool header & tail overhead
//
+ HasPoolTail = !(NeedGuard &&
+ ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+
//
// Adjusting the Size to be of proper alignment so that
// we don't get an unaligned access fault later when
@@ -391,10 +406,16 @@ CoreAllocatePoolI (
// If allocation is over max size, just allocate pages for the request
// (slow)
//
- if (Index >= SIZE_TO_LIST (Granularity)) {
- NoPages = EFI_SIZE_TO_PAGES(Size) + EFI_SIZE_TO_PAGES (Granularity) - 1;
+ if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
+ if (!HasPoolTail) {
+ Size -= sizeof (POOL_TAIL);
+ }
+ NoPages = EFI_SIZE_TO_PAGES (Size) + EFI_SIZE_TO_PAGES (Granularity) - 1;
NoPages &= ~(UINTN)(EFI_SIZE_TO_PAGES (Granularity) - 1);
- Head = CoreAllocatePoolPagesI (PoolType, NoPages, Granularity);
+ Head = CoreAllocatePoolPagesI (PoolType, NoPages, Granularity, NeedGuard);
+ if (NeedGuard) {
+ Head = AdjustPoolHeadA ((EFI_PHYSICAL_ADDRESS)Head, NoPages, Size);
+ }
goto Done;
}
@@ -422,7 +443,8 @@ CoreAllocatePoolI (
//
// Get another page
//
- NewPage = CoreAllocatePoolPagesI (PoolType, EFI_SIZE_TO_PAGES (Granularity), Granularity);
+ NewPage = CoreAllocatePoolPagesI (PoolType, EFI_SIZE_TO_PAGES (Granularity),
+ Granularity, NeedGuard);
if (NewPage == NULL) {
goto Done;
}
@@ -468,30 +490,39 @@ Done:
if (Head != NULL) {
+ //
+ // Account the allocation
+ //
+ Pool->Used += Size;
+
//
// If we have a pool buffer, fill in the header & tail info
//
Head->Signature = POOL_HEAD_SIGNATURE;
Head->Size = Size;
Head->Type = (EFI_MEMORY_TYPE) PoolType;
- Tail = HEAD_TO_TAIL (Head);
- Tail->Signature = POOL_TAIL_SIGNATURE;
- Tail->Size = Size;
Buffer = Head->Data;
- DEBUG_CLEAR_MEMORY (Buffer, Size - POOL_OVERHEAD);
+
+ if (HasPoolTail) {
+ Tail = HEAD_TO_TAIL (Head);
+ Tail->Signature = POOL_TAIL_SIGNATURE;
+ Tail->Size = Size;
+
+ Size -= POOL_OVERHEAD;
+ } else {
+ Size -= SIZE_OF_POOL_HEAD;
+ }
+
+ DEBUG_CLEAR_MEMORY (Buffer, Size);
DEBUG ((
DEBUG_POOL,
"AllocatePoolI: Type %x, Addr %p (len %lx) %,ld\n", PoolType,
Buffer,
- (UINT64)(Size - POOL_OVERHEAD),
+ (UINT64)Size,
(UINT64) Pool->Used
));
- //
- // Account the allocation
- //
- Pool->Used += Size;
} else {
DEBUG ((DEBUG_ERROR | DEBUG_POOL, "AllocatePool: failed to allocate %ld bytes\n", (UINT64) Size));
@@ -588,6 +619,34 @@ CoreFreePoolPagesI (
(EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
}
+/**
+ Internal function. Frees guarded pool pages.
+
+ @param PoolType The type of memory for the pool pages
+ @param Memory The base address to free
+ @param NoPages The number of pages to free
+
+**/
+STATIC
+VOID
+CoreFreePoolPagesWithGuard (
+ IN EFI_MEMORY_TYPE PoolType,
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NoPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS MemoryGuarded;
+ UINTN NoPagesGuarded;
+
+ MemoryGuarded = Memory;
+ NoPagesGuarded = NoPages;
+
+ AdjustMemoryF (&Memory, &NoPages);
+ CoreFreePoolPagesI (PoolType, Memory, NoPages);
+
+ UnsetGuardForMemory (MemoryGuarded, NoPagesGuarded);
+}
+
/**
Internal function to free a pool entry.
Caller must have the memory lock held
@@ -616,6 +675,8 @@ CoreFreePoolI (
UINTN Offset;
BOOLEAN AllFree;
UINTN Granularity;
+ BOOLEAN IsGuarded;
+ BOOLEAN HasPoolTail;
ASSERT(Buffer != NULL);
//
@@ -628,24 +689,32 @@ CoreFreePoolI (
return EFI_INVALID_PARAMETER;
}
- Tail = HEAD_TO_TAIL (Head);
- ASSERT(Tail != NULL);
+ IsGuarded = IsPoolTypeToGuard (Head->Type) &&
+ IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)Head);
+ HasPoolTail = !(IsGuarded &&
+ ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
- //
- // Debug
- //
- ASSERT (Tail->Signature == POOL_TAIL_SIGNATURE);
- ASSERT (Head->Size == Tail->Size);
- ASSERT_LOCKED (&mPoolMemoryLock);
+ if (HasPoolTail) {
+ Tail = HEAD_TO_TAIL (Head);
+ ASSERT (Tail != NULL);
- if (Tail->Signature != POOL_TAIL_SIGNATURE) {
- return EFI_INVALID_PARAMETER;
- }
+ //
+ // Debug
+ //
+ ASSERT (Tail->Signature == POOL_TAIL_SIGNATURE);
+ ASSERT (Head->Size == Tail->Size);
- if (Head->Size != Tail->Size) {
- return EFI_INVALID_PARAMETER;
+ if (Tail->Signature != POOL_TAIL_SIGNATURE) {
+ return EFI_INVALID_PARAMETER;
+ }
+
+ if (Head->Size != Tail->Size) {
+ return EFI_INVALID_PARAMETER;
+ }
}
+ ASSERT_LOCKED (&mPoolMemoryLock);
+
//
// Determine the pool type and account for it
//
@@ -680,14 +749,27 @@ CoreFreePoolI (
//
// If it's not on the list, it must be pool pages
//
- if (Index >= SIZE_TO_LIST (Granularity)) {
+ if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
//
// Return the memory pages back to free memory
//
- NoPages = EFI_SIZE_TO_PAGES(Size) + EFI_SIZE_TO_PAGES (Granularity) - 1;
+ NoPages = EFI_SIZE_TO_PAGES (Size) + EFI_SIZE_TO_PAGES (Granularity) - 1;
NoPages &= ~(UINTN)(EFI_SIZE_TO_PAGES (Granularity) - 1);
- CoreFreePoolPagesI (Pool->MemoryType, (EFI_PHYSICAL_ADDRESS) (UINTN) Head, NoPages);
+ if (IsGuarded) {
+ Head = AdjustPoolHeadF ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
+ CoreFreePoolPagesWithGuard (
+ Pool->MemoryType,
+ (EFI_PHYSICAL_ADDRESS)(UINTN)Head,
+ NoPages
+ );
+ } else {
+ CoreFreePoolPagesI (
+ Pool->MemoryType,
+ (EFI_PHYSICAL_ADDRESS)(UINTN)Head,
+ NoPages
+ );
+ }
} else {
--
2.14.1.windows.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
2017-10-11 3:18 ` [PATCH 1/5] MdeModulePkg/DxeCore: Implement heap guard feature for UEFI Jian J Wang
@ 2017-10-11 3:18 ` Jian J Wang
2017-10-13 1:27 ` Dong, Eric
2017-10-11 3:18 ` [PATCH 3/5] MdeModulePkg/MdeModulePkg.dec, .uni: Add heap guard related PCDs and string tokens Jian J Wang
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel
Cc: Star Zeng, Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
This feature makes use of paging mechanism to add a hidden (not present)
page just before and after the allocated memory block. If the code tries
to access memory outside of the allocated part, page fault exception will
be triggered.
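As an illustration only (a hypothetical SMM driver snippet, not part of this
patch; it assumes page guarding is enabled for EfiRuntimeServicesData via the
PCDs described below), the very first access past an allocation now faults
immediately:

  EFI_STATUS            Status;
  EFI_PHYSICAL_ADDRESS  Memory;
  UINT8                 *Buffer;

  Status = gSmst->SmmAllocatePages (AllocateAnyPages, EfiRuntimeServicesData,
                                    1, &Memory);
  if (!EFI_ERROR (Status)) {
    Buffer = (UINT8 *)(UINTN)Memory;
    Buffer[EFI_PAGE_SIZE] = 0;  // First byte past the page lands on the
                                // not-present tail Guard page -> page fault
    gSmst->SmmFreePages (Memory, 1);
  }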
This feature is controlled by three PCDs:
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
BIT2 and BIT3 of PcdHeapGuardPropertyMask can be used to enable or disable
the memory guard for SMM pages and pools respectively. PcdHeapGuardPoolType
and/or PcdHeapGuardPageType are used to enable or disable the guard for
specific types of memory. For example, we can turn on the guard only for
EfiBootServicesData and EfiRuntimeServicesData by setting the PCD to 0x50.
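As a sketch of how that value is derived (illustrative arithmetic only; the
bit index is simply the EFI_MEMORY_TYPE enumeration value):

  (1 << EfiBootServicesData) | (1 << EfiRuntimeServicesData)
    = BIT4 | BIT6
    = 0x10 | 0x40
    = 0x50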
Pool memory is usually not an integer multiple of one page, and is often
smaller than a page. There's no way to monitor for overflow at both the top
and the bottom of a pool allocation at the same time. BIT7 of
PcdHeapGuardPropertyMask therefore controls where the head of pool memory is
positioned, so that it's easier to catch overflow either in the growing
direction or in the decreasing direction of memory.
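Roughly, the resulting layout looks like this (an editorial sketch based on
the AdjustPoolHeadA() logic added by this patch, not wording from the
original commit message):

  BIT7 clear (default):
    [head Guard][ ...padding... |pool head|data][tail Guard]
    -> an overrun past the end of the data hits the tail Guard first.

  BIT7 set:
    [head Guard][pool head|data| ...padding... ][tail Guard]
    -> an underrun below the pool head hits the head Guard first.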
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c | 1438 ++++++++++++++++++++++++++
MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h | 395 +++++++
MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c | 704 +++++++++++++
MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h | 174 ++++
MdeModulePkg/Core/PiSmmCore/Page.c | 51 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.c | 12 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.h | 80 +-
MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf | 8 +
MdeModulePkg/Core/PiSmmCore/Pool.c | 77 +-
9 files changed, 2911 insertions(+), 28 deletions(-)
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
new file mode 100644
index 0000000000..c64eaea5d1
--- /dev/null
+++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
@@ -0,0 +1,1438 @@
+/** @file
+ UEFI Heap Guard functions.
+
+Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+This program and the accompanying materials
+are licensed and made available under the terms and conditions of the BSD License
+which accompanies this distribution. The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.php
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#include "HeapGuard.h"
+
+//
+// Pointer to table tracking the Guarded memory with bitmap, in which '1'
+// is used to indicate memory guarded. '0' might be free memory or Guard
+// page itself, depending on status of memory adjacent to it.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED UINT64 *mGuardedMemoryMap = NULL;
+
+//
+// Current depth level of map table pointed by mGuardedMemoryMap.
+// mMapLevel must be initialized at least by 1. It will be automatically
+// updated according to the address of memory just tracked.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED UINTN mMapLevel = 1;
+
+//
+// Flag to indicate whether the SMM CPU (paging) support is ready, i.e. the
+// gEfiSmmCpuProtocolGuid protocol has been located and page attributes can
+// be modified.
+//
+BOOLEAN mIsSmmCpuMode = FALSE;
+
+/**
+ Set corresponding bits in bitmap table to 1 according to the address
+
+ @param[in] Address Start address to set for
+ @param[in] BitNumber Number of bits to set
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return VOID
+**/
+STATIC
+VOID
+SetBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN Lsbs;
+ UINTN Qwords;
+ UINTN Msbs;
+ UINTN StartBit;
+ UINTN EndBit;
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
+ GUARDED_HEAP_MAP_ENTRY_BITS;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ Qwords = 0;
+ }
+
+ if (Msbs > 0) {
+ *BitMap |= LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
+ BitMap += 1;
+ }
+
+ if (Qwords > 0) {
+ SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES,
+ (UINT64)-1);
+ BitMap += Qwords;
+ }
+
+ if (Lsbs > 0) {
+ *BitMap |= (LShiftU64 (1, Lsbs) - 1);
+ }
+}
+
+/**
+ Set corresponding bits in bitmap table to 0 according to the address
+
+ @param[in] Address Start address to set for
+ @param[in] BitNumber Number of bits to set
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return VOID
+**/
+STATIC
+VOID
+ClearBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN Lsbs;
+ UINTN Qwords;
+ UINTN Msbs;
+ UINTN StartBit;
+ UINTN EndBit;
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
+ GUARDED_HEAP_MAP_ENTRY_BITS;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ Qwords = 0;
+ }
+
+ if (Msbs > 0) {
+ *BitMap &= ~LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
+ BitMap += 1;
+ }
+
+ if (Qwords > 0) {
+ SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES, 0);
+ BitMap += Qwords;
+ }
+
+ if (Lsbs > 0) {
+ *BitMap &= ~(LShiftU64 (1, Lsbs) - 1);
+ }
+}
+
+/**
+ Get corresponding bits in bitmap table according to the address
+
+ The value of bit 0 corresponds to the status of memory at given Address.
+ No more than 64 bits can be retrieved in one call.
+
+ @param[in] Address Start address to retrieve bits for
+ @param[in] BitNumber Number of bits to get
+ @param[in] BitMap Pointer to bitmap which covers the Address
+
+ @return An integer containing the bits information
+**/
+STATIC
+UINT64
+GetBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN BitNumber,
+ IN UINT64 *BitMap
+ )
+{
+ UINTN StartBit;
+ UINTN EndBit;
+ UINTN Lsbs;
+ UINTN Msbs;
+ UINT64 Result;
+
+ ASSERT (BitNumber <= GUARDED_HEAP_MAP_ENTRY_BITS);
+
+ StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
+ EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+
+ if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
+ Msbs = GUARDED_HEAP_MAP_ENTRY_BITS - StartBit;
+ Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
+ } else {
+ Msbs = BitNumber;
+ Lsbs = 0;
+ }
+
+ Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
+ if (Lsbs > 0) {
+ BitMap += 1;
+ Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
+ }
+
+ return Result;
+}
+
+/**
+ Helper function to allocate pages without Guard for internal uses
+
+ @param[in] Pages Page number
+
+ @return Address of memory allocated
+**/
+VOID *
+PageAlloc (
+ IN UINTN Pages
+ )
+{
+ EFI_STATUS Status;
+ EFI_PHYSICAL_ADDRESS Memory;
+
+ Status = SmmInternalAllocatePages (AllocateAnyPages, EfiRuntimeServicesData,
+ Pages, &Memory, FALSE);
+ if (EFI_ERROR (Status)) {
+ Memory = 0;
+ }
+
+ return (VOID *)(UINTN)Memory;
+}
+
+/**
+ Locate the pointer of bitmap from the guarded memory bitmap tables, which
+ covers the given Address.
+
+ @param[in] Address Start address to search the bitmap for
+ @param[in] AllocMapUnit Flag to indicate memory allocation for the table
+ @param[out] BitMap Pointer to bitmap which covers the Address
+
+ @return The bit number from given Address to the end of current map table
+**/
+UINTN
+FindGuardedMemoryMap (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN BOOLEAN AllocMapUnit,
+ OUT UINT64 **BitMap
+ )
+{
+ UINTN Level;
+ UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+ UINT64 **GuardMap;
+ UINT64 *MapMemory;
+ UINTN Index;
+ UINTN Size;
+ UINTN BitsToUnitEnd;
+
+ //
+ // Adjust current map table depth according to the address to access
+ //
+ while (mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH
+ &&
+ RShiftU64 (
+ Address,
+ LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1]
+ ) != 0) {
+
+ if (mGuardedMemoryMap != NULL) {
+ Size = (LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1] + 1)
+ * GUARDED_HEAP_MAP_ENTRY_BYTES;
+ MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
+ ASSERT (MapMemory != NULL);
+
+ SetMem ((VOID *)MapMemory, Size, 0);
+
+ *(UINT64 **)MapMemory = mGuardedMemoryMap;
+ mGuardedMemoryMap = MapMemory;
+ }
+
+ mMapLevel++;
+
+ }
+
+ GuardMap = &mGuardedMemoryMap;
+ for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+ ++Level) {
+
+ if (*GuardMap == NULL) {
+ if (!AllocMapUnit) {
+ GuardMap = NULL;
+ break;
+ }
+
+ Size = (LevelMask[Level] + 1) * GUARDED_HEAP_MAP_ENTRY_BYTES;
+ MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
+ ASSERT (MapMemory != NULL);
+
+ SetMem ((VOID *)MapMemory, Size, 0);
+ *GuardMap = (UINT64 *)MapMemory;
+ }
+
+ Index = (UINTN)RShiftU64 (Address, LevelShift[Level]);
+ Index &= LevelMask[Level];
+ GuardMap = (UINT64 **)((*GuardMap) + Index);
+
+ }
+
+ BitsToUnitEnd = GUARDED_HEAP_MAP_BITS - GUARDED_HEAP_MAP_BIT_INDEX (Address);
+ *BitMap = (UINT64 *)GuardMap;
+
+ return BitsToUnitEnd;
+}
+
+/**
+ Set corresponding bits in bitmap table to 1 according to given memory range
+
+ @param[in] Address Memory address to guard from
+ @param[in] NumberOfPages Number of pages to guard
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN BitsToUnitEnd;
+
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
+ ASSERT (BitMap != NULL);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ SetBits (Address, Bits, BitMap);
+
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+}
+
+/**
+ Clear corresponding bits in bitmap table according to given memory range
+
+ @param[in] Address Memory address to unset from
+ @param[in] NumberOfPages Number of pages to unset guard
+
+ @return VOID
+**/
+VOID
+EFIAPI
+ClearGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN BitsToUnitEnd;
+
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
+ ASSERT (BitMap != NULL);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ ClearBits (Address, Bits, BitMap);
+
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+}
+
+/**
+ Retrieve corresponding bits in bitmap table according to given memory range
+
+ @param[in] Address Memory address to retrieve from
+ @param[in] NumberOfPages Number of pages to retrieve
+
+ @return An integer containing the guard bits of the given memory range.
+**/
+UINTN
+GetGuardedMemoryBits (
+ IN EFI_PHYSICAL_ADDRESS Address,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *BitMap;
+ UINTN Bits;
+ UINTN Result;
+ UINTN Shift;
+ UINTN BitsToUnitEnd;
+
+ ASSERT (NumberOfPages <= GUARDED_HEAP_MAP_ENTRY_BITS);
+
+ Result = 0;
+ Shift = 0;
+ while (NumberOfPages > 0) {
+ BitsToUnitEnd = FindGuardedMemoryMap (Address, FALSE, &BitMap);
+
+ if (NumberOfPages > BitsToUnitEnd) {
+ // Cross map unit
+ Bits = BitsToUnitEnd;
+ } else {
+ Bits = NumberOfPages;
+ }
+
+ if (BitMap != NULL) {
+ Result |= LShiftU64 (GetBits (Address, Bits, BitMap), Shift);
+ }
+
+ Shift += Bits;
+ NumberOfPages -= Bits;
+ Address += EFI_PAGES_TO_SIZE (Bits);
+ }
+
+ return Result;
+}
+
+/**
+ Get bit value in bitmap table for the given address
+
+ @param[in] Address The address to retrieve for
+
+ @return 1 or 0
+**/
+UINTN
+EFIAPI
+GetGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+
+ FindGuardedMemoryMap (Address, FALSE, &GuardMap);
+ if (GuardMap != NULL) {
+ if (RShiftU64 (*GuardMap,
+ GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address)) & 1) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ Set the bit in bitmap table for the given address
+
+ @param[in] Address The address to set for
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+ UINT64 BitMask;
+
+ FindGuardedMemoryMap (Address, TRUE, &GuardMap);
+ if (GuardMap != NULL) {
+ BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
+ *GuardMap |= BitMask;
+ }
+}
+
+/**
+ Clear the bit in bitmap table for the given address
+
+ @param[in] Address The address to clear for
+
+ @return VOID
+**/
+VOID
+EFIAPI
+ClearGuardMapBit (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINT64 *GuardMap;
+ UINT64 BitMask;
+
+ FindGuardedMemoryMap (Address, TRUE, &GuardMap);
+ if (GuardMap != NULL) {
+ BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
+ *GuardMap &= ~BitMask;
+ }
+}
+
+/**
+ Check to see if the page at the given address is a Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a Guard page
+ @return FALSE The page at Address is not a Guard page
+**/
+BOOLEAN
+EFIAPI
+IsGuardPage (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ UINTN BitMap;
+
+ BitMap = GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 3);
+ return (BitMap == 0b001 || BitMap == 0b100 || BitMap == 0b101);
+}
+
+/**
+ Check to see if the page at the given address is a head Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a head Guard page
+ @return FALSE The page at Address is not a head Guard page
+**/
+BOOLEAN
+EFIAPI
+IsHeadGuard (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardedMemoryBits (Address, 2) == 0b10);
+}
+
+/**
+ Check to see if the page at the given address is a tail Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a tail Guard page
+ @return FALSE The page at Address is not a tail Guard page
+**/
+BOOLEAN
+EFIAPI
+IsTailGuard (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 2) == 0b01);
+}
+
+/**
+ Check to see if the page at the given address is guarded or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is guarded
+ @return FALSE The page at Address is not guarded
+**/
+BOOLEAN
+EFIAPI
+IsMemoryGuarded (
+ IN EFI_PHYSICAL_ADDRESS Address
+ )
+{
+ return (GetGuardMapBit (Address) == 1);
+}
+
+/**
+ Set the page at the given address to be a Guard page.
+
+ This is done by changing the page table attribute to be NOT PRESENT.
+
+ @param[in] BaseAddress Page address to set Guard at
+
+ @return VOID
+**/
+VOID
+EFIAPI
+SetGuardPage (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress
+ )
+{
+ if (mIsSmmCpuMode) {
+ SmmSetMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
+ }
+}
+
+/**
+ Unset the Guard page at the given address to the normal memory.
+
+ This is done by changing the page table attribute to be PRESENT.
+
+ @param[in] BaseAddress Page address to unset Guard at
+
+ @return VOID
+**/
+VOID
+EFIAPI
+UnsetGuardPage (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress
+ )
+{
+ if (mIsSmmCpuMode) {
+ SmmClearMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
+ }
+}
+
+/**
+ Check to see if the given type of memory should be guarded or not
+
+ @param[in] MemoryType Memory type to check
+ @param[in] AllocateType Allocation type to check
+ @param[in] PageOrPool Indicate a page allocation or pool allocation
+
+
+ @return TRUE The given type of memory should be guarded
+ @return FALSE The given type of memory should not be guarded
+**/
+BOOLEAN
+IsMemoryTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType,
+ IN UINT8 PageOrPool
+ )
+{
+ UINT64 TestBit;
+ UINT64 ConfigBit;
+
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & PageOrPool) == 0 ||
+ AllocateType == AllocateAddress) {
+ return FALSE;
+ }
+
+ ConfigBit = 0;
+ if (PageOrPool & GUARD_HEAP_TYPE_POOL) {
+ ConfigBit |= PcdGet64 (PcdHeapGuardPoolType);
+ }
+
+ if (PageOrPool & GUARD_HEAP_TYPE_PAGE) {
+ ConfigBit |= PcdGet64 (PcdHeapGuardPageType);
+ }
+
+ if (MemoryType == EfiRuntimeServicesData ||
+ MemoryType == EfiRuntimeServicesCode) {
+ TestBit = LShiftU64 (1, MemoryType);
+ } else if (MemoryType == EfiMaxMemoryType) {
+ TestBit = (UINT64)-1;
+ } else {
+ TestBit = 0;
+ }
+
+ return ((ConfigBit & TestBit) != 0);
+}
+
+/**
+ Check to see if a pool allocation of the given memory type should be guarded or not
+
+ @param[in] MemoryType Pool type to check
+
+
+ @return TRUE The given type of pool should be guarded
+ @return FALSE The given type of pool should not be guarded
+**/
+BOOLEAN
+IsPoolTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType
+ )
+{
+ return IsMemoryTypeToGuard (MemoryType, AllocateAnyPages,
+ GUARD_HEAP_TYPE_POOL);
+}
+
+/**
+ Check to see if a page allocation of the given memory type should be guarded or not
+
+ @param[in] MemoryType Page type to check
+ @param[in] AllocateType Allocation type to check
+
+ @return TRUE The given type of page should be guarded
+ @return FALSE The given type of page should not be guarded
+**/
+BOOLEAN
+IsPageTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType
+ )
+{
+ return IsMemoryTypeToGuard (MemoryType, AllocateType, GUARD_HEAP_TYPE_PAGE);
+}
+
+/**
+ Check to see if the heap guard is enabled for page and/or pool allocation
+
+ @return TRUE if the heap guard is enabled for page and/or pool allocation; FALSE otherwise
+**/
+BOOLEAN
+IsHeapGuardEnabled (
+ VOID
+ )
+{
+ return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
+ GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
+}
+
+/**
+ Set head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to set guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+SetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS GuardPage;
+
+ //
+ // Set tail Guard
+ //
+ GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
+ if (!IsGuardPage (GuardPage)) {
+ SetGuardPage (GuardPage);
+ }
+
+ //
+ // Set head Guard
+ //
+ GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
+ if (!IsGuardPage (GuardPage)) {
+ SetGuardPage (GuardPage);
+ }
+
+ //
+ // Mark the memory range as Guarded
+ //
+ SetGuardedMemoryBits (Memory, NumberOfPages);
+}
+
+/**
+ Unset head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to unset guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+UnsetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS GuardPage;
+
+ if (NumberOfPages == 0) {
+ return;
+ }
+
+ //
+ // Head Guard must be one page before, if any.
+ //
+ GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
+ if (IsHeadGuard (GuardPage)) {
+ if (!IsMemoryGuarded (GuardPage - EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the head Guard is not a tail Guard of adjacent memory block,
+ // unset it.
+ //
+ UnsetGuardPage (GuardPage);
+ }
+ } else if (IsMemoryGuarded (GuardPage)) {
+ //
+ // Pages before memory to free are still in Guard. It's a partial free
+ // case. Turn first page of memory block to free into a new Guard.
+ //
+ SetGuardPage (Memory);
+ }
+
+ //
+ // Tail Guard must be the page after this memory block to free, if any.
+ //
+ GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
+ if (IsTailGuard (GuardPage)) {
+ if (!IsMemoryGuarded (GuardPage + EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the tail Guard is not a head Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ UnsetGuardPage (GuardPage);
+ }
+ } else if (IsMemoryGuarded (GuardPage)) {
+ //
+ // Pages after memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a head Guard.
+ //
+ SetGuardPage (GuardPage - EFI_PAGES_TO_SIZE (1));
+ }
+
+ //
+ // No matter what, we just clear the mark of the Guarded memory.
+ //
+ ClearGuardedMemoryBits(Memory, NumberOfPages);
+}
+
+/**
+ Adjust address of free memory according to existing and/or required Guard
+
+ This function checks whether there are existing Guard pages on the adjacent
+ memory blocks, and tries to reuse them as the Guard pages of the memory to
+ be allocated.
+
+ @param[in] Start Start address of free memory block
+ @param[in] Size Size of free memory block
+ @param[in] SizeRequested Size of memory to allocate
+
+ @return The end address of memory block found
+ @return 0 if not enough space for the required size of memory and its Guard
+**/
+UINT64
+AdjustMemoryS (
+ IN UINT64 Start,
+ IN UINT64 Size,
+ IN UINT64 SizeRequested
+ )
+{
+ UINT64 Target;
+
+ Target = Start + Size - SizeRequested;
+
+ //
+ // At least one more page needed for Guard page.
+ //
+ if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
+ return 0;
+ }
+
+ if (!IsGuardPage (Start + Size)) {
+ // No Guard at tail to share. One more page is needed.
+ Target -= EFI_PAGES_TO_SIZE (1);
+ }
+
+ // Out of range?
+ if (Target < Start) {
+ return 0;
+ }
+
+ // At the edge?
+ if (Target == Start) {
+ if (!IsGuardPage (Target - EFI_PAGES_TO_SIZE (1))) {
+ // Not enough space for a new head Guard if no Guard at head to share.
+ return 0;
+ }
+ }
+
+ // OK, we have enough pages for memory and its Guards. Return the End of the
+ // free space.
+ return Target + SizeRequested - 1;
+}
+
+/**
+ Adjust the start address and number of pages to free according to Guard
+
+ The purpose of this function is to keep the shared Guard page with adjacent
+ memory block if it's still in guard, or free it if no more sharing. Another
+ is to reserve pages as Guard pages in partial page free situation.
+
+ @param[in/out] Memory Base address of memory to free
+ @param[in/out] NumberOfPages Size of memory to free
+
+ @return VOID
+**/
+VOID
+AdjustMemoryF (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ )
+{
+ EFI_PHYSICAL_ADDRESS Start;
+ EFI_PHYSICAL_ADDRESS MemoryToTest;
+ UINTN PagesToFree;
+
+ if (Memory == NULL || NumberOfPages == NULL || *NumberOfPages == 0) {
+ return;
+ }
+
+ Start = *Memory;
+ PagesToFree = *NumberOfPages;
+
+ //
+ // Head Guard must be one page before, if any.
+ //
+ MemoryToTest = Start - EFI_PAGES_TO_SIZE (1);
+ if (IsHeadGuard (MemoryToTest)) {
+ if (!IsMemoryGuarded (MemoryToTest - EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the head Guard is not a tail Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ Start -= EFI_PAGES_TO_SIZE (1);
+ PagesToFree += 1;
+ }
+ } else if (IsMemoryGuarded (MemoryToTest)) {
+ //
+ // Pages before memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a tail Guard.
+ //
+ Start += EFI_PAGES_TO_SIZE (1);
+ PagesToFree -= 1;
+ }
+
+ //
+ // Tail Guard must be the page after this memory block to free, if any.
+ //
+ MemoryToTest = Start + EFI_PAGES_TO_SIZE (PagesToFree);
+ if (IsTailGuard (MemoryToTest)) {
+ if (!IsMemoryGuarded (MemoryToTest + EFI_PAGES_TO_SIZE (1))) {
+ //
+ // If the tail Guard is not a head Guard of adjacent memory block,
+ // free it; otherwise, keep it.
+ //
+ PagesToFree += 1;
+ }
+ } else if (IsMemoryGuarded (MemoryToTest)) {
+ //
+ // Pages after memory to free are still in Guard. It's a partial free
+ // case. We need to keep one page to be a head Guard.
+ //
+ PagesToFree -= 1;
+ }
+
+ *Memory = Start;
+ *NumberOfPages = PagesToFree;
+}
+
+/**
+ Adjust the base and number of pages to really allocate according to Guard
+
+ @param[in/out] Memory Base address of free memory
+ @param[in/out] NumberOfPages Size of memory to allocate
+
+ @return VOID
+**/
+VOID
+AdjustMemoryA (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ )
+{
+ //
+ // FindFreePages() has already taken the Guard into account. It's safe to
+ // adjust the start address and/or number of pages here, to make sure that
+ // the Guards are also "allocated".
+ //
+ if (!IsGuardPage (*Memory + EFI_PAGES_TO_SIZE (*NumberOfPages))) {
+ // No tail Guard, add one.
+ *NumberOfPages += 1;
+ }
+
+ if (!IsGuardPage (*Memory - EFI_PAGE_SIZE)) {
+ // No head Guard, add one.
+ *Memory -= EFI_PAGE_SIZE;
+ *NumberOfPages += 1;
+ }
+}
+
+/**
+ Adjust the pool head position to make sure the Guard page is adjacent to
+ pool tail or pool head.
+
+ @param[in] Memory Base address of memory allocated
+ @param[in] NoPages Number of pages actually allocated
+ @param[in] Size Size of memory requested
+ (plus pool head/tail overhead)
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadA (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NoPages,
+ IN UINTN Size
+ )
+{
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
+ //
+ // Pool head is put near the head Guard
+ //
+ return (VOID *)(UINTN)Memory;
+ }
+
+ //
+ // Pool head is put near the tail Guard
+ //
+ return (VOID *)(UINTN)(Memory + EFI_PAGES_TO_SIZE (NoPages) - Size);
+}
+
+/**
+ Get the page base address according to pool head address
+
+ @param[in] Memory Head address of pool to free
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadF (
+ IN EFI_PHYSICAL_ADDRESS Memory
+ )
+{
+ if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
+ //
+ // Pool head is put near the head Guard
+ //
+ return (VOID *)(UINTN)Memory;
+ }
+
+ //
+ // Pool head is put near the tail Guard
+ //
+ return (VOID *)(UINTN)(Memory & ~EFI_PAGE_MASK);
+}
+
+/**
+ Helper function of memory allocation with Guard pages
+
+ @param FreePageList The free page node.
+ @param NumberOfPages Number of pages to be allocated.
+ @param MaxAddress Request to allocate memory below this address.
+ @param MemoryType Type of memory requested.
+
+ @return Memory address of allocated pages.
+**/
+UINTN
+InternalAllocMaxAddressWithGuard (
+ IN OUT LIST_ENTRY *FreePageList,
+ IN UINTN NumberOfPages,
+ IN UINTN MaxAddress,
+ IN EFI_MEMORY_TYPE MemoryType
+ )
+{
+ LIST_ENTRY *Node;
+ FREE_PAGE_LIST *Pages;
+ UINTN PagesToAlloc;
+ UINTN HeadGuard;
+ UINTN TailGuard;
+ UINTN Address;
+
+ for (Node = FreePageList->BackLink; Node != FreePageList;
+ Node = Node->BackLink) {
+ Pages = BASE_CR (Node, FREE_PAGE_LIST, Link);
+ if (Pages->NumberOfPages >= NumberOfPages &&
+ (UINTN)Pages + EFI_PAGES_TO_SIZE (NumberOfPages) - 1 <= MaxAddress) {
+
+ //
+ // We may need 1 or 2 more pages for Guard. Check it out.
+ //
+ PagesToAlloc = NumberOfPages;
+ TailGuard = (UINTN)Pages + EFI_PAGES_TO_SIZE (Pages->NumberOfPages);
+ if (!IsGuardPage (TailGuard)) {
+ //
+ // Add one if no Guard at the end of current free memory block.
+ //
+ PagesToAlloc += 1;
+ TailGuard = 0;
+ }
+
+ HeadGuard = (UINTN)Pages +
+ EFI_PAGES_TO_SIZE (Pages->NumberOfPages - PagesToAlloc) -
+ EFI_PAGE_SIZE;
+ if (!IsGuardPage (HeadGuard)) {
+ //
+ // Add one if no Guard at the page before the address to allocate
+ //
+ PagesToAlloc += 1;
+ HeadGuard = 0;
+ }
+
+ if (Pages->NumberOfPages < PagesToAlloc) {
+ // Not enough space to allocate memory with Guards? Try next block.
+ continue;
+ }
+
+ Address = InternalAllocPagesOnOneNode (Pages, PagesToAlloc, MaxAddress);
+ ConvertSmmMemoryMapEntry(MemoryType, Address, PagesToAlloc, FALSE);
+ CoreFreeMemoryMapStack();
+ if (!HeadGuard) {
+ // Don't pass the Guard page to user.
+ Address += EFI_PAGE_SIZE;
+ }
+ SetGuardForMemory (Address, NumberOfPages);
+ return Address;
+ }
+ }
+
+ return (UINTN)(-1);
+}
+
+/**
+ Helper function of memory free with Guard pages
+
+ @param[in] Memory Base address of memory being freed.
+ @param[in] NumberOfPages The number of pages to free.
+ @param[in] AddRegion If this memory is a newly added region.
+
+ @retval EFI_NOT_FOUND Could not find the entry that covers the range.
+ @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
+ @return EFI_SUCCESS Pages successfully freed.
+**/
+EFI_STATUS
+SmmInternalFreePagesExWithGuard (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages,
+ IN BOOLEAN AddRegion
+ )
+{
+ EFI_PHYSICAL_ADDRESS MemoryToFree;
+ UINTN PagesToFree;
+
+ MemoryToFree = Memory;
+ PagesToFree = NumberOfPages;
+
+ AdjustMemoryF (&MemoryToFree, &PagesToFree);
+ UnsetGuardForMemory (Memory, NumberOfPages);
+
+ return SmmInternalFreePagesEx (MemoryToFree, PagesToFree, AddRegion);
+}
+
+/**
+ Set all Guard pages which cannot be set during the non-SMM mode time
+**/
+VOID
+SetAllGuardPages (
+ VOID
+ )
+{
+ UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+ UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 TableEntry;
+ UINT64 Address;
+ UINT64 GuardPage;
+ INTN Level;
+ UINTN Index;
+ BOOLEAN OnGuarding;
+
+ SetMem64 (Tables, sizeof(Tables), 0);
+ SetMem64 (Addresses, sizeof(Addresses), 0);
+ SetMem64 (Indices, sizeof(Indices), 0);
+
+ Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Tables[Level] = mGuardedMemoryMap;
+ Address = 0;
+ OnGuarding = FALSE;
+
+ DEBUG_CODE (
+ DumpGuardedMemoryBitmap ();
+ );
+
+ while (TRUE) {
+ if (Indices[Level] > Entries[Level]) {
+ Tables[Level] = 0;
+ Level -= 1;
+ } else {
+
+ TableEntry = Tables[Level][Indices[Level]];
+ Address = Addresses[Level];
+
+ if (TableEntry == 0) {
+
+ OnGuarding = FALSE;
+
+ } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+
+ Level += 1;
+ Tables[Level] = (UINT64 *)TableEntry;
+ Addresses[Level] = Address;
+ Indices[Level] = 0;
+
+ continue;
+
+ } else {
+
+ Index = 0;
+ while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
+ if ((TableEntry & 1) == 1) {
+ if (OnGuarding) {
+ GuardPage = 0;
+ } else {
+ GuardPage = Address - EFI_PAGE_SIZE;
+ }
+ OnGuarding = TRUE;
+ } else {
+ if (OnGuarding) {
+ GuardPage = Address;
+ } else {
+ GuardPage = 0;
+ }
+ OnGuarding = FALSE;
+ }
+
+ if (GuardPage != 0) {
+ SetGuardPage (GuardPage);
+ }
+
+ if (TableEntry == 0) {
+ break;
+ }
+
+ TableEntry = RShiftU64 (TableEntry, 1);
+ Address += EFI_PAGE_SIZE;
+ Index += 1;
+ }
+ }
+ }
+
+ if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+ break;
+ }
+
+ Indices[Level] += 1;
+ Address = (Level == 0) ? 0 : Addresses[Level - 1];
+ Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+ }
+}
+
+/**
+ Hook function used to set all Guard pages after entering SMM mode
+**/
+VOID
+SmmEntryPointMemoryManagementHook (
+ VOID
+ )
+{
+ EFI_STATUS Status;
+ VOID *SmmCpu;
+
+ if (!mIsSmmCpuMode) {
+ Status = SmmLocateProtocol (&gEfiSmmCpuProtocolGuid, NULL, &SmmCpu);
+ if (!EFI_ERROR(Status)) {
+ mIsSmmCpuMode = TRUE;
+ SetAllGuardPages ();
+ }
+ }
+}
+
+/**
+ Helper function to convert a UINT64 value in binary to a string
+
+ @param[in] Value Value of a UINT64 integer
+ @param[in] BinString String buffer to contain the conversion result
+
+ @return VOID
+**/
+VOID
+Uint64ToBinString (
+ IN UINT64 Value,
+ OUT CHAR8 *BinString
+ )
+{
+ UINTN Index;
+
+ if (BinString == NULL) {
+ return;
+ }
+
+ for (Index = 64; Index > 0; --Index) {
+ BinString[Index - 1] = '0' + (Value & 1);
+ Value = RShiftU64 (Value, 1);
+ }
+ BinString[64] = '\0';
+}
+
+/**
+ Dump the guarded memory bit map
+
+ @return VOID
+**/
+VOID
+EFIAPI
+DumpGuardedMemoryBitmap (
+ VOID
+ )
+{
+ UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+ UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
+ UINT64 TableEntry;
+ UINT64 Address;
+ INTN Level;
+ UINTN RepeatZero;
+ CHAR8 String[GUARDED_HEAP_MAP_ENTRY_BITS + 1];
+ CHAR8 *Ruler1 = " 3 2"
+ " 1 0";
+ CHAR8 *Ruler2 = "FEDCBA9876543210FEDCBA9876543210"
+ "FEDCBA9876543210FEDCBA9876543210";
+
+ if (mGuardedMemoryMap == NULL) {
+ return;
+ }
+
+ DEBUG ((DEBUG_INFO, "============================="
+ " Guarded Memory Bitmap "
+ "==============================\r\n"));
+ DEBUG ((DEBUG_INFO, " %a\r\n", Ruler1));
+ DEBUG ((DEBUG_INFO, " %a\r\n", Ruler2));
+
+
+ SetMem64 (Tables, sizeof(Tables), 0);
+ SetMem64 (Addresses, sizeof(Addresses), 0);
+ SetMem64 (Indices, sizeof(Indices), 0);
+
+ Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Tables[Level] = mGuardedMemoryMap;
+ Address = 0;
+ RepeatZero = 0;
+
+ while (TRUE) {
+ if (Indices[Level] > Entries[Level]) {
+
+ Tables[Level] = 0;
+ Level -= 1;
+ RepeatZero = 0;
+
+ DEBUG ((
+ DEBUG_INFO,
+ "========================================="
+ "=========================================\r\n"
+ ));
+
+ } else {
+
+ TableEntry = Tables[Level][Indices[Level]];
+ Address = Addresses[Level];
+
+ if (TableEntry == 0) {
+
+ if (Level == GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+ if (RepeatZero == 0) {
+ Uint64ToBinString(TableEntry, String);
+ DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
+ } else if (RepeatZero == 1) {
+ DEBUG ((DEBUG_INFO, "... : ...\r\n"));
+ }
+ RepeatZero += 1;
+ }
+
+ } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
+
+ Level += 1;
+ Tables[Level] = (UINT64 *)TableEntry;
+ Addresses[Level] = Address;
+ Indices[Level] = 0;
+ RepeatZero = 0;
+
+ continue;
+
+ } else {
+
+ RepeatZero = 0;
+ Uint64ToBinString(TableEntry, String);
+ DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
+
+ }
+ }
+
+ if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
+ break;
+ }
+
+ Indices[Level] += 1;
+ Address = (Level == 0) ? 0 : Addresses[Level - 1];
+ Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
+
+ }
+}
+
+/**
+ Debug function used to verify if the Guard page is well set or not
+
+ @param[in] BaseAddress Address of memory to check
+ @param[in] NumberOfPages Size of memory in pages
+
+ @return TRUE The head Guard and tail Guard are both well set
+ @return FALSE The head Guard and/or tail Guard are not well set
+**/
+BOOLEAN
+VerifyMemoryGuard (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINTN NumberOfPages
+ )
+{
+ UINT64 *PageEntry;
+ PAGE_ATTRIBUTE Attribute;
+ EFI_PHYSICAL_ADDRESS Address;
+
+ if (!mIsSmmCpuMode) {
+ return TRUE;
+ }
+
+ Address = BaseAddress - EFI_PAGE_SIZE;
+ PageEntry = GetPageTableEntry (Address, &Attribute);
+ if (PageEntry == NULL || Attribute != Page4K) {
+ DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx!!!\r\n", Address));
+ DumpGuardedMemoryBitmap ();
+ return FALSE;
+ }
+
+ if ((*PageEntry & IA32_PG_P) != 0) {
+ DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx (%016lX)!!!\r\n",
+ Address, *PageEntry));
+ *(UINT8 *) Address = 0;
+ DumpGuardedMemoryBitmap ();
+ return FALSE;
+ }
+
+ Address = BaseAddress + EFI_PAGES_TO_SIZE (NumberOfPages);
+ PageEntry = GetPageTableEntry (Address, &Attribute);
+ if (PageEntry == NULL || Attribute != Page4K) {
+ DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx!!!\r\n", Address));
+ DumpGuardedMemoryBitmap ();
+ return FALSE;
+ }
+
+ if ((*PageEntry & IA32_PG_P) != 0) {
+ DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx (%016lX)!!!\r\n",
+ Address, *PageEntry));
+ *(UINT8 *) Address = 0;
+ DumpGuardedMemoryBitmap ();
+ return FALSE;
+ }
+
+ return TRUE;
+}
+
diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
new file mode 100644
index 0000000000..ecc10e83a7
--- /dev/null
+++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
@@ -0,0 +1,395 @@
+/** @file
+ Data structure and functions to allocate and free memory space.
+
+Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+This program and the accompanying materials
+are licensed and made available under the terms and conditions of the BSD License
+which accompanies this distribution. The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.php
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#ifndef _HEAPGUARD_H_
+#define _HEAPGUARD_H_
+
+#include "PiSmmCore.h"
+#include "PageTable.h"
+
+//
+// Following macros are used to define and access the guarded memory bitmap
+// table.
+//
+// To simplify the access and reduce the memory used for this table, the
+// table is constructed in the similar way as page table structure but in
+// reverse direction, i.e. from bottom growing up to top.
+//
+// - 1-bit tracks 1 page (4KB)
+// - 1-UINT64 map entry tracks 256KB memory
+// - 1K-UINT64 map table tracks 256MB memory
+// - Five levels of tables can track any address of memory of 64-bit
+// system, like below.
+//
+// 512 * 512 * 512 * 512 * 1K * 64b * 4K
+// 111111111 111111111 111111111 111111111 1111111111 111111 111111111111
+// 63 54 45 36 27 17 11 0
+// 9b 9b 9b 9b 10b 6b 12b
+// L0 -> L1 -> L2 -> L3 -> L4 -> bits -> page
+// 1FF 1FF 1FF 1FF 3FF 3F FFF
+//
+// L4 table has 1K * sizeof(UINT64) = 8K (2-page), which can track 256MB
+// memory. Each table of L0-L3 will be allocated when its memory address
+// range is to be tracked. Only 1-page will be allocated each time. This
+// can save memories used to establish this map table.
+//
+// For a normal configuration of a system with 4G memory, two levels of
+// tables can track the whole memory, because two levels (L3+L4) of map
+// tables have already covered 37 bits of memory address. And for a normal
+// UEFI BIOS, less than 128M memory would be consumed during boot. That
+// means we just need
+//
+// 1-page (L3) + 2-page (L4)
+//
+// memory (3 pages) to track the memory allocations. In this case, there's
+// no need to set up L0-L2 tables.
+//
+
+//
+// Each entry occupies 8B/64b. 1-page can hold 512 entries, which spans 9
+// bits in address. (512 = 1 << 9)
+//
+#define BYTE_LENGTH_SHIFT 3 // (8 = 1 << 3)
+
+#define GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT \
+ (EFI_PAGE_SHIFT - BYTE_LENGTH_SHIFT)
+
+#define GUARDED_HEAP_MAP_TABLE_DEPTH 5
+
+// Use UINT64_index + bit_index_of_UINT64 to locate the bit in the map
+#define GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT 6 // (64 = 1 << 6)
+
+#define GUARDED_HEAP_MAP_ENTRY_BITS \
+ (1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)
+
+#define GUARDED_HEAP_MAP_ENTRY_BYTES \
+ (GUARDED_HEAP_MAP_ENTRY_BITS / 8)
+
+// L4 table address width: 64 - 9 * 4 - 6 - 12 = 10b
+#define GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ (GUARDED_HEAP_MAP_ENTRY_BITS \
+ - GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 4 \
+ - GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ - EFI_PAGE_SHIFT)
+
+// L4 table address mask: (1 << 10 - 1) = 0x3FF
+#define GUARDED_HEAP_MAP_ENTRY_MASK \
+ ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1)
+
+// Size of each L4 table: (1 << 10) * 8 = 8KB = 2-page
+#define GUARDED_HEAP_MAP_SIZE \
+ ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) * GUARDED_HEAP_MAP_ENTRY_BYTES)
+
+// Memory size tracked by one L4 table: 8KB * 8 * 4KB = 256MB
+#define GUARDED_HEAP_MAP_UNIT_SIZE \
+ (GUARDED_HEAP_MAP_SIZE * 8 * EFI_PAGE_SIZE)
+
+// L4 table entry number: 8KB / 8 = 1024
+#define GUARDED_HEAP_MAP_ENTRIES_PER_UNIT \
+ (GUARDED_HEAP_MAP_SIZE / GUARDED_HEAP_MAP_ENTRY_BYTES)
+
+// L4 table entry indexing
+#define GUARDED_HEAP_MAP_ENTRY_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) \
+ & GUARDED_HEAP_MAP_ENTRY_MASK)
+
+// L4 table entry bit indexing
+#define GUARDED_HEAP_MAP_ENTRY_BIT_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT) \
+ & ((1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) - 1))
+
+//
+// Total bits (pages) tracked by one L4 table (65536-bit)
+//
+#define GUARDED_HEAP_MAP_BITS \
+ (1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT))
+
+//
+// Bit indexing inside the whole L4 table (0 - 65535)
+//
+#define GUARDED_HEAP_MAP_BIT_INDEX(Address) \
+ (RShiftU64 (Address, EFI_PAGE_SHIFT) \
+ & ((1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
+ + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)) - 1))
+
+//
+// Memory address bit width tracked by L4 table: 10 + 6 + 12 = 28
+//
+#define GUARDED_HEAP_MAP_TABLE_SHIFT \
+ (GUARDED_HEAP_MAP_ENTRY_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ + EFI_PAGE_SHIFT)
+
+//
+// Macro used to initialize the local array variable for map table traversing
+// {55, 46, 37, 28, 18}
+//
+#define GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS \
+ { \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 3, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 2, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT, \
+ GUARDED_HEAP_MAP_TABLE_SHIFT, \
+ EFI_PAGE_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
+ }
+
+//
+// Masks used to extract address range of each level of table
+// {0x1FF, 0x1FF, 0x1FF, 0x1FF, 0x3FF}
+//
+#define GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS \
+ { \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
+ (1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1 \
+ }
+
+//
+// Memory type to guard (matching the related PCD definition)
+//
+#define GUARD_HEAP_TYPE_POOL BIT2
+#define GUARD_HEAP_TYPE_PAGE BIT3
+
+typedef struct {
+ UINT32 TailMark;
+ UINT32 HeadMark;
+ EFI_PHYSICAL_ADDRESS Address;
+ LIST_ENTRY Link;
+} HEAP_GUARD_NODE;
+
+/**
+ Set head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to set guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+SetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ );
+
+/**
+ Unset head Guard and tail Guard for the given memory range
+
+ @param[in] Memory Base address of memory to unset guard for
+ @param[in] NumberOfPages Memory size in pages
+
+ @return VOID
+**/
+VOID
+UnsetGuardForMemory (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages
+ );
+
+/**
+ Adjust the base and number of pages to really allocate according to Guard
+
+ @param[in/out] Memory Base address of free memory
+ @param[in/out] NumberOfPages Size of memory to allocate
+
+ @return VOID
+**/
+VOID
+AdjustMemoryA (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ );
+
+/**
+ Adjust the start address and number of pages to free according to Guard
+
+ The purpose of this function is to keep the shared Guard page with adjacent
+ memory block if it's still in guard, or free it if no more sharing. Another
+ is to reserve pages as Guard pages in partial page free situation.
+
+ @param[in/out] Memory Base address of memory to free
+ @param[in/out] NumberOfPages Size of memory to free
+
+ @return VOID
+**/
+VOID
+AdjustMemoryF (
+ IN OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN OUT UINTN *NumberOfPages
+ );
+
+/**
+ Check to see if a pool allocation of the given memory type should be guarded or not
+
+ @param[in] MemoryType Pool type to check
+
+
+ @return TRUE The given type of pool should be guarded
+ @return FALSE The given type of pool should not be guarded
+**/
+BOOLEAN
+IsPoolTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType
+ );
+
+/**
+ Check to see if a page allocation of the given memory type should be guarded or not
+
+ @param[in] MemoryType Page type to check
+ @param[in] AllocateType Allocation type to check
+
+ @return TRUE The given type of page should be guarded
+ @return FALSE The given type of page should not be guarded
+**/
+BOOLEAN
+IsPageTypeToGuard (
+ IN EFI_MEMORY_TYPE MemoryType,
+ IN EFI_ALLOCATE_TYPE AllocateType
+ );
+
+/**
+ Check to see if the page at the given address is guarded or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is guarded
+ @return FALSE The page at Address is not guarded
+**/
+BOOLEAN
+EFIAPI
+IsMemoryGuarded (
+ IN EFI_PHYSICAL_ADDRESS Address
+ );
+
+/**
+ Check to see if the page at the given address is a Guard page or not
+
+ @param[in] Address The address to check for
+
+ @return TRUE The page at Address is a Guard page
+ @return FALSE The page at Address is not a Guard page
+**/
+BOOLEAN
+EFIAPI
+IsGuardPage (
+ IN EFI_PHYSICAL_ADDRESS Address
+ );
+
+/**
+ Dump the guarded memory bit map
+
+ @return VOID
+**/
+VOID
+EFIAPI
+DumpGuardedMemoryBitmap (
+ VOID
+ );
+
+/**
+ Adjust the pool head position to make sure the Guard page is adjacent to
+ pool tail or pool head.
+
+ @param[in] Memory Base address of memory allocated
+ @param[in] NoPages Number of pages actually allocated
+ @param[in] Size Size of memory requested
+ (plus pool head/tail overhead)
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadA (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NoPages,
+ IN UINTN Size
+ );
+
+/**
+ Get the page base address according to pool head address
+
+ @param[in] Memory Head address of pool to free
+
+ @return Address of pool head
+**/
+VOID *
+AdjustPoolHeadF (
+ IN EFI_PHYSICAL_ADDRESS Memory
+ );
+
+/**
+ Helper function of memory allocation with Guard pages
+
+ @param FreePageList The free page node.
+ @param NumberOfPages Number of pages to be allocated.
+ @param MaxAddress Request to allocate memory below this address.
+ @param MemoryType Type of memory requested.
+
+ @return Memory address of allocated pages.
+**/
+UINTN
+InternalAllocMaxAddressWithGuard (
+ IN OUT LIST_ENTRY *FreePageList,
+ IN UINTN NumberOfPages,
+ IN UINTN MaxAddress,
+ IN EFI_MEMORY_TYPE MemoryType
+ );
+
+/**
+ Helper function of memory free with Guard pages
+
+ @param[in] Memory Base address of memory being freed.
+ @param[in] NumberOfPages The number of pages to free.
+ @param[in] AddRegion If this memory is a newly added region.
+
+ @retval EFI_NOT_FOUND Could not find the entry that covers the range.
+ @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
+ @return EFI_SUCCESS Pages successfully freed.
+**/
+EFI_STATUS
+SmmInternalFreePagesExWithGuard (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages,
+ IN BOOLEAN AddRegion
+ );
+
+/**
+ Check to see if the heap guard is enabled for page and/or pool allocation
+
+ @return TRUE if the heap guard is enabled for page and/or pool allocation; FALSE otherwise
+**/
+BOOLEAN
+IsHeapGuardEnabled (
+ VOID
+ );
+
+/**
+ Debug function used to verify if the Guard page is well set or not
+
+ @param[in] BaseAddress Address of memory to check
+ @param[in] NumberOfPages Size of memory in pages
+
+ @return TRUE The head Guard and tail Guard are both well set
+ @return FALSE The head Guard and/or tail Guard are not well set
+**/
+BOOLEAN
+VerifyMemoryGuard (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINTN NumberOfPages
+ );
+
+extern BOOLEAN mOnGuarding;
+
+#endif
diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
new file mode 100644
index 0000000000..d41b3e923f
--- /dev/null
+++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
@@ -0,0 +1,704 @@
+/** @file
+
+Copyright (c) 2016 - 2017, Intel Corporation. All rights reserved.<BR>
+This program and the accompanying materials
+are licensed and made available under the terms and conditions of the BSD License
+which accompanies this distribution. The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.php
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#include "PiSmmCore.h"
+#include "PageTable.h"
+
+#include <Library/CpuLib.h>
+
+UINT64 mAddressEncMask = 0;
+UINT8 mPhysicalAddressBits = 32;
+
+PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
+ {PageNone, 0, 0},
+ {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64},
+ {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64},
+ {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
+};
+
+/**
+ Calculate the maximum supported physical address.
+
+ @return The maximum supported physical address.
+**/
+UINT8
+CalculateMaximumSupportAddress (
+ VOID
+ )
+{
+ UINT32 RegEax;
+ UINT8 PhysicalAddressBits;
+ VOID *Hob;
+
+ //
+ // Get physical address bits supported.
+ //
+ Hob = GetFirstHob (EFI_HOB_TYPE_CPU);
+ if (Hob != NULL) {
+ PhysicalAddressBits = ((EFI_HOB_CPU *) Hob)->SizeOfMemorySpace;
+ } else {
+ AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
+ if (RegEax >= 0x80000008) {
+ AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
+ PhysicalAddressBits = (UINT8) RegEax;
+ } else {
+ PhysicalAddressBits = 36;
+ }
+ }
+
+ //
+ // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses.
+ //
+ ASSERT (PhysicalAddressBits <= 52);
+ if (PhysicalAddressBits > 48) {
+ PhysicalAddressBits = 48;
+ }
+ return PhysicalAddressBits;
+}
+
+/**
+ Return page table base.
+
+ @return page table base.
+**/
+UINTN
+GetPageTableBase (
+ VOID
+ )
+{
+ return (AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64);
+}
+
+/**
+ Return length according to page attributes.
+
+ @param[in] PageAttribute The page attribute of the page entry.
+
+ @return The length of page entry.
+**/
+UINTN
+PageAttributeToLength (
+ IN PAGE_ATTRIBUTE PageAttribute
+ )
+{
+ if (PageAttribute <= Page1G) {
+ return (UINTN)mPageAttributeTable[PageAttribute].Length;
+ }
+ return 0;
+}
+
+/**
+ Return address mask according to page attributes.
+
+ @param[in] PageAttribute The page attribute of the page entry.
+
+ @return The address mask of page entry.
+**/
+UINTN
+PageAttributeToMask (
+ IN PAGE_ATTRIBUTE PageAttribute
+ )
+{
+ if (PageAttribute <= Page1G) {
+ return (UINTN)mPageAttributeTable[PageAttribute].AddressMask;
+ }
+ return 0;
+}
+
+/**
+ Return page table entry to match the address.
+
+ @param[in] Address The address to be checked.
+ @param[out] PageAttribute The page attribute of the page entry.
+
+ @return The page entry.
+**/
+VOID *
+GetPageTableEntry (
+ IN PHYSICAL_ADDRESS Address,
+ OUT PAGE_ATTRIBUTE *PageAttribute
+ )
+{
+ UINTN Index1;
+ UINTN Index2;
+ UINTN Index3;
+ UINTN Index4;
+ UINT64 *L1PageTable;
+ UINT64 *L2PageTable;
+ UINT64 *L3PageTable;
+ UINT64 *L4PageTable;
+
+ Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_PAE_INDEX_MASK;
+ Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK;
+ Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK;
+ Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK;
+
+ if (sizeof(UINTN) == sizeof(UINT64)) {
+ L4PageTable = (UINT64 *)GetPageTableBase ();
+ if (L4PageTable[Index4] == 0) {
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+
+ L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
+ } else {
+ L3PageTable = (UINT64 *)GetPageTableBase ();
+ }
+ if (L3PageTable[Index3] == 0) {
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ if ((L3PageTable[Index3] & IA32_PG_PS) != 0) {
+ // 1G
+ *PageAttribute = Page1G;
+ return &L3PageTable[Index3];
+ }
+
+ L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
+ if (L2PageTable[Index2] == 0) {
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ if ((L2PageTable[Index2] & IA32_PG_PS) != 0) {
+ // 2M
+ *PageAttribute = Page2M;
+ return &L2PageTable[Index2];
+ }
+
+ // 4k
+ L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
+ if ((L1PageTable[Index1] == 0) && (Address != 0)) {
+ *PageAttribute = PageNone;
+ return NULL;
+ }
+ *PageAttribute = Page4K;
+ return &L1PageTable[Index1];
+}
+
+/**
+ Return memory attributes of page entry.
+
+ @param[in] PageEntry The page entry.
+
+ @return Memory attributes of page entry.
+**/
+UINT64
+GetAttributesFromPageEntry (
+ IN UINT64 *PageEntry
+ )
+{
+ UINT64 Attributes;
+ Attributes = 0;
+ if ((*PageEntry & IA32_PG_P) == 0) {
+ Attributes |= EFI_MEMORY_RP;
+ }
+ if ((*PageEntry & IA32_PG_RW) == 0) {
+ Attributes |= EFI_MEMORY_RO;
+ }
+ if ((*PageEntry & IA32_PG_NX) != 0) {
+ Attributes |= EFI_MEMORY_XP;
+ }
+ return Attributes;
+}
+
+/**
+ Modify memory attributes of page entry.
+
+ @param[in] PageEntry The page entry.
+ @param[in] Attributes The bit mask of attributes to modify for the memory region.
+ @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
+ @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
+**/
+VOID
+ConvertPageEntryAttribute (
+ IN UINT64 *PageEntry,
+ IN UINT64 Attributes,
+ IN BOOLEAN IsSet,
+ OUT BOOLEAN *IsModified
+ )
+{
+ UINT64 CurrentPageEntry;
+ UINT64 NewPageEntry;
+
+ CurrentPageEntry = *PageEntry;
+ NewPageEntry = CurrentPageEntry;
+ if ((Attributes & EFI_MEMORY_RP) != 0) {
+ if (IsSet) {
+ NewPageEntry &= ~(UINT64)IA32_PG_P;
+ } else {
+ NewPageEntry |= IA32_PG_P;
+ }
+ }
+ if ((Attributes & EFI_MEMORY_RO) != 0) {
+ if (IsSet) {
+ NewPageEntry &= ~(UINT64)IA32_PG_RW;
+ } else {
+ NewPageEntry |= IA32_PG_RW;
+ }
+ }
+ if ((Attributes & EFI_MEMORY_XP) != 0) {
+ if (IsSet) {
+ NewPageEntry |= IA32_PG_NX;
+ } else {
+ NewPageEntry &= ~IA32_PG_NX;
+ }
+ }
+
+ if (CurrentPageEntry != NewPageEntry) {
+ *PageEntry = NewPageEntry;
+ *IsModified = TRUE;
+ DEBUG ((DEBUG_INFO, "(SMM)ConvertPageEntryAttribute 0x%lx", CurrentPageEntry));
+ DEBUG ((DEBUG_INFO, "->0x%lx\n", NewPageEntry));
+ } else {
+ *IsModified = FALSE;
+ }
+}
+
+/**
+ This function checks whether the given page entry needs to be split.
+
+ @param[in] BaseAddress The base address to be checked.
+ @param[in] Length The length to be checked.
+ @param[in] PageEntry The page entry to be checked.
+ @param[in] PageAttribute The page attribute of the page entry.
+
+ @return The page attribute granularity to split to, or PageNone if no split is needed.
+**/
+PAGE_ATTRIBUTE
+NeedSplitPage (
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 *PageEntry,
+ IN PAGE_ATTRIBUTE PageAttribute
+ )
+{
+ UINT64 PageEntryLength;
+
+ PageEntryLength = PageAttributeToLength (PageAttribute);
+
+ if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
+ return PageNone;
+ }
+
+ if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
+ return Page4K;
+ }
+
+ return Page2M;
+}
+
+/**
+ This function splits one page entry to small page entries.
+
+ @param[in] PageEntry The page entry to be split.
+ @param[in] PageAttribute The page attribute of the page entry.
+ @param[in] SplitAttribute How to split the page entry.
+
+ @retval RETURN_SUCCESS The page entry was split.
+ @retval RETURN_UNSUPPORTED The page entry does not support being split.
+ @retval RETURN_OUT_OF_RESOURCES No resource to split page entry.
+**/
+RETURN_STATUS
+SplitPage (
+ IN UINT64 *PageEntry,
+ IN PAGE_ATTRIBUTE PageAttribute,
+ IN PAGE_ATTRIBUTE SplitAttribute
+ )
+{
+ UINT64 BaseAddress;
+ UINT64 *NewPageEntry;
+ UINTN Index;
+
+ ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
+
+ if (PageAttribute == Page2M) {
+ //
+ // Split 2M to 4K
+ //
+ ASSERT (SplitAttribute == Page4K);
+ if (SplitAttribute == Page4K) {
+ NewPageEntry = PageAlloc (1);
+ DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
+ if (NewPageEntry == NULL) {
+ return RETURN_OUT_OF_RESOURCES;
+ }
+ BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
+ for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+ NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
+ }
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
+ return RETURN_SUCCESS;
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+ } else if (PageAttribute == Page1G) {
+ //
+ // Split 1G to 2M
+ // There is no need to support splitting 1G->4K directly; use 1G->2M and then 2M->4K to keep the page table compact.
+ //
+ ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
+ if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
+ NewPageEntry = PageAlloc (1);
+ DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
+ if (NewPageEntry == NULL) {
+ return RETURN_OUT_OF_RESOURCES;
+ }
+ BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
+ for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
+ NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
+ }
+ (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
+ return RETURN_SUCCESS;
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+ } else {
+ return RETURN_UNSUPPORTED;
+ }
+}
+
+/**
+ This function modifies the page attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ Caller should make sure BaseAddress and Length is at page boundary.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to modify for the memory region.
+ @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
+ @param[out] IsSplitted TRUE means the page table was split. FALSE means the page table was not split.
+ @param[out] IsModified TRUE means the page table was modified. FALSE means the page table was not modified.
+
+ @retval RETURN_SUCCESS The attributes were modified for the memory region.
+ @retval RETURN_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval RETURN_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval RETURN_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval RETURN_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+**/
+RETURN_STATUS
+EFIAPI
+ConvertMemoryPageAttributes (
+ IN PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes,
+ IN BOOLEAN IsSet,
+ OUT BOOLEAN *IsSplitted, OPTIONAL
+ OUT BOOLEAN *IsModified OPTIONAL
+ )
+{
+ UINT64 *PageEntry;
+ PAGE_ATTRIBUTE PageAttribute;
+ UINTN PageEntryLength;
+ PAGE_ATTRIBUTE SplitAttribute;
+ RETURN_STATUS Status;
+ BOOLEAN IsEntryModified;
+ EFI_PHYSICAL_ADDRESS MaximumSupportMemAddress;
+
+ ASSERT (Attributes != 0);
+ ASSERT ((Attributes & ~(EFI_MEMORY_RP | EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0);
+
+ ASSERT ((BaseAddress & (SIZE_4KB - 1)) == 0);
+ ASSERT ((Length & (SIZE_4KB - 1)) == 0);
+
+ if (Length == 0) {
+ return RETURN_INVALID_PARAMETER;
+ }
+
+ MaximumSupportMemAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)(LShiftU64 (1, mPhysicalAddressBits) - 1);
+ if (BaseAddress > MaximumSupportMemAddress) {
+ return RETURN_UNSUPPORTED;
+ }
+ if (Length > MaximumSupportMemAddress) {
+ return RETURN_UNSUPPORTED;
+ }
+ if ((Length != 0) && (BaseAddress > MaximumSupportMemAddress - (Length - 1))) {
+ return RETURN_UNSUPPORTED;
+ }
+
+// DEBUG ((DEBUG_ERROR, "ConvertMemoryPageAttributes(%x) - %016lx, %016lx, %02lx\n", IsSet, BaseAddress, Length, Attributes));
+
+ if (IsSplitted != NULL) {
+ *IsSplitted = FALSE;
+ }
+ if (IsModified != NULL) {
+ *IsModified = FALSE;
+ }
+
+ //
+ // The logic below checks 2MB/4KB pages to make sure we do not waste memory.
+ //
+ while (Length != 0) {
+ PageEntry = GetPageTableEntry (BaseAddress, &PageAttribute);
+ if (PageEntry == NULL) {
+ return RETURN_UNSUPPORTED;
+ }
+ PageEntryLength = PageAttributeToLength (PageAttribute);
+ SplitAttribute = NeedSplitPage (BaseAddress, Length, PageEntry, PageAttribute);
+ if (SplitAttribute == PageNone) {
+ ConvertPageEntryAttribute (PageEntry, Attributes, IsSet, &IsEntryModified);
+ if (IsEntryModified) {
+ if (IsModified != NULL) {
+ *IsModified = TRUE;
+ }
+ }
+ //
+ // Convert success, move to next
+ //
+ BaseAddress += PageEntryLength;
+ Length -= PageEntryLength;
+ } else {
+ Status = SplitPage (PageEntry, PageAttribute, SplitAttribute);
+ if (RETURN_ERROR (Status)) {
+ return RETURN_UNSUPPORTED;
+ }
+ if (IsSplitted != NULL) {
+ *IsSplitted = TRUE;
+ }
+ if (IsModified != NULL) {
+ *IsModified = TRUE;
+ }
+ //
+ // Just split the current page;
+ // the conversion will succeed in the next round.
+ //
+ }
+ }
+
+ return RETURN_SUCCESS;
+}
+
+/**
+ FlushTlb on current processor.
+
+ @param[in,out] Buffer Pointer to private data buffer.
+**/
+VOID
+EFIAPI
+FlushTlbOnCurrentProcessor (
+ IN OUT VOID *Buffer
+ )
+{
+ CpuFlushTlb ();
+}
+
+/**
+ FlushTlb for all processors.
+**/
+VOID
+FlushTlbForAll (
+ VOID
+ )
+{
+ UINTN Index;
+
+ FlushTlbOnCurrentProcessor (NULL);
+
+ if (gSmmCoreSmst.SmmStartupThisAp == NULL) {
+ DEBUG ((DEBUG_WARN, "Cannot flush TLB for APs\r\n"));
+ return;
+ }
+
+ for (Index = 0; Index < gSmmCoreSmst.NumberOfCpus; Index++) {
+ if (Index != gSmmCoreSmst.CurrentlyExecutingCpu) {
+ // Force the AP to start up in blocking mode.
+ gSmmCoreSmst.SmmStartupThisAp (FlushTlbOnCurrentProcessor, Index, NULL);
+ // Do not check return status, because AP might not be present in some corner cases.
+ }
+ }
+}
+
+/**
+ This function sets the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to set for the memory region.
+ @param[out] IsSplitted TRUE means the page table was split. FALSE means the page table was not split.
+
+ @retval EFI_SUCCESS The attributes were set for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmSetMemoryAttributesEx (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes,
+ OUT BOOLEAN *IsSplitted OPTIONAL
+ )
+{
+ EFI_STATUS Status;
+ BOOLEAN IsModified;
+
+ Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes, TRUE, IsSplitted, &IsModified);
+ if (!EFI_ERROR(Status)) {
+ if (IsModified) {
+ //
+ // Flush TLB as last step
+ //
+ FlushTlbForAll();
+ }
+ }
+
+ return Status;
+}
+
+/**
+ This function clears the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to clear for the memory region.
+ @param[out] IsSplitted TRUE means the page table was split. FALSE means the page table was not split.
+
+ @retval EFI_SUCCESS The attributes were cleared for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmClearMemoryAttributesEx (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes,
+ OUT BOOLEAN *IsSplitted OPTIONAL
+ )
+{
+ EFI_STATUS Status;
+ BOOLEAN IsModified;
+
+ Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes, FALSE, IsSplitted, &IsModified);
+ if (!EFI_ERROR(Status)) {
+ if (IsModified) {
+ //
+ // Flush TLB as last step
+ //
+ FlushTlbForAll();
+ }
+ }
+
+ return Status;
+}
+
+/**
+ This function sets the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to set for the memory region.
+
+ @retval EFI_SUCCESS The attributes were set for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmSetMemoryAttributes (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes
+ )
+{
+ return SmmSetMemoryAttributesEx (BaseAddress, Length, Attributes, NULL);
+}
+
+/**
+ This function clears the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to clear for the memory region.
+
+ @retval EFI_SUCCESS The attributes were cleared for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmClearMemoryAttributes (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes
+ )
+{
+ return SmmClearMemoryAttributesEx (BaseAddress, Length, Attributes, NULL);
+}
+
+/**
+ Initialize the Page Table lib.
+**/
+VOID
+InitializePageTableLib (
+ VOID
+ )
+{
+ mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
+ mPhysicalAddressBits = CalculateMaximumSupportAddress ();
+ DEBUG ((DEBUG_INFO, "mAddressEncMask = 0x%lx\r\n", mAddressEncMask));
+ DEBUG ((DEBUG_INFO, "mPhysicalAddressBits = %d\r\n", mPhysicalAddressBits));
+ return ;
+}
+
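
As an aside, a minimal sketch of how the attribute helpers above are expected to
be driven by the heap guard code when hiding and restoring a single Guard page;
GuardPage is a hypothetical 4KB-aligned address, not a variable from the patch:

    //
    // Hide the Guard page: clearing the present bit makes any access fault.
    //
    Status = SmmSetMemoryAttributes (GuardPage, SIZE_4KB, EFI_MEMORY_RP);
    ASSERT_EFI_ERROR (Status);

    //
    // Restore it when the guarded allocation is freed.
    //
    Status = SmmClearMemoryAttributes (GuardPage, SIZE_4KB, EFI_MEMORY_RP);
    ASSERT_EFI_ERROR (Status);
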
diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
new file mode 100644
index 0000000000..61a64af370
--- /dev/null
+++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
@@ -0,0 +1,174 @@
+/** @file
+ Page table management header file.
+
+ Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials
+ are licensed and made available under the terms and conditions of the BSD License
+ which accompanies this distribution. The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.php
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+**/
+
+#ifndef _PAGE_TABLE_LIB_H_
+#define _PAGE_TABLE_LIB_H_
+
+///
+/// Page Table Entry
+///
+#define IA32_PG_P BIT0
+#define IA32_PG_RW BIT1
+#define IA32_PG_U BIT2
+#define IA32_PG_WT BIT3
+#define IA32_PG_CD BIT4
+#define IA32_PG_A BIT5
+#define IA32_PG_D BIT6
+#define IA32_PG_PS BIT7
+#define IA32_PG_PAT_2M BIT12
+#define IA32_PG_PAT_4K IA32_PG_PS
+#define IA32_PG_PMNT BIT62
+#define IA32_PG_NX BIT63
+
+#define PAGE_ATTRIBUTE_BITS (IA32_PG_D | IA32_PG_A | IA32_PG_U | IA32_PG_RW | IA32_PG_P)
+//
+// Bits 1, 2, 5, 6 are reserved in the IA32 PAE PDPTE
+// X64 PAE PDPTE does not have such restriction
+//
+#define IA32_PAE_PDPTE_ATTRIBUTE_BITS (IA32_PG_P)
+
+#define PAGE_PROGATE_BITS (IA32_PG_NX | PAGE_ATTRIBUTE_BITS)
+
+#define PAGING_4K_MASK 0xFFF
+#define PAGING_2M_MASK 0x1FFFFF
+#define PAGING_1G_MASK 0x3FFFFFFF
+
+#define PAGING_PAE_INDEX_MASK 0x1FF
+
+#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
+#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
+#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
+
+#define SMRR_MAX_ADDRESS BASE_4GB
+
+typedef enum {
+ PageNone = 0,
+ Page4K,
+ Page2M,
+ Page1G,
+} PAGE_ATTRIBUTE;
+
+typedef struct {
+ PAGE_ATTRIBUTE Attribute;
+ UINT64 Length;
+ UINT64 AddressMask;
+} PAGE_ATTRIBUTE_TABLE;
+
+/**
+ Helper function to allocate pages without Guard for internal uses
+
+ @param[in] Pages Page number
+
+ @return Address of memory allocated
+**/
+VOID *
+PageAlloc (
+ IN UINTN Pages
+ );
+
+/**
+ This function sets the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to set for the memory region.
+
+ @retval EFI_SUCCESS The attributes were set for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmSetMemoryAttributes (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes
+ );
+
+/**
+ This function clears the attributes for the memory region specified by BaseAddress and
+ Length from their current attributes to the attributes specified by Attributes.
+
+ @param[in] BaseAddress The physical address that is the start address of a memory region.
+ @param[in] Length The size in bytes of the memory region.
+ @param[in] Attributes The bit mask of attributes to clear for the memory region.
+
+ @retval EFI_SUCCESS The attributes were cleared for the memory region.
+ @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
+ BaseAddress and Length cannot be modified.
+ @retval EFI_INVALID_PARAMETER Length is zero.
+ Attributes specified an illegal combination of attributes that
+ cannot be set together.
+ @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
+ the memory resource range.
+ @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
+ resource range specified by BaseAddress and Length.
+ The bit mask of attributes is not supported for the memory resource
+ range specified by BaseAddress and Length.
+
+**/
+EFI_STATUS
+EFIAPI
+SmmClearMemoryAttributes (
+ IN EFI_PHYSICAL_ADDRESS BaseAddress,
+ IN UINT64 Length,
+ IN UINT64 Attributes
+ );
+
+/**
+ Initialize the Page Table lib.
+**/
+VOID
+InitializePageTableLib (
+ VOID
+ );
+
+/**
+ Return page table base.
+
+ @return page table base.
+**/
+UINTN
+GetPageTableBase (
+ VOID
+ );
+
+/**
+ Return page table entry to match the address.
+
+ @param[in] Address The address to be checked.
+ @param[out] PageAttribute The page attribute of the page entry.
+
+ @return The page entry.
+**/
+VOID *
+GetPageTableEntry (
+ IN PHYSICAL_ADDRESS Address,
+ OUT PAGE_ATTRIBUTE *PageAttribute
+ );
+
+#endif
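
A short, hypothetical probe built on the declarations above, useful when
debugging whether a suspected Guard page is really unmapped; GuardPage is a
placeholder address, and GetAttributesFromPageEntry would need a prototype here
since it is defined in PageTable.c but not declared in this header:

    UINT64          *PageEntry;
    PAGE_ATTRIBUTE  PageAttribute;

    PageEntry = GetPageTableEntry (GuardPage, &PageAttribute);
    if (PageEntry == NULL ||
        (GetAttributesFromPageEntry (PageEntry) & EFI_MEMORY_RP) != 0) {
      //
      // The page is unmapped or marked not-present, as expected for a Guard page.
      //
    }
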
diff --git a/MdeModulePkg/Core/PiSmmCore/Page.c b/MdeModulePkg/Core/PiSmmCore/Page.c
index 4154c2e6a1..29d1311f5a 100644
--- a/MdeModulePkg/Core/PiSmmCore/Page.c
+++ b/MdeModulePkg/Core/PiSmmCore/Page.c
@@ -64,6 +64,8 @@ LIST_ENTRY mFreeMemoryMapEntryList = INITIALIZE_LIST_HEAD_VARIABLE (mFreeMemor
@param[out] Memory A pointer to receive the base allocated memory
address.
@param[in] AddRegion If this memory is new added region.
+ @param[in] NeedGuard Flag to indicate Guard page is needed
+ or not
@retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in spec.
@retval EFI_NOT_FOUND Could not allocate pages match the requirement.
@@ -77,7 +79,8 @@ SmmInternalAllocatePagesEx (
IN EFI_MEMORY_TYPE MemoryType,
IN UINTN NumberOfPages,
OUT EFI_PHYSICAL_ADDRESS *Memory,
- IN BOOLEAN AddRegion
+ IN BOOLEAN AddRegion,
+ IN BOOLEAN NeedGuard
);
/**
@@ -112,7 +115,8 @@ AllocateMemoryMapEntry (
EfiRuntimeServicesData,
EFI_SIZE_TO_PAGES (RUNTIME_PAGE_ALLOCATION_GRANULARITY),
&Mem,
- TRUE
+ TRUE,
+ FALSE
);
ASSERT_EFI_ERROR (Status);
if(!EFI_ERROR (Status)) {
@@ -688,6 +692,8 @@ InternalAllocAddress (
@param[out] Memory A pointer to receive the base allocated memory
address.
@param[in] AddRegion If this memory is new added region.
+ @param[in] NeedGuard Flag to indicate Guard page is needed
+ or not
@retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in spec.
@retval EFI_NOT_FOUND Could not allocate pages match the requirement.
@@ -701,7 +707,8 @@ SmmInternalAllocatePagesEx (
IN EFI_MEMORY_TYPE MemoryType,
IN UINTN NumberOfPages,
OUT EFI_PHYSICAL_ADDRESS *Memory,
- IN BOOLEAN AddRegion
+ IN BOOLEAN AddRegion,
+ IN BOOLEAN NeedGuard
)
{
UINTN RequestedAddress;
@@ -723,6 +730,21 @@ SmmInternalAllocatePagesEx (
case AllocateAnyPages:
RequestedAddress = (UINTN)(-1);
case AllocateMaxAddress:
+ if (NeedGuard) {
+ *Memory = InternalAllocMaxAddressWithGuard (
+ &mSmmMemoryMap,
+ NumberOfPages,
+ RequestedAddress,
+ MemoryType
+ );
+ if (*Memory == (UINTN)-1) {
+ return EFI_OUT_OF_RESOURCES;
+ } else {
+ ASSERT (VerifyMemoryGuard(*Memory, NumberOfPages) == TRUE);
+ return EFI_SUCCESS;
+ }
+ }
+
*Memory = InternalAllocMaxAddress (
&mSmmMemoryMap,
NumberOfPages,
@@ -766,6 +788,8 @@ SmmInternalAllocatePagesEx (
@param[in] NumberOfPages The number of pages to allocate.
@param[out] Memory A pointer to receive the base allocated memory
address.
+ @param[in] NeedGuard Flag to indicate Guard page is needed
+ or not
@retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in spec.
@retval EFI_NOT_FOUND Could not allocate pages match the requirement.
@@ -779,10 +803,12 @@ SmmInternalAllocatePages (
IN EFI_ALLOCATE_TYPE Type,
IN EFI_MEMORY_TYPE MemoryType,
IN UINTN NumberOfPages,
- OUT EFI_PHYSICAL_ADDRESS *Memory
+ OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN BOOLEAN NeedGuard
)
{
- return SmmInternalAllocatePagesEx (Type, MemoryType, NumberOfPages, Memory, FALSE);
+ return SmmInternalAllocatePagesEx (Type, MemoryType, NumberOfPages, Memory,
+ FALSE, NeedGuard);
}
/**
@@ -811,8 +837,11 @@ SmmAllocatePages (
)
{
EFI_STATUS Status;
+ BOOLEAN NeedGuard;
- Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory);
+ NeedGuard = IsPageTypeToGuard (MemoryType, Type);
+ Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory,
+ NeedGuard);
if (!EFI_ERROR (Status)) {
SmmCoreUpdateProfile (
(EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
@@ -941,9 +970,13 @@ EFI_STATUS
EFIAPI
SmmInternalFreePages (
IN EFI_PHYSICAL_ADDRESS Memory,
- IN UINTN NumberOfPages
+ IN UINTN NumberOfPages,
+ IN BOOLEAN IsGuarded
)
{
+ if (IsGuarded) {
+ return SmmInternalFreePagesExWithGuard (Memory, NumberOfPages, FALSE);
+ }
return SmmInternalFreePagesEx (Memory, NumberOfPages, FALSE);
}
@@ -966,8 +999,10 @@ SmmFreePages (
)
{
EFI_STATUS Status;
+ BOOLEAN IsGuarded;
- Status = SmmInternalFreePages (Memory, NumberOfPages);
+ IsGuarded = IsHeapGuardEnabled () && IsMemoryGuarded (Memory);
+ Status = SmmInternalFreePages (Memory, NumberOfPages, IsGuarded);
if (!EFI_ERROR (Status)) {
SmmCoreUpdateProfile (
(EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
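
Taken together, the page-allocation path added above reduces to the following
condensed sketch, assuming BIT2 of PcdHeapGuardPropertyMask is set and the
requested memory type is selected in PcdHeapGuardPageType (the variable names
are illustrative):

    NeedGuard = IsPageTypeToGuard (MemoryType, AllocateAnyPages);
    Status    = SmmInternalAllocatePages (
                  AllocateAnyPages,
                  MemoryType,
                  NumberOfPages,
                  &Memory,
                  NeedGuard
                  );

With NeedGuard set, SmmInternalAllocatePagesEx routes the request through
InternalAllocMaxAddressWithGuard and asserts VerifyMemoryGuard () on the result.
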
diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
index 9e4390e15a..b4609c2fed 100644
--- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
+++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
@@ -451,6 +451,11 @@ SmmEntryPoint (
//
PlatformHookBeforeSmmDispatch ();
+ //
+ // Call memory management hook function
+ //
+ SmmEntryPointMemoryManagementHook ();
+
//
// If a legacy boot has occured, then make sure gSmmCorePrivate is not accessed
//
@@ -644,7 +649,12 @@ SmmMain (
//
gSmmCorePrivate->Smst = &gSmmCoreSmst;
gSmmCorePrivate->SmmEntryPoint = SmmEntryPoint;
-
+
+ //
+ // Initialize page table operations
+ //
+ InitializePageTableLib();
+
//
// No need to initialize memory service.
// It is done in constructor of PiSmmCoreMemoryAllocationLib(),
diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
index b6f815c68d..8c61fdcf0c 100644
--- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
+++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
@@ -59,6 +59,7 @@
#include <Library/SmmMemLib.h>
#include "PiSmmCorePrivateData.h"
+#include "Misc/HeapGuard.h"
//
// Used to build a table of SMI Handlers that the SMM Core registers
@@ -317,6 +318,7 @@ SmmAllocatePages (
@param NumberOfPages The number of pages to allocate
@param Memory A pointer to receive the base allocated memory
address
+ @param NeedGuard Flag to indicate Guard page is needed or not
@retval EFI_INVALID_PARAMETER Parameters violate checking rules defined in spec.
@retval EFI_NOT_FOUND Could not allocate pages match the requirement.
@@ -330,7 +332,8 @@ SmmInternalAllocatePages (
IN EFI_ALLOCATE_TYPE Type,
IN EFI_MEMORY_TYPE MemoryType,
IN UINTN NumberOfPages,
- OUT EFI_PHYSICAL_ADDRESS *Memory
+ OUT EFI_PHYSICAL_ADDRESS *Memory,
+ IN BOOLEAN NeedGuard
);
/**
@@ -356,6 +359,8 @@ SmmFreePages (
@param Memory Base address of memory being freed
@param NumberOfPages The number of pages to free
+ @param IsGuarded Flag to indicate if the memory is guarded
+ or not
@retval EFI_NOT_FOUND Could not find the entry that covers the range
@retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
@@ -366,7 +371,8 @@ EFI_STATUS
EFIAPI
SmmInternalFreePages (
IN EFI_PHYSICAL_ADDRESS Memory,
- IN UINTN NumberOfPages
+ IN UINTN NumberOfPages,
+ IN BOOLEAN IsGuarded
);
/**
@@ -1231,4 +1237,74 @@ typedef enum {
extern LIST_ENTRY mSmmPoolLists[SmmPoolTypeMax][MAX_POOL_INDEX];
+/**
+ Internal Function. Allocate n pages from given free page node.
+
+ @param Pages The free page node.
+ @param NumberOfPages Number of pages to be allocated.
+ @param MaxAddress Request to allocate memory below this address.
+
+ @return Memory address of allocated pages.
+
+**/
+UINTN
+InternalAllocPagesOnOneNode (
+ IN OUT FREE_PAGE_LIST *Pages,
+ IN UINTN NumberOfPages,
+ IN UINTN MaxAddress
+ );
+
+/**
+ Update SMM memory map entry.
+
+ @param[in] Type The type of allocation to perform.
+ @param[in] Memory The base of memory address.
+ @param[in] NumberOfPages The number of pages to allocate.
+ @param[in] AddRegion If this memory is a newly added region.
+**/
+VOID
+ConvertSmmMemoryMapEntry (
+ IN EFI_MEMORY_TYPE Type,
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages,
+ IN BOOLEAN AddRegion
+ );
+
+/**
+ Internal function. Moves any memory descriptors that are on the
+ temporary descriptor stack to heap.
+
+**/
+VOID
+CoreFreeMemoryMapStack (
+ VOID
+ );
+
+/**
+ Frees previous allocated pages.
+
+ @param[in] Memory Base address of memory being freed.
+ @param[in] NumberOfPages The number of pages to free.
+ @param[in] AddRegion If this memory is a newly added region.
+
+ @retval EFI_NOT_FOUND Could not find the entry that covers the range.
+ @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
+ @return EFI_SUCCESS Pages successfully freed.
+
+**/
+EFI_STATUS
+SmmInternalFreePagesEx (
+ IN EFI_PHYSICAL_ADDRESS Memory,
+ IN UINTN NumberOfPages,
+ IN BOOLEAN AddRegion
+ );
+
+/**
+ Hook function used to set all Guard pages after entering SMM mode
+**/
+VOID
+SmmEntryPointMemoryManagementHook (
+ VOID
+ );
+
#endif
diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
index 49ae6fbb57..e505b165bc 100644
--- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
+++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
@@ -40,6 +40,8 @@
SmramProfileRecord.c
MemoryAttributesTable.c
SmiHandlerProfile.c
+ Misc/HeapGuard.c
+ Misc/PageTable.c
[Packages]
MdePkg/MdePkg.dec
@@ -65,6 +67,7 @@
HobLib
SmmMemLib
DxeServicesLib
+ CpuLib
[Protocols]
gEfiDxeSmmReadyToLockProtocolGuid ## UNDEFINED # SmiHandlerRegister
@@ -88,6 +91,7 @@
gEfiSmmGpiDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
gEfiSmmIoTrapDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
gEfiSmmUsbDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
+ gEfiSmmCpuProtocolGuid ## SOMETIMES_CONSUMES
[Pcd]
gEfiMdeModulePkgTokenSpaceGuid.PcdLoadFixAddressSmmCodePageNumber ## SOMETIMES_CONSUMES
@@ -96,6 +100,10 @@
gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfilePropertyMask ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfileDriverPath ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdSmiHandlerProfilePropertyMask ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
[Guids]
gAprioriGuid ## SOMETIMES_CONSUMES ## File
diff --git a/MdeModulePkg/Core/PiSmmCore/Pool.c b/MdeModulePkg/Core/PiSmmCore/Pool.c
index 36317563c4..1f9213ea6e 100644
--- a/MdeModulePkg/Core/PiSmmCore/Pool.c
+++ b/MdeModulePkg/Core/PiSmmCore/Pool.c
@@ -144,7 +144,9 @@ InternalAllocPoolByIndex (
Status = EFI_SUCCESS;
Hdr = NULL;
if (PoolIndex == MAX_POOL_INDEX) {
- Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1), &Address);
+ Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType,
+ EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1),
+ &Address, FALSE);
if (EFI_ERROR (Status)) {
return EFI_OUT_OF_RESOURCES;
}
@@ -243,6 +245,9 @@ SmmInternalAllocatePool (
EFI_STATUS Status;
EFI_PHYSICAL_ADDRESS Address;
UINTN PoolIndex;
+ BOOLEAN HasPoolTail;
+ BOOLEAN NeedGuard;
+ UINTN NoPages;
Address = 0;
@@ -251,25 +256,45 @@ SmmInternalAllocatePool (
return EFI_INVALID_PARAMETER;
}
+ NeedGuard = IsPoolTypeToGuard (PoolType);
+ HasPoolTail = !(NeedGuard &&
+ ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+
//
// Adjust the size by the pool header & tail overhead
//
Size += POOL_OVERHEAD;
- if (Size > MAX_POOL_SIZE) {
- Size = EFI_SIZE_TO_PAGES (Size);
- Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, Size, &Address);
+ if (Size > MAX_POOL_SIZE || NeedGuard) {
+ if (!HasPoolTail) {
+ Size -= sizeof (POOL_TAIL);
+ }
+
+ NoPages = EFI_SIZE_TO_PAGES (Size);
+ Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, NoPages,
+ &Address, NeedGuard);
if (EFI_ERROR (Status)) {
return Status;
}
+ if (NeedGuard) {
+ ASSERT (VerifyMemoryGuard(Address, NoPages) == TRUE);
+ DEBUG ((DEBUG_INFO, "SmmInternalAllocatePool: %lx ->", Address));
+ Address = (EFI_PHYSICAL_ADDRESS)AdjustPoolHeadA (Address, NoPages, Size);
+ DEBUG ((DEBUG_INFO, " %lx %d %x\r\n", Address, NoPages, Size));
+ }
+
PoolHdr = (POOL_HEADER*)(UINTN)Address;
PoolHdr->Signature = POOL_HEAD_SIGNATURE;
- PoolHdr->Size = EFI_PAGES_TO_SIZE (Size);
+ PoolHdr->Size = Size; //EFI_PAGES_TO_SIZE (NoPages)
PoolHdr->Available = FALSE;
PoolHdr->Type = PoolType;
- PoolTail = HEAD_TO_TAIL(PoolHdr);
- PoolTail->Signature = POOL_TAIL_SIGNATURE;
- PoolTail->Size = PoolHdr->Size;
+
+ if (HasPoolTail) {
+ PoolTail = HEAD_TO_TAIL (PoolHdr);
+ PoolTail->Signature = POOL_TAIL_SIGNATURE;
+ PoolTail->Size = PoolHdr->Size;
+ }
+
*Buffer = PoolHdr + 1;
return Status;
}
@@ -341,28 +366,45 @@ SmmInternalFreePool (
{
FREE_POOL_HEADER *FreePoolHdr;
POOL_TAIL *PoolTail;
+ BOOLEAN HasPoolTail;
+ BOOLEAN MemoryGuarded;
if (Buffer == NULL) {
return EFI_INVALID_PARAMETER;
}
+ MemoryGuarded = IsHeapGuardEnabled () &&
+ IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer);
+ HasPoolTail = !(MemoryGuarded &&
+ ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
+
FreePoolHdr = (FREE_POOL_HEADER*)((POOL_HEADER*)Buffer - 1);
ASSERT (FreePoolHdr->Header.Signature == POOL_HEAD_SIGNATURE);
ASSERT (!FreePoolHdr->Header.Available);
- PoolTail = HEAD_TO_TAIL(&FreePoolHdr->Header);
- ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
- ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
-
if (FreePoolHdr->Header.Signature != POOL_HEAD_SIGNATURE) {
return EFI_INVALID_PARAMETER;
}
- if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
- return EFI_INVALID_PARAMETER;
+ if (HasPoolTail) {
+ PoolTail = HEAD_TO_TAIL (&FreePoolHdr->Header);
+ ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
+ ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
+ if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
+ return EFI_INVALID_PARAMETER;
+ }
+
+ if (FreePoolHdr->Header.Size != PoolTail->Size) {
+ return EFI_INVALID_PARAMETER;
+ }
}
- if (FreePoolHdr->Header.Size != PoolTail->Size) {
- return EFI_INVALID_PARAMETER;
+ if (MemoryGuarded) {
+ Buffer = AdjustPoolHeadF ((EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr);
+ return SmmInternalFreePages (
+ (EFI_PHYSICAL_ADDRESS)(UINTN)Buffer,
+ EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
+ TRUE
+ );
}
if (FreePoolHdr->Header.Size > MAX_POOL_SIZE) {
@@ -370,7 +412,8 @@ SmmInternalFreePool (
ASSERT ((FreePoolHdr->Header.Size & EFI_PAGE_MASK) == 0);
return SmmInternalFreePages (
(EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr,
- EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size)
+ EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
+ FALSE
);
}
return InternalFreePoolByIndex (FreePoolHdr, PoolTail);
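
To make the pool-guard behavior above concrete, a hypothetical SMM driver
fragment; it assumes pool guard is enabled for EfiRuntimeServicesData (BIT3 of
PcdHeapGuardPropertyMask plus the matching bit in PcdHeapGuardPoolType) and
that BIT7 leaves the returned pool adjacent to the tail Guard page:

    UINT8  *Buffer;

    Status = SmmAllocatePool (EfiRuntimeServicesData, 100, (VOID **)&Buffer);
    ASSERT_EFI_ERROR (Status);

    Buffer[99]  = 0;   // In bounds: no fault.
    Buffer[100] = 0;   // Past the allocation: with the pool placed against the
                       // tail Guard page, this write (or one shortly after it,
                       // depending on alignment) hits the not-present Guard
                       // page and raises a page fault.
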
--
2.14.1.windows.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 3/5] MdeModulePkg/MdeModulePkg.dec, .uni: Add heap guard related PCDs and string tokens
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
2017-10-11 3:18 ` [PATCH 1/5] MdeModulePkg/DxeCore: Implement heap guard feature for UEFI Jian J Wang
2017-10-11 3:18 ` [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode Jian J Wang
@ 2017-10-11 3:18 ` Jian J Wang
2017-10-11 3:18 ` [PATCH 4/5] UefiCpuPkg/CpuDxe: Reduce debug message Jian J Wang
2017-10-11 3:18 ` [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection Jian J Wang
4 siblings, 0 replies; 10+ messages in thread
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel
Cc: Star Zeng, Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
Cc: Star Zeng <star.zeng@intel.com>
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
MdeModulePkg/MdeModulePkg.dec | 57 ++++++++++++++++++++++++++++++++++++++++++
MdeModulePkg/MdeModulePkg.uni | 58 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 115 insertions(+)
diff --git a/MdeModulePkg/MdeModulePkg.dec b/MdeModulePkg/MdeModulePkg.dec
index a3c0633ee1..99f5d88627 100644
--- a/MdeModulePkg/MdeModulePkg.dec
+++ b/MdeModulePkg/MdeModulePkg.dec
@@ -867,6 +867,63 @@
# @ValidList 0x80000006 | 0x03058002
gEfiMdeModulePkgTokenSpaceGuid.PcdErrorCodeSetVariable|0x03058002|UINT32|0x30001040
+ ## Indicates which memory type allocations need a guard page.
+ # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
+ # EfiReservedMemoryType 0x0000000000000001<BR>
+ # EfiLoaderCode 0x0000000000000002<BR>
+ # EfiLoaderData 0x0000000000000004<BR>
+ # EfiBootServicesCode 0x0000000000000008<BR>
+ # EfiBootServicesData 0x0000000000000010<BR>
+ # EfiRuntimeServicesCode 0x0000000000000020<BR>
+ # EfiRuntimeServicesData 0x0000000000000040<BR>
+ # EfiConventionalMemory 0x0000000000000080<BR>
+ # EfiUnusableMemory 0x0000000000000100<BR>
+ # EfiACPIReclaimMemory 0x0000000000000200<BR>
+ # EfiACPIMemoryNVS 0x0000000000000400<BR>
+ # EfiMemoryMappedIO 0x0000000000000800<BR>
+ # EfiMemoryMappedIOPortSpace 0x0000000000001000<BR>
+ # EfiPalCode 0x0000000000002000<BR>
+ # EfiPersistentMemory 0x0000000000004000<BR>
+ # OEM Reserved 0x4000000000000000<BR>
+ # OS Reserved 0x8000000000000000<BR>
+ # e.g. if LoaderCode+LoaderData+BootServicesCode+BootServicesData all need guard pages, 0x1E should be used.<BR>
+ # @Prompt The memory type mask for Page Guard.
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType|0x0|UINT64|0x30001051
+
+ ## Indicates which memory type allocations need a guard page.
+ # Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>
+ # EfiReservedMemoryType 0x0000000000000001<BR>
+ # EfiLoaderCode 0x0000000000000002<BR>
+ # EfiLoaderData 0x0000000000000004<BR>
+ # EfiBootServicesCode 0x0000000000000008<BR>
+ # EfiBootServicesData 0x0000000000000010<BR>
+ # EfiRuntimeServicesCode 0x0000000000000020<BR>
+ # EfiRuntimeServicesData 0x0000000000000040<BR>
+ # EfiConventionalMemory 0x0000000000000080<BR>
+ # EfiUnusableMemory 0x0000000000000100<BR>
+ # EfiACPIReclaimMemory 0x0000000000000200<BR>
+ # EfiACPIMemoryNVS 0x0000000000000400<BR>
+ # EfiMemoryMappedIO 0x0000000000000800<BR>
+ # EfiMemoryMappedIOPortSpace 0x0000000000001000<BR>
+ # EfiPalCode 0x0000000000002000<BR>
+ # EfiPersistentMemory 0x0000000000004000<BR>
+ # OEM Reserved 0x4000000000000000<BR>
+ # OS Reserved 0x8000000000000000<BR>
+ # e.g. if LoaderCode+LoaderData+BootServicesCode+BootServicesData all need guard pages, 0x1E should be used.<BR>
+ # @Prompt The memory type mask for Pool Guard.
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType|0x0|UINT64|0x30001052
+
+ ## This mask is to control Heap Guard behavior.
+ # BIT0 - Enable UEFI page guard.<BR>
+ # BIT1 - Enable UEFI pool guard.<BR>
+ # BIT2 - Enable SMM page guard.<BR>
+ # BIT3 - Enable SMM pool guard.<BR>
+ # BIT7 - The direction of Guard Page for Pool Guard.
+ # 0 - The returned pool is adjacent to the bottom guard page.<BR>
+ # 1 - The returned pool is adjacent to the top guard page.<BR>
+ # @Prompt The Heap Guard feature mask
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask|0x0|UINT8|0x30001053
+
[PcdsFixedAtBuild, PcdsPatchableInModule]
## Dynamic type PCD can be registered callback function for Pcd setting action.
# PcdMaxPeiPcdCallBackNumberPerPcdEntry indicates the maximum number of callback function
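
The two type masks above are consumed bit-by-bit, indexed by EFI_MEMORY_TYPE.
A minimal sketch of the kind of test the core is expected to perform (the
surrounding code is illustrative, not lifted from the patch):

    //
    // EfiBootServicesData is memory type 4, which maps to bit 0x10 in
    // PcdHeapGuardPageType / PcdHeapGuardPoolType.
    //
    if ((PcdGet64 (PcdHeapGuardPageType) &
         LShiftU64 (1, (UINTN)EfiBootServicesData)) != 0) {
      //
      // Page guard has been requested for this memory type.
      //
    }
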
diff --git a/MdeModulePkg/MdeModulePkg.uni b/MdeModulePkg/MdeModulePkg.uni
index d6015de75f..74c27039bf 100644
--- a/MdeModulePkg/MdeModulePkg.uni
+++ b/MdeModulePkg/MdeModulePkg.uni
@@ -1127,3 +1127,61 @@
"enabled on AMD processors supporting the Secure Encrypted Virtualization (SEV) feature.\n"
"This mask should be applied when creating 1:1 virtual to physical mapping tables."
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPageType_PROMPT #language en-US "The memory type mask for Page Guard"
+
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPageType_HELP #language en-US "Indicates which memory type allocations need a guard page.\n"
+ " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
+ " EfiReservedMemoryType 0x0000000000000001\n"
+ " EfiLoaderCode 0x0000000000000002\n"
+ " EfiLoaderData 0x0000000000000004\n"
+ " EfiBootServicesCode 0x0000000000000008\n"
+ " EfiBootServicesData 0x0000000000000010\n"
+ " EfiRuntimeServicesCode 0x0000000000000020\n"
+ " EfiRuntimeServicesData 0x0000000000000040\n"
+ " EfiConventionalMemory 0x0000000000000080\n"
+ " EfiUnusableMemory 0x0000000000000100\n"
+ " EfiACPIReclaimMemory 0x0000000000000200\n"
+ " EfiACPIMemoryNVS 0x0000000000000400\n"
+ " EfiMemoryMappedIO 0x0000000000000800\n"
+ " EfiMemoryMappedIOPortSpace 0x0000000000001000\n"
+ " EfiPalCode 0x0000000000002000\n"
+ " EfiPersistentMemory 0x0000000000004000\n"
+ " OEM Reserved 0x4000000000000000\n"
+ " OS Reserved 0x8000000000000000\n"
+ " e.g. LoaderCode+LoaderData+BootServicesCode+BootServicesData are needed, 0x1E should be used.<BR>"
+
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPoolType_PROMPT #language en-US "The memory type mask for Pool Guard"
+
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPoolType_HELP #language en-US "Indicates which memory type allocations need a guard page.\n"
+ " Below is bit mask for this PCD: (Order is same as UEFI spec)<BR>\n"
+ " EfiReservedMemoryType 0x0000000000000001\n"
+ " EfiLoaderCode 0x0000000000000002\n"
+ " EfiLoaderData 0x0000000000000004\n"
+ " EfiBootServicesCode 0x0000000000000008\n"
+ " EfiBootServicesData 0x0000000000000010\n"
+ " EfiRuntimeServicesCode 0x0000000000000020\n"
+ " EfiRuntimeServicesData 0x0000000000000040\n"
+ " EfiConventionalMemory 0x0000000000000080\n"
+ " EfiUnusableMemory 0x0000000000000100\n"
+ " EfiACPIReclaimMemory 0x0000000000000200\n"
+ " EfiACPIMemoryNVS 0x0000000000000400\n"
+ " EfiMemoryMappedIO 0x0000000000000800\n"
+ " EfiMemoryMappedIOPortSpace 0x0000000000001000\n"
+ " EfiPalCode 0x0000000000002000\n"
+ " EfiPersistentMemory 0x0000000000004000\n"
+ " OEM Reserved 0x4000000000000000\n"
+ " OS Reserved 0x8000000000000000\n"
+ " e.g. LoaderCode+LoaderData+BootServicesCode+BootServicesData are needed, 0x1E should be used.<BR>"
+
+
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPropertyMask_PROMPT #language en-US "The Heap Guard feature mask"
+
+#string STR_gEfiMdeModulePkgTokenSpaceGuid_PcdHeapGuardPropertyMask_HELP #language en-US "This mask is to control Heap Guard behavior.\n"
+ " BIT0 - Enable UEFI page guard.<BR>\n"
+ " BIT1 - Enable UEFI pool guard.<BR>\n"
+ " BIT2 - Enable SMM page guard.<BR>\n"
+ " BIT3 - Enable SMM pool guard.<BR>\n"
+ " BIT7 - The direction of Guard Page for Pool Guard.\n"
+ " 0 - The returned pool is adjacent to the bottom guard page.<BR>\n"
+ " 1 - The returned pool is adjacent to the top guard page.<BR>"
+
--
2.14.1.windows.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 4/5] UefiCpuPkg/CpuDxe: Reduce debug message
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
` (2 preceding siblings ...)
2017-10-11 3:18 ` [PATCH 3/5] MdeModulePkg/MdeModulePkg.dec, .uni: Add heap guard related PCDs and string tokens Jian J Wang
@ 2017-10-11 3:18 ` Jian J Wang
2017-10-11 3:18 ` [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection Jian J Wang
4 siblings, 0 replies; 10+ messages in thread
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel; +Cc: Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
The heap guard feature updates page attributes frequently, and the resulting
debug messages in the CpuDxe driver slow down boot noticeably. Change the
debug level to DEBUG_POOL | DEBUG_PAGE to reduce the message output in a
normal debug configuration.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
UefiCpuPkg/CpuDxe/CpuPageTable.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/UefiCpuPkg/CpuDxe/CpuPageTable.c b/UefiCpuPkg/CpuDxe/CpuPageTable.c
index d312eb66f8..5270a1124f 100644
--- a/UefiCpuPkg/CpuDxe/CpuPageTable.c
+++ b/UefiCpuPkg/CpuDxe/CpuPageTable.c
@@ -442,8 +442,9 @@ ConvertPageEntryAttribute (
*PageEntry = NewPageEntry;
if (CurrentPageEntry != NewPageEntry) {
*IsModified = TRUE;
- DEBUG ((DEBUG_INFO, "ConvertPageEntryAttribute 0x%lx", CurrentPageEntry));
- DEBUG ((DEBUG_INFO, "->0x%lx\n", NewPageEntry));
+ DEBUG ((DEBUG_POOL | DEBUG_PAGE, "ConvertPageEntryAttribute 0x%lx",
+ CurrentPageEntry));
+ DEBUG ((DEBUG_POOL | DEBUG_PAGE, "->0x%lx\n", NewPageEntry));
} else {
*IsModified = FALSE;
}
--
2.14.1.windows.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
` (3 preceding siblings ...)
2017-10-11 3:18 ` [PATCH 4/5] UefiCpuPkg/CpuDxe: Reduce debug message Jian J Wang
@ 2017-10-11 3:18 ` Jian J Wang
2017-10-13 1:24 ` Dong, Eric
4 siblings, 1 reply; 10+ messages in thread
From: Jian J Wang @ 2017-10-11 3:18 UTC (permalink / raw)
To: edk2-devel; +Cc: Eric Dong, Jiewen Yao, Michael Kinney, Ayellet Wolman
The heap guard feature updates page attributes frequently. The page table
must not be set to read-only when heap guard is enabled for SMM mode;
otherwise this feature cannot work.
Cc: Eric Dong <eric.dong@intel.com>
Cc: Jiewen Yao <jiewen.yao@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: Ayellet Wolman <ayellet.wolman@intel.com>
Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
---
UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
index 099792e6ce..644709650c 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
@@ -159,6 +159,7 @@
gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ## CONSUMES
gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask ## CONSUMES
[Depex]
gEfiMpServiceProtocolGuid
diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
index 3dde80f9ba..4debce3a0f 100644
--- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
+++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
@@ -902,7 +902,7 @@ SetPageTableAttributes (
BOOLEAN IsSplitted;
BOOLEAN PageTableSplitted;
- if (!mCpuSmmStaticPageTable) {
+ if (!mCpuSmmStaticPageTable || (PcdGet8 (PcdHeapGuardPropertyMask) & BIT3 | BIT2) != 0) {
return ;
}
--
2.14.1.windows.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection
2017-10-11 3:18 ` [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection Jian J Wang
@ 2017-10-13 1:24 ` Dong, Eric
2017-10-13 6:14 ` Wang, Jian J
0 siblings, 1 reply; 10+ messages in thread
From: Dong, Eric @ 2017-10-13 1:24 UTC (permalink / raw)
To: Wang, Jian J, edk2-devel@lists.01.org
Cc: Yao, Jiewen, Kinney, Michael D, Wolman, Ayellet
Hi Jian,
> + if (!mCpuSmmStaticPageTable || (PcdGet8 (PcdHeapGuardPropertyMask)
> &
> + BIT3 | BIT2) != 0) {
I think the above code logic is not correct: the "&" will be evaluated before the "|", which is not the expected order, right?
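For reference, this is the grouping that appears to be intended (test BIT2 or
BIT3 of the mask), written with explicit parentheses:

    if (!mCpuSmmStaticPageTable ||
        (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT3 | BIT2)) != 0) {
      return ;
    }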
Thanks,
Eric
> -----Original Message-----
> From: Wang, Jian J
> Sent: Wednesday, October 11, 2017 11:18 AM
> To: edk2-devel@lists.01.org
> Cc: Dong, Eric <eric.dong@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>;
> Kinney, Michael D <michael.d.kinney@intel.com>; Wolman, Ayellet
> <ayellet.wolman@intel.com>
> Subject: [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table
> protection
>
> The heap guard feature updates page attributes frequently. The page table
> must not be set to read-only when heap guard is enabled for SMM mode;
> otherwise this feature cannot work.
>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Michael Kinney <michael.d.kinney@intel.com>
> Cc: Ayellet Wolman <ayellet.wolman@intel.com>
> Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
> UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
> UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 +-
> 2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> index 099792e6ce..644709650c 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> @@ -159,6 +159,7 @@
> gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ##
> CONSUMES
> gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ##
> CONSUMES
>
> gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM
> ask ## CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> ## CONSUMES
>
> [Depex]
> gEfiMpServiceProtocolGuid
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> index 3dde80f9ba..4debce3a0f 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> @@ -902,7 +902,7 @@ SetPageTableAttributes (
> BOOLEAN IsSplitted;
> BOOLEAN PageTableSplitted;
>
> - if (!mCpuSmmStaticPageTable) {
> + if (!mCpuSmmStaticPageTable || (PcdGet8 (PcdHeapGuardPropertyMask)
> &
> + BIT3 | BIT2) != 0) {
> return ;
> }
>
> --
> 2.14.1.windows.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode
2017-10-11 3:18 ` [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode Jian J Wang
@ 2017-10-13 1:27 ` Dong, Eric
2017-10-13 6:15 ` Wang, Jian J
0 siblings, 1 reply; 10+ messages in thread
From: Dong, Eric @ 2017-10-13 1:27 UTC (permalink / raw)
To: Wang, Jian J, edk2-devel@lists.01.org
Cc: Zeng, Star, Yao, Jiewen, Kinney, Michael D, Wolman, Ayellet
Hi Jian,
I think the code below does not follow the EDK II coding style; EDK II requires the definition and the assignment to be in separate statements.
+ UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
+ UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
+ = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
Thanks,
Eric
> -----Original Message-----
> From: Wang, Jian J
> Sent: Wednesday, October 11, 2017 11:18 AM
> To: edk2-devel@lists.01.org
> Cc: Zeng, Star <star.zeng@intel.com>; Dong, Eric <eric.dong@intel.com>; Yao,
> Jiewen <jiewen.yao@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Wolman, Ayellet
> <ayellet.wolman@intel.com>
> Subject: [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard
> feature for SMM mode
>
> This feature makes use of paging mechanism to add a hidden (not present)
> page just before and after the allocated memory block. If the code tries
> to access memory outside of the allocated part, page fault exception will
> be triggered.
>
> This feature is controlled by three PCDs:
>
> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
> gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
>
> BIT2 and BIT3 of PcdHeapGuardPropertyMask can be used to enable or
> disable
> memory guard for SMM page and pool respectively. PcdHeapGuardPoolType
> and/or
> PcdHeapGuardPageType are used to enable or disable guard for specific type
> of memory. For example, we can turn on guard only for EfiBootServicesData
> and EfiRuntimeServicesData by setting the PCD with value 0x50.
>
> Pool memory is not usually an integer multiple of one page, and is more likely
> less than a page. There's no way to monitor the overflow at both top and
> bottom of pool memory. BIT7 of PcdHeapGuardPropertyMask is used to
> control
> how to position the head of pool memory so that it's easier to catch memory
> overflow in memory growing direction or in decreasing direction.
>
> Cc: Star Zeng <star.zeng@intel.com>
> Cc: Eric Dong <eric.dong@intel.com>
> Cc: Jiewen Yao <jiewen.yao@intel.com>
> Cc: Michael Kinney <michael.d.kinney@intel.com>
> Cc: Ayellet Wolman <ayellet.wolman@intel.com>
> Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> ---
> MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c | 1438
> ++++++++++++++++++++++++++
> MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h | 395 +++++++
> MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c | 704
> +++++++++++++
> MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h | 174 ++++
> MdeModulePkg/Core/PiSmmCore/Page.c | 51 +-
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.c | 12 +-
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.h | 80 +-
> MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf | 8 +
> MdeModulePkg/Core/PiSmmCore/Pool.c | 77 +-
> 9 files changed, 2911 insertions(+), 28 deletions(-)
> create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
>
> diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> new file mode 100644
> index 0000000000..c64eaea5d1
> --- /dev/null
> +++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> @@ -0,0 +1,1438 @@
> +/** @file
> + UEFI Heap Guard functions.
> +
> +Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> +This program and the accompanying materials
> +are licensed and made available under the terms and conditions of the BSD
> License
> +which accompanies this distribution. The full text of the license may be
> found at
> +http://opensource.org/licenses/bsd-license.php
> +
> +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> BASIS,
> +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> EXPRESS OR IMPLIED.
> +
> +**/
> +
> +#include "HeapGuard.h"
> +
> +//
> +// Pointer to table tracking the Guarded memory with bitmap, in which '1'
> +// is used to indicate memory guarded. '0' might be free memory or Guard
> +// page itself, depending on status of memory adjacent to it.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED UINT64 *mGuardedMemoryMap = NULL;
> +
> +//
> +// Current depth level of map table pointed by mGuardedMemoryMap.
> +// mMapLevel must be initialized to at least 1. It will be automatically
> +// updated according to the address of memory just tracked.
> +//
> +GLOBAL_REMOVE_IF_UNREFERENCED UINTN mMapLevel = 1;
> +
> +//
> +// SMM status flag
> +//
> +BOOLEAN mIsSmmCpuMode = FALSE;
> +
> +/**
> + Set corresponding bits in bitmap table to 1 according to the address
> +
> + @param[in] Address Start address to set for
> + @param[in] BitNumber Number of bits to set
> + @param[in] BitMap Pointer to bitmap which covers the Address
> +
> + @return VOID
> +**/
> +STATIC
> +VOID
> +SetBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN BitNumber,
> + IN UINT64 *BitMap
> + )
> +{
> + UINTN Lsbs;
> + UINTN Qwords;
> + UINTN Msbs;
> + UINTN StartBit;
> + UINTN EndBit;
> +
> + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> +
> + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> + Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> + GUARDED_HEAP_MAP_ENTRY_BITS;
> + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> + Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
> + } else {
> + Msbs = BitNumber;
> + Lsbs = 0;
> + Qwords = 0;
> + }
> +
> + if (Msbs > 0) {
> + *BitMap |= LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
> + BitMap += 1;
> + }
> +
> + if (Qwords > 0) {
> +    SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES,
> +              (UINT64)-1);
> + BitMap += Qwords;
> + }
> +
> + if (Lsbs > 0) {
> + *BitMap |= (LShiftU64 (1, Lsbs) - 1);
> + }
> +}
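To make the Msbs/Qwords/Lsbs split above concrete, here is one worked case (numbers are my own, not from the patch): setting BitNumber = 100 bits starting at bit 60 of a map entry gives

    StartBit = 60
    Msbs     = 64 - 60        = 4    -> top 4 bits of the first UINT64
    Qwords   = (100 - 4) / 64 = 1    -> one full UINT64 filled by SetMem64 ()
    Lsbs     = 100 - 4 - 64   = 32   -> low 32 bits of the following UINT64

so the range is covered by a partial leading entry, a run of full entries, and a partial trailing entry.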
> +
> +/**
> + Set corresponding bits in bitmap table to 0 according to the address
> +
> + @param[in] Address Start address to set for
> + @param[in] BitNumber Number of bits to set
> + @param[in] BitMap Pointer to bitmap which covers the Address
> +
> + @return VOID
> +**/
> +STATIC
> +VOID
> +ClearBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN BitNumber,
> + IN UINT64 *BitMap
> + )
> +{
> + UINTN Lsbs;
> + UINTN Qwords;
> + UINTN Msbs;
> + UINTN StartBit;
> + UINTN EndBit;
> +
> + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> +
> + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> + Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> + GUARDED_HEAP_MAP_ENTRY_BITS;
> + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> + Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
> + } else {
> + Msbs = BitNumber;
> + Lsbs = 0;
> + Qwords = 0;
> + }
> +
> + if (Msbs > 0) {
> + *BitMap &= ~LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
> + BitMap += 1;
> + }
> +
> + if (Qwords > 0) {
> +    SetMem64 ((VOID *)BitMap, Qwords * GUARDED_HEAP_MAP_ENTRY_BYTES, 0);
> + BitMap += Qwords;
> + }
> +
> + if (Lsbs > 0) {
> + *BitMap &= ~(LShiftU64 (1, Lsbs) - 1);
> + }
> +}
> +
> +/**
> + Get corresponding bits in bitmap table according to the address
> +
> + The value of bit 0 corresponds to the status of memory at given Address.
> + No more than 64 bits can be retrieved in one call.
> +
> + @param[in] Address Start address to retrieve bits for
> + @param[in] BitNumber Number of bits to get
> + @param[in] BitMap Pointer to bitmap which covers the Address
> +
> + @return An integer containing the bits information
> +**/
> +STATIC
> +UINT64
> +GetBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN BitNumber,
> + IN UINT64 *BitMap
> + )
> +{
> + UINTN StartBit;
> + UINTN EndBit;
> + UINTN Lsbs;
> + UINTN Msbs;
> + UINT64 Result;
> +
> + ASSERT (BitNumber <= GUARDED_HEAP_MAP_ENTRY_BITS);
> +
> + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> +
> + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> + Msbs = GUARDED_HEAP_MAP_ENTRY_BITS - StartBit;
> + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> + } else {
> + Msbs = BitNumber;
> + Lsbs = 0;
> + }
> +
> + Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> + if (Lsbs > 0) {
> + BitMap += 1;
> + Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> + }
> +
> + return Result;
> +}
> +
> +/**
> + Helper function to allocate pages without Guard for internal uses
> +
> + @param[in] Pages Page number
> +
> + @return Address of memory allocated
> +**/
> +VOID *
> +PageAlloc (
> + IN UINTN Pages
> + )
> +{
> + EFI_STATUS Status;
> + EFI_PHYSICAL_ADDRESS Memory;
> +
> +  Status = SmmInternalAllocatePages (AllocateAnyPages, EfiRuntimeServicesData,
> +                                     Pages, &Memory, FALSE);
> + if (EFI_ERROR (Status)) {
> + Memory = 0;
> + }
> +
> + return (VOID *)(UINTN)Memory;
> +}
> +
> +/**
> +  Locate the pointer of bitmap from the guarded memory bitmap tables, which
> + covers the given Address.
> +
> + @param[in] Address Start address to search the bitmap for
> + @param[in] AllocMapUnit Flag to indicate memory allocation for the table
> + @param[out] BitMap Pointer to bitmap which covers the Address
> +
> +  @return The bit number from given Address to the end of current map table
> +**/
> +UINTN
> +FindGuardedMemoryMap (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN BOOLEAN AllocMapUnit,
> + OUT UINT64 **BitMap
> + )
> +{
> + UINTN Level;
> + UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> + UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> + UINT64 **GuardMap;
> + UINT64 *MapMemory;
> + UINTN Index;
> + UINTN Size;
> + UINTN BitsToUnitEnd;
> +
> + //
> + // Adjust current map table depth according to the address to access
> + //
> +  while (mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH
> +         &&
> +         RShiftU64 (
> +           Address,
> +           LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1]
> +           ) != 0) {
> +
> + if (mGuardedMemoryMap != NULL) {
> +      Size = (LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1] + 1)
> +             * GUARDED_HEAP_MAP_ENTRY_BYTES;
> + MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
> + ASSERT (MapMemory != NULL);
> +
> + SetMem ((VOID *)MapMemory, Size, 0);
> +
> + *(UINT64 **)MapMemory = mGuardedMemoryMap;
> + mGuardedMemoryMap = MapMemory;
> + }
> +
> + mMapLevel++;
> +
> + }
> +
> + GuardMap = &mGuardedMemoryMap;
> + for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> + Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> + ++Level) {
> +
> + if (*GuardMap == NULL) {
> + if (!AllocMapUnit) {
> + GuardMap = NULL;
> + break;
> + }
> +
> + Size = (LevelMask[Level] + 1) * GUARDED_HEAP_MAP_ENTRY_BYTES;
> + MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
> + ASSERT (MapMemory != NULL);
> +
> + SetMem ((VOID *)MapMemory, Size, 0);
> + *GuardMap = (UINT64 *)MapMemory;
> + }
> +
> + Index = (UINTN)RShiftU64 (Address, LevelShift[Level]);
> + Index &= LevelMask[Level];
> + GuardMap = (UINT64 **)((*GuardMap) + Index);
> +
> + }
> +
> +  BitsToUnitEnd = GUARDED_HEAP_MAP_BITS - GUARDED_HEAP_MAP_BIT_INDEX (Address);
> + *BitMap = (UINT64 *)GuardMap;
> +
> + return BitsToUnitEnd;
> +}
> +
> +/**
> +  Set corresponding bits in bitmap table to 1 according to given memory range
> +
> + @param[in] Address Memory address to guard from
> + @param[in] NumberOfPages Number of pages to guard
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +SetGuardedMemoryBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN NumberOfPages
> + )
> +{
> + UINT64 *BitMap;
> + UINTN Bits;
> + UINTN BitsToUnitEnd;
> +
> + while (NumberOfPages > 0) {
> + BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
> + ASSERT (BitMap != NULL);
> +
> + if (NumberOfPages > BitsToUnitEnd) {
> + // Cross map unit
> + Bits = BitsToUnitEnd;
> + } else {
> + Bits = NumberOfPages;
> + }
> +
> + SetBits (Address, Bits, BitMap);
> +
> + NumberOfPages -= Bits;
> + Address += EFI_PAGES_TO_SIZE (Bits);
> + }
> +}
> +
> +/**
> + Clear corresponding bits in bitmap table according to given memory range
> +
> + @param[in] Address Memory address to unset from
> + @param[in] NumberOfPages Number of pages to unset guard
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +ClearGuardedMemoryBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN NumberOfPages
> + )
> +{
> + UINT64 *BitMap;
> + UINTN Bits;
> + UINTN BitsToUnitEnd;
> +
> + while (NumberOfPages > 0) {
> + BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
> + ASSERT (BitMap != NULL);
> +
> + if (NumberOfPages > BitsToUnitEnd) {
> + // Cross map unit
> + Bits = BitsToUnitEnd;
> + } else {
> + Bits = NumberOfPages;
> + }
> +
> + ClearBits (Address, Bits, BitMap);
> +
> + NumberOfPages -= Bits;
> + Address += EFI_PAGES_TO_SIZE (Bits);
> + }
> +}
> +
> +/**
> +  Retrieve corresponding bits in bitmap table according to given memory range
> +
> +  @param[in]  Address        Memory address to retrieve from
> +  @param[in]  NumberOfPages  Number of pages to retrieve
> +
> +  @return An integer containing the bits (one per page) of the given range
> +**/
> +UINTN
> +GetGuardedMemoryBits (
> + IN EFI_PHYSICAL_ADDRESS Address,
> + IN UINTN NumberOfPages
> + )
> +{
> + UINT64 *BitMap;
> + UINTN Bits;
> + UINTN Result;
> + UINTN Shift;
> + UINTN BitsToUnitEnd;
> +
> + ASSERT (NumberOfPages <= GUARDED_HEAP_MAP_ENTRY_BITS);
> +
> + Result = 0;
> + Shift = 0;
> + while (NumberOfPages > 0) {
> + BitsToUnitEnd = FindGuardedMemoryMap (Address, FALSE, &BitMap);
> +
> + if (NumberOfPages > BitsToUnitEnd) {
> + // Cross map unit
> + Bits = BitsToUnitEnd;
> + } else {
> + Bits = NumberOfPages;
> + }
> +
> + if (BitMap != NULL) {
> + Result |= LShiftU64 (GetBits (Address, Bits, BitMap), Shift);
> + }
> +
> + Shift += Bits;
> + NumberOfPages -= Bits;
> + Address += EFI_PAGES_TO_SIZE (Bits);
> + }
> +
> + return Result;
> +}
> +
> +/**
> + Get bit value in bitmap table for the given address
> +
> + @param[in] Address The address to retrieve for
> +
> + @return 1 or 0
> +**/
> +UINTN
> +EFIAPI
> +GetGuardMapBit (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> + UINT64 *GuardMap;
> +
> + FindGuardedMemoryMap (Address, FALSE, &GuardMap);
> + if (GuardMap != NULL) {
> + if (RShiftU64 (*GuardMap,
> + GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address)) & 1) {
> + return 1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +/**
> + Set the bit in bitmap table for the given address
> +
> + @param[in] Address The address to set for
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +SetGuardMapBit (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> + UINT64 *GuardMap;
> + UINT64 BitMask;
> +
> + FindGuardedMemoryMap (Address, TRUE, &GuardMap);
> + if (GuardMap != NULL) {
> +    BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
> + *GuardMap |= BitMask;
> + }
> +}
> +
> +/**
> + Clear the bit in bitmap table for the given address
> +
> + @param[in] Address The address to clear for
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +ClearGuardMapBit (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> + UINT64 *GuardMap;
> + UINTN BitMask;
> +
> + FindGuardedMemoryMap (Address, TRUE, &GuardMap);
> + if (GuardMap != NULL) {
> +    BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address));
> + *GuardMap &= ~BitMask;
> + }
> +}
> +
> +/**
> + Check to see if the page at the given address is a Guard page or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is a Guard page
> + @return FALSE The page at Address is not a Guard page
> +**/
> +BOOLEAN
> +EFIAPI
> +IsGuardPage (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> + UINTN BitMap;
> +
> + BitMap = GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 3);
> +  return (BitMap == BIT0 || BitMap == BIT2 || BitMap == (BIT2 | BIT0));
> +}
> +
> +/**
> + Check to see if the page at the given address is a head Guard page or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is a head Guard page
> + @return FALSE The page at Address is not a head Guard page
> +**/
> +BOOLEAN
> +EFIAPI
> +IsHeadGuard (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> +  return (GetGuardedMemoryBits (Address, 2) == BIT1);
> +}
> +
> +/**
> + Check to see if the page at the given address is a tail Guard page or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is a tail Guard page
> + @return FALSE The page at Address is not a tail Guard page
> +**/
> +BOOLEAN
> +EFIAPI
> +IsTailGuard (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> +  return (GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 2) == BIT0);
> +}
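A note on the bit patterns tested by the three checks above (my reading of the code, using the portable BIT* values): GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, n) returns bit 0 for the page below Address, bit 1 for Address itself, and so on. IsGuardPage therefore accepts "the page at Address is not guarded, but at least one neighbour is":

    BIT0         -> only the page below is guarded (Address is a tail Guard)
    BIT2         -> only the page above is guarded (Address is a head Guard)
    BIT0 | BIT2  -> both neighbours guarded (Guard shared by two allocations)

IsHeadGuard and IsTailGuard then look at just two adjacent bits to tell which side the guarded allocation lies on.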
> +
> +/**
> + Check to see if the page at the given address is guarded or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is guarded
> + @return FALSE The page at Address is not guarded
> +**/
> +BOOLEAN
> +EFIAPI
> +IsMemoryGuarded (
> + IN EFI_PHYSICAL_ADDRESS Address
> + )
> +{
> + return (GetGuardMapBit (Address) == 1);
> +}
> +
> +/**
> + Set the page at the given address to be a Guard page.
> +
> +  This is done by changing the page table attribute to be NOT PRESENT.
> +
> +  @param[in]  BaseAddress   Page address to Guard at
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +SetGuardPage (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress
> + )
> +{
> + if (mIsSmmCpuMode) {
> +    SmmSetMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
> + }
> +}
> +
> +/**
> + Unset the Guard page at the given address to the normal memory.
> +
> +  This is done by changing the page table attribute to be PRESENT.
> +
> +  @param[in]  BaseAddress   Page address to unset Guard at
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +UnsetGuardPage (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress
> + )
> +{
> + if (mIsSmmCpuMode) {
> +    SmmClearMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
> + }
> +}
> +
> +/**
> + Check to see if the memory at the given address should be guarded or not
> +
> + @param[in] MemoryType Memory type to check
> + @param[in] AllocateType Allocation type to check
> + @param[in] PageOrPool Indicate a page allocation or pool allocation
> +
> +
> + @return TRUE The given type of memory should be guarded
> + @return FALSE The given type of memory should not be guarded
> +**/
> +BOOLEAN
> +IsMemoryTypeToGuard (
> + IN EFI_MEMORY_TYPE MemoryType,
> + IN EFI_ALLOCATE_TYPE AllocateType,
> + IN UINT8 PageOrPool
> + )
> +{
> + UINT64 TestBit;
> + UINT64 ConfigBit;
> +
> + if ((PcdGet8 (PcdHeapGuardPropertyMask) & PageOrPool) == 0 ||
> + AllocateType == AllocateAddress) {
> + return FALSE;
> + }
> +
> + ConfigBit = 0;
> + if (PageOrPool & GUARD_HEAP_TYPE_POOL) {
> + ConfigBit |= PcdGet64 (PcdHeapGuardPoolType);
> + }
> +
> + if (PageOrPool & GUARD_HEAP_TYPE_PAGE) {
> + ConfigBit |= PcdGet64 (PcdHeapGuardPageType);
> + }
> +
> + if (MemoryType == EfiRuntimeServicesData ||
> + MemoryType == EfiRuntimeServicesCode) {
> + TestBit = LShiftU64 (1, MemoryType);
> + } else if (MemoryType == EfiMaxMemoryType) {
> + TestBit = (UINT64)-1;
> + } else {
> + TestBit = 0;
> + }
> +
> + return ((ConfigBit & TestBit) != 0);
> +}
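One thing worth calling out (as I read it): in this SMM copy only EfiRuntimeServicesData and EfiRuntimeServicesCode ever get a non-zero TestBit, so for ordinary allocations the type PCDs are effectively filtered down to those two types; EfiMaxMemoryType is the wildcard used by IsHeapGuardEnabled (). The test itself is just a bit position keyed off the memory type value:

    TestBit = LShiftU64 (1, MemoryType);    // e.g. EfiRuntimeServicesData (6) -> BIT6 (0x40)
    return ((ConfigBit & TestBit) != 0);    // guarded when PcdHeapGuard*Type has that bit set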
> +
> +/**
> + Check to see if the pool at the given address should be guarded or not
> +
> + @param[in] MemoryType Pool type to check
> +
> +
> + @return TRUE The given type of pool should be guarded
> + @return FALSE The given type of pool should not be guarded
> +**/
> +BOOLEAN
> +IsPoolTypeToGuard (
> + IN EFI_MEMORY_TYPE MemoryType
> + )
> +{
> + return IsMemoryTypeToGuard (MemoryType, AllocateAnyPages,
> + GUARD_HEAP_TYPE_POOL);
> +}
> +
> +/**
> + Check to see if the page at the given address should be guarded or not
> +
> + @param[in] MemoryType Page type to check
> + @param[in] AllocateType Allocation type to check
> +
> + @return TRUE The given type of page should be guarded
> + @return FALSE The given type of page should not be guarded
> +**/
> +BOOLEAN
> +IsPageTypeToGuard (
> + IN EFI_MEMORY_TYPE MemoryType,
> + IN EFI_ALLOCATE_TYPE AllocateType
> + )
> +{
> +  return IsMemoryTypeToGuard (MemoryType, AllocateType, GUARD_HEAP_TYPE_PAGE);
> +}
> +
> +/**
> + Check to see if the heap guard is enabled for page and/or pool allocation
> +
> + @return TRUE/FALSE
> +**/
> +BOOLEAN
> +IsHeapGuardEnabled (
> + VOID
> + )
> +{
> + return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> + GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> +}
> +
> +/**
> + Set head Guard and tail Guard for the given memory range
> +
> + @param[in] Memory Base address of memory to set guard for
> + @param[in] NumberOfPages Memory size in pages
> +
> + @return VOID
> +**/
> +VOID
> +SetGuardForMemory (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages
> + )
> +{
> + EFI_PHYSICAL_ADDRESS GuardPage;
> +
> + //
> + // Set tail Guard
> + //
> + GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
> + if (!IsGuardPage (GuardPage)) {
> + SetGuardPage (GuardPage);
> + }
> +
> + // Set head Guard
> + GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
> + if (!IsGuardPage (GuardPage)) {
> + SetGuardPage (GuardPage);
> + }
> +
> + //
> + // Mark the memory range as Guarded
> + //
> + SetGuardedMemoryBits (Memory, NumberOfPages);
> +}
> +
> +/**
> + Unset head Guard and tail Guard for the given memory range
> +
> + @param[in] Memory Base address of memory to unset guard for
> + @param[in] NumberOfPages Memory size in pages
> +
> + @return VOID
> +**/
> +VOID
> +UnsetGuardForMemory (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages
> + )
> +{
> + EFI_PHYSICAL_ADDRESS GuardPage;
> +
> + if (NumberOfPages == 0) {
> + return;
> + }
> +
> + //
> + // Head Guard must be one page before, if any.
> + //
> + GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
> + if (IsHeadGuard (GuardPage)) {
> + if (!IsMemoryGuarded (GuardPage - EFI_PAGES_TO_SIZE (1))) {
> + //
> + // If the head Guard is not a tail Guard of adjacent memory block,
> + // unset it.
> + //
> + UnsetGuardPage (GuardPage);
> + }
> + } else if (IsMemoryGuarded (GuardPage)) {
> + //
> + // Pages before memory to free are still in Guard. It's a partial free
> + // case. Turn first page of memory block to free into a new Guard.
> + //
> + SetGuardPage (Memory);
> + }
> +
> + //
> + // Tail Guard must be the page after this memory block to free, if any.
> + //
> + GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
> + if (IsTailGuard (GuardPage)) {
> + if (!IsMemoryGuarded (GuardPage + EFI_PAGES_TO_SIZE (1))) {
> + //
> + // If the tail Guard is not a head Guard of adjacent memory block,
> + // free it; otherwise, keep it.
> + //
> + UnsetGuardPage (GuardPage);
> + }
> + } else if (IsMemoryGuarded (GuardPage)) {
> + //
> + // Pages after memory to free are still in Guard. It's a partial free
> + // case. We need to keep one page to be a head Guard.
> + //
> + SetGuardPage (GuardPage - EFI_PAGES_TO_SIZE (1));
> + }
> +
> + //
> + // No matter what, we just clear the mark of the Guarded memory.
> + //
> + ClearGuardedMemoryBits(Memory, NumberOfPages);
> +}
> +
> +/**
> +  Adjust address of free memory according to existing and/or required Guard
> +
> +  This function will check if there are existing Guard pages of adjacent
> +  memory blocks, and try to use them as the Guard pages of the memory to be
> +  allocated.
> +
> + @param[in] Start Start address of free memory block
> + @param[in] Size Size of free memory block
> + @param[in] SizeRequested Size of memory to allocate
> +
> + @return The end address of memory block found
> +  @return 0 if there is not enough space for the required size of memory and
> +            its Guard
> +**/
> +UINT64
> +AdjustMemoryS (
> + IN UINT64 Start,
> + IN UINT64 Size,
> + IN UINT64 SizeRequested
> + )
> +{
> + UINT64 Target;
> +
> + Target = Start + Size - SizeRequested;
> +
> + //
> + // At least one more page needed for Guard page.
> + //
> + if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
> + return 0;
> + }
> +
> + if (!IsGuardPage (Start + Size)) {
> + // No Guard at tail to share. One more page is needed.
> + Target -= EFI_PAGES_TO_SIZE (1);
> + }
> +
> + // Out of range?
> + if (Target < Start) {
> + return 0;
> + }
> +
> + // At the edge?
> + if (Target == Start) {
> + if (!IsGuardPage (Target - EFI_PAGES_TO_SIZE (1))) {
> +      // Not enough space for a new head Guard if there is no Guard at the head to share.
> + return 0;
> + }
> + }
> +
> +  // OK, we have enough pages for memory and its Guards. Return the end of
> +  // the free space.
> + return Target + SizeRequested - 1;
> +}
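A small worked example of AdjustMemoryS (numbers made up for illustration): Start = 0x100000, Size = 6 pages, SizeRequested = 4 pages, with no Guard yet on either side of the free block:

    Target = 0x100000 + 0x6000 - 0x4000              = 0x102000
    Target -= EFI_PAGES_TO_SIZE (1)  (no tail Guard)  = 0x101000
    return Target + 0x4000 - 1                        = 0x104FFF

The caller ends up placing the 4 requested pages at 0x101000-0x104FFF, which leaves exactly one free page below for the head Guard and one above for the tail Guard.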
> +
> +/**
> + Adjust the start address and number of pages to free according to Guard
> +
> +  The purpose of this function is to keep the shared Guard page with adjacent
> +  memory block if it's still in guard, or free it if it's no longer shared.
> +  Another purpose is to reserve pages as Guard pages in a partial page free
> +  situation.
> +
> + @param[in/out] Memory Base address of memory to free
> + @param[in/out] NumberOfPages Size of memory to free
> +
> + @return VOID
> +**/
> +VOID
> +AdjustMemoryF (
> + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN OUT UINTN *NumberOfPages
> + )
> +{
> + EFI_PHYSICAL_ADDRESS Start;
> + EFI_PHYSICAL_ADDRESS MemoryToTest;
> + UINTN PagesToFree;
> +
> +  if (Memory == NULL || NumberOfPages == NULL || *NumberOfPages == 0) {
> + return;
> + }
> +
> + Start = *Memory;
> + PagesToFree = *NumberOfPages;
> +
> + //
> + // Head Guard must be one page before, if any.
> + //
> + MemoryToTest = Start - EFI_PAGES_TO_SIZE (1);
> + if (IsHeadGuard (MemoryToTest)) {
> + if (!IsMemoryGuarded (MemoryToTest - EFI_PAGES_TO_SIZE (1))) {
> + //
> + // If the head Guard is not a tail Guard of adjacent memory block,
> + // free it; otherwise, keep it.
> + //
> + Start -= EFI_PAGES_TO_SIZE (1);
> + PagesToFree += 1;
> + }
> + } else if (IsMemoryGuarded (MemoryToTest)) {
> + //
> + // Pages before memory to free are still in Guard. It's a partial free
> + // case. We need to keep one page to be a tail Guard.
> + //
> + Start += EFI_PAGES_TO_SIZE (1);
> + PagesToFree -= 1;
> + }
> +
> + //
> + // Tail Guard must be the page after this memory block to free, if any.
> + //
> + MemoryToTest = Start + EFI_PAGES_TO_SIZE (PagesToFree);
> + if (IsTailGuard (MemoryToTest)) {
> + if (!IsMemoryGuarded (MemoryToTest + EFI_PAGES_TO_SIZE (1))) {
> + //
> + // If the tail Guard is not a head Guard of adjacent memory block,
> + // free it; otherwise, keep it.
> + //
> + PagesToFree += 1;
> + }
> + } else if (IsMemoryGuarded (MemoryToTest)) {
> + //
> + // Pages after memory to free are still in Guard. It's a partial free
> + // case. We need to keep one page to be a head Guard.
> + //
> + PagesToFree -= 1;
> + }
> +
> + *Memory = Start;
> + *NumberOfPages = PagesToFree;
> +}
> +
> +/**
> + Adjust the base and number of pages to really allocate according to Guard
> +
> + @param[in/out] Memory Base address of free memory
> + @param[in/out] NumberOfPages Size of memory to allocate
> +
> + @return VOID
> +**/
> +VOID
> +AdjustMemoryA (
> + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN OUT UINTN *NumberOfPages
> + )
> +{
> + //
> + // FindFreePages() has already taken the Guard into account. It's safe to
> +  // adjust the start address and/or number of pages here, to make sure that
> +  // the Guards are also "allocated".
> + //
> + if (!IsGuardPage (*Memory + EFI_PAGES_TO_SIZE (*NumberOfPages))) {
> + // No tail Guard, add one.
> + *NumberOfPages += 1;
> + }
> +
> + if (!IsGuardPage (*Memory - EFI_PAGE_SIZE)) {
> + // No head Guard, add one.
> + *Memory -= EFI_PAGE_SIZE;
> + *NumberOfPages += 1;
> + }
> +}
> +
> +/**
> +  Adjust the pool head position to make sure the Guard page is adjacent to
> + pool tail or pool head.
> +
> + @param[in] Memory Base address of memory allocated
> + @param[in] NoPages Number of pages actually allocated
> + @param[in] Size Size of memory requested
> + (plus pool head/tail overhead)
> +
> + @return Address of pool head
> +**/
> +VOID *
> +AdjustPoolHeadA (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NoPages,
> + IN UINTN Size
> + )
> +{
> + if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
> + //
> + // Pool head is put near the head Guard
> + //
> + return (VOID *)(UINTN)Memory;
> + }
> +
> + //
> + // Pool head is put near the tail Guard
> + //
> + return (VOID *)(UINTN)(Memory + EFI_PAGES_TO_SIZE (NoPages) - Size);
> +}
> +
> +/**
> + Get the page base address according to pool head address
> +
> + @param[in] Memory Head address of pool to free
> +
> + @return Address of pool head
> +**/
> +VOID *
> +AdjustPoolHeadF (
> + IN EFI_PHYSICAL_ADDRESS Memory
> + )
> +{
> + if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
> + //
> + // Pool head is put near the head Guard
> + //
> + return (VOID *)(UINTN)Memory;
> + }
> +
> + //
> + // Pool head is put near the tail Guard
> + //
> + return (VOID *)(UINTN)(Memory & ~EFI_PAGE_MASK);
> +}
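For reference, the default placement (BIT7 clear) puts the pool body flush against the tail Guard, so an off-by-one write past the end of the buffer faults immediately; with BIT7 set the body starts right after the head Guard instead, which catches underflows. A rough picture of the default case, assuming a 1-page pool allocation of Size bytes:

    | head Guard | unused padding | pool head + data (Size bytes) | tail Guard |
     not present                    returned to caller              not present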
> +
> +/**
> + Helper function of memory allocation with Guard pages
> +
> + @param FreePageList The free page node.
> + @param NumberOfPages Number of pages to be allocated.
> + @param MaxAddress Request to allocate memory below this
> address.
> + @param MemoryType Type of memory requested.
> +
> + @return Memory address of allocated pages.
> +**/
> +UINTN
> +InternalAllocMaxAddressWithGuard (
> + IN OUT LIST_ENTRY *FreePageList,
> + IN UINTN NumberOfPages,
> + IN UINTN MaxAddress,
> + IN EFI_MEMORY_TYPE MemoryType
> +
> + )
> +{
> + LIST_ENTRY *Node;
> + FREE_PAGE_LIST *Pages;
> + UINTN PagesToAlloc;
> + UINTN HeadGuard;
> + UINTN TailGuard;
> + UINTN Address;
> +
> + for (Node = FreePageList->BackLink; Node != FreePageList;
> + Node = Node->BackLink) {
> + Pages = BASE_CR (Node, FREE_PAGE_LIST, Link);
> + if (Pages->NumberOfPages >= NumberOfPages &&
> +        (UINTN)Pages + EFI_PAGES_TO_SIZE (NumberOfPages) - 1 <= MaxAddress) {
> +
> + //
> + // We may need 1 or 2 more pages for Guard. Check it out.
> + //
> + PagesToAlloc = NumberOfPages;
> +      TailGuard = (UINTN)Pages + EFI_PAGES_TO_SIZE (Pages->NumberOfPages);
> + if (!IsGuardPage (TailGuard)) {
> + //
> + // Add one if no Guard at the end of current free memory block.
> + //
> + PagesToAlloc += 1;
> + TailGuard = 0;
> + }
> +
> + HeadGuard = (UINTN)Pages +
> + EFI_PAGES_TO_SIZE (Pages->NumberOfPages - PagesToAlloc) -
> + EFI_PAGE_SIZE;
> + if (!IsGuardPage (HeadGuard)) {
> + //
> + // Add one if no Guard at the page before the address to allocate
> + //
> + PagesToAlloc += 1;
> + HeadGuard = 0;
> + }
> +
> + if (Pages->NumberOfPages < PagesToAlloc) {
> + // Not enough space to allocate memory with Guards? Try next block.
> + continue;
> + }
> +
> +      Address = InternalAllocPagesOnOneNode (Pages, PagesToAlloc, MaxAddress);
> +      ConvertSmmMemoryMapEntry (MemoryType, Address, PagesToAlloc, FALSE);
> + CoreFreeMemoryMapStack();
> + if (!HeadGuard) {
> + // Don't pass the Guard page to user.
> + Address += EFI_PAGE_SIZE;
> + }
> + SetGuardForMemory (Address, NumberOfPages);
> + return Address;
> + }
> + }
> +
> + return (UINTN)(-1);
> +}
> +
> +/**
> + Helper function of memory free with Guard pages
> +
> + @param[in] Memory Base address of memory being freed.
> + @param[in] NumberOfPages The number of pages to free.
> + @param[in] AddRegion If this memory is new added region.
> +
> + @retval EFI_NOT_FOUND Could not find the entry that covers the
> range.
> + @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or
> NumberOfPages is zero.
> + @return EFI_SUCCESS Pages successfully freed.
> +**/
> +EFI_STATUS
> +SmmInternalFreePagesExWithGuard (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages,
> + IN BOOLEAN AddRegion
> + )
> +{
> + EFI_PHYSICAL_ADDRESS MemoryToFree;
> + UINTN PagesToFree;
> +
> + MemoryToFree = Memory;
> + PagesToFree = NumberOfPages;
> +
> + AdjustMemoryF (&MemoryToFree, &PagesToFree);
> + UnsetGuardForMemory (Memory, NumberOfPages);
> +
> +  return SmmInternalFreePagesEx (MemoryToFree, PagesToFree, AddRegion);
> +}
> +
> +/**
> + Set all Guard pages which cannot be set during the non-SMM mode time
> +**/
> +VOID
> +SetAllGuardPages (
> + VOID
> + )
> +{
> + UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> + UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> + UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 TableEntry;
> + UINT64 Address;
> + UINT64 GuardPage;
> + INTN Level;
> + UINTN Index;
> + BOOLEAN OnGuarding;
> +
> + SetMem64 (Tables, sizeof(Tables), 0);
> + SetMem64 (Addresses, sizeof(Addresses), 0);
> + SetMem64 (Indices, sizeof(Indices), 0);
> +
> + Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> + Tables[Level] = mGuardedMemoryMap;
> + Address = 0;
> + OnGuarding = FALSE;
> +
> + DEBUG_CODE (
> + DumpGuardedMemoryBitmap ();
> + );
> +
> + while (TRUE) {
> + if (Indices[Level] > Entries[Level]) {
> + Tables[Level] = 0;
> + Level -= 1;
> + } else {
> +
> + TableEntry = Tables[Level][Indices[Level]];
> + Address = Addresses[Level];
> +
> + if (TableEntry == 0) {
> +
> + OnGuarding = FALSE;
> +
> + } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +
> + Level += 1;
> + Tables[Level] = (UINT64 *)TableEntry;
> + Addresses[Level] = Address;
> + Indices[Level] = 0;
> +
> + continue;
> +
> + } else {
> +
> + Index = 0;
> + while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
> + if ((TableEntry & 1) == 1) {
> + if (OnGuarding) {
> + GuardPage = 0;
> + } else {
> + GuardPage = Address - EFI_PAGE_SIZE;
> + }
> + OnGuarding = TRUE;
> + } else {
> + if (OnGuarding) {
> + GuardPage = Address;
> + } else {
> + GuardPage = 0;
> + }
> + OnGuarding = FALSE;
> + }
> +
> + if (GuardPage != 0) {
> + SetGuardPage (GuardPage);
> + }
> +
> + if (TableEntry == 0) {
> + break;
> + }
> +
> + TableEntry = RShiftU64 (TableEntry, 1);
> + Address += EFI_PAGE_SIZE;
> + Index += 1;
> + }
> + }
> + }
> +
> + if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> + break;
> + }
> +
> + Indices[Level] += 1;
> + Address = (Level == 0) ? 0 : Addresses[Level - 1];
> + Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> + }
> +}
> +
> +/**
> + Hook function used to set all Guard pages after entering SMM mode
> +**/
> +VOID
> +SmmEntryPointMemoryManagementHook (
> + VOID
> + )
> +{
> + EFI_STATUS Status;
> + VOID *SmmCpu;
> +
> + if (!mIsSmmCpuMode) {
> + Status = SmmLocateProtocol (&gEfiSmmCpuProtocolGuid, NULL,
> &SmmCpu);
> + if (!EFI_ERROR(Status)) {
> + mIsSmmCpuMode = TRUE;
> + SetAllGuardPages ();
> + }
> + }
> +}
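Presumably the PiSmmCore.c hunk (not quoted in this mail) wires this into the SMM core entry path; a rough sketch of the intended call site, only to illustrate the flow (the placement here is my assumption, not taken from the patch):

    //
    // Early in SmmEntryPoint (), before dispatching SMI handlers: once the
    // SMM CPU protocol is available this sets all Guard pages recorded in
    // the bitmap that could not be set in non-SMM context.
    //
    SmmEntryPointMemoryManagementHook ();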
> +
> +/**
> + Helper function to convert a UINT64 value in binary to a string
> +
> + @param[in] Value Value of a UINT64 integer
> + @param[in] BinString String buffer to contain the conversion result
> +
> + @return VOID
> +**/
> +VOID
> +Uint64ToBinString (
> + IN UINT64 Value,
> + OUT CHAR8 *BinString
> + )
> +{
> + UINTN Index;
> +
> + if (BinString == NULL) {
> + return;
> + }
> +
> + for (Index = 64; Index > 0; --Index) {
> + BinString[Index - 1] = '0' + (Value & 1);
> + Value = RShiftU64 (Value, 1);
> + }
> + BinString[64] = '\0';
> +}
> +
> +/**
> + Dump the guarded memory bit map
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +DumpGuardedMemoryBitmap (
> + VOID
> + )
> +{
> + UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> + UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> + UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> + UINT64 TableEntry;
> + UINT64 Address;
> + INTN Level;
> + UINTN RepeatZero;
> + CHAR8 String[GUARDED_HEAP_MAP_ENTRY_BITS + 1];
> + CHAR8 *Ruler1 = " 3 2"
> + " 1 0";
> + CHAR8 *Ruler2 = "FEDCBA9876543210FEDCBA9876543210"
> + "FEDCBA9876543210FEDCBA9876543210";
> +
> + if (mGuardedMemoryMap == NULL) {
> + return;
> + }
> +
> + DEBUG ((DEBUG_INFO, "============================="
> + " Guarded Memory Bitmap "
> + "==============================\r\n"));
> + DEBUG ((DEBUG_INFO, " %a\r\n", Ruler1));
> + DEBUG ((DEBUG_INFO, " %a\r\n", Ruler2));
> +
> +
> + SetMem64 (Tables, sizeof(Tables), 0);
> + SetMem64 (Addresses, sizeof(Addresses), 0);
> + SetMem64 (Indices, sizeof(Indices), 0);
> +
> + Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> + Tables[Level] = mGuardedMemoryMap;
> + Address = 0;
> + RepeatZero = 0;
> +
> + while (TRUE) {
> + if (Indices[Level] > Entries[Level]) {
> +
> + Tables[Level] = 0;
> + Level -= 1;
> + RepeatZero = 0;
> +
> + DEBUG ((
> + DEBUG_INFO,
> + "========================================="
> + "=========================================\r\n"
> + ));
> +
> + } else {
> +
> + TableEntry = Tables[Level][Indices[Level]];
> + Address = Addresses[Level];
> +
> + if (TableEntry == 0) {
> +
> + if (Level == GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> + if (RepeatZero == 0) {
> + Uint64ToBinString(TableEntry, String);
> + DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
> + } else if (RepeatZero == 1) {
> + DEBUG ((DEBUG_INFO, "... : ...\r\n"));
> + }
> + RepeatZero += 1;
> + }
> +
> + } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> +
> + Level += 1;
> + Tables[Level] = (UINT64 *)TableEntry;
> + Addresses[Level] = Address;
> + Indices[Level] = 0;
> + RepeatZero = 0;
> +
> + continue;
> +
> + } else {
> +
> + RepeatZero = 0;
> + Uint64ToBinString(TableEntry, String);
> + DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
> +
> + }
> + }
> +
> + if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> + break;
> + }
> +
> + Indices[Level] += 1;
> + Address = (Level == 0) ? 0 : Addresses[Level - 1];
> + Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> +
> + }
> +}
> +
> +/**
> + Debug function used to verify if the Guard page is well set or not
> +
> + @param[in] BaseAddress Address of memory to check
> + @param[in] NumberOfPages Size of memory in pages
> +
> + @return TRUE The head Guard and tail Guard are both well set
> + @return FALSE The head Guard and/or tail Guard are not well set
> +**/
> +BOOLEAN
> +VerifyMemoryGuard (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINTN NumberOfPages
> + )
> +{
> + UINT64 *PageEntry;
> + PAGE_ATTRIBUTE Attribute;
> + EFI_PHYSICAL_ADDRESS Address;
> +
> + if (!mIsSmmCpuMode) {
> + return TRUE;
> + }
> +
> + Address = BaseAddress - EFI_PAGE_SIZE;
> + PageEntry = GetPageTableEntry (Address, &Attribute);
> + if (PageEntry == NULL || Attribute != Page4K) {
> + DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx!!!\r\n",
> Address));
> + DumpGuardedMemoryBitmap ();
> + return FALSE;
> + }
> +
> + if ((*PageEntry & IA32_PG_P) != 0) {
> + DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx
> (%016lX)!!!\r\n",
> + Address, *PageEntry));
> + *(UINT8 *) Address = 0;
> + DumpGuardedMemoryBitmap ();
> + return FALSE;
> + }
> +
> + Address = BaseAddress + EFI_PAGES_TO_SIZE (NumberOfPages);
> + PageEntry = GetPageTableEntry (Address, &Attribute);
> + if (PageEntry == NULL || Attribute != Page4K) {
> + DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx!!!\r\n",
> Address));
> + DumpGuardedMemoryBitmap ();
> + return FALSE;
> + }
> +
> + if ((*PageEntry & IA32_PG_P) != 0) {
> + DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx (%016lX)!!!\r\n",
> + Address, *PageEntry));
> + *(UINT8 *) Address = 0;
> + DumpGuardedMemoryBitmap ();
> + return FALSE;
> + }
> +
> + return TRUE;
> +}
> +
> diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> new file mode 100644
> index 0000000000..ecc10e83a7
> --- /dev/null
> +++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> @@ -0,0 +1,395 @@
> +/** @file
> + Data structure and functions to allocate and free memory space.
> +
> +Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> +This program and the accompanying materials
> +are licensed and made available under the terms and conditions of the BSD
> License
> +which accompanies this distribution. The full text of the license may be
> found at
> +http://opensource.org/licenses/bsd-license.php
> +
> +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> BASIS,
> +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> EXPRESS OR IMPLIED.
> +
> +**/
> +
> +#ifndef _HEAPGUARD_H_
> +#define _HEAPGUARD_H_
> +
> +#include "PiSmmCore.h"
> +#include "PageTable.h"
> +
> +//
> +// Following macros are used to define and access the guarded memory
> bitmap
> +// table.
> +//
> +// To simplify the access and reduce the memory used for this table, the
> +// table is constructed in the similar way as page table structure but in
> +// reverse direction, i.e. from bottom growing up to top.
> +//
> +// - 1-bit tracks 1 page (4KB)
> +// - 1-UINT64 map entry tracks 256KB memory
> +// - 1K-UINT64 map table tracks 256MB memory
> +// - Five levels of tables can track any address of memory of 64-bit
> +// system, like below.
> +//
> +// 512 * 512 * 512 * 512 * 1K * 64b * 4K
> +// 111111111 111111111 111111111 111111111 1111111111 111111
> 111111111111
> +// 63 54 45 36 27 17 11 0
> +// 9b 9b 9b 9b 10b 6b 12b
> +// L0 -> L1 -> L2 -> L3 -> L4 -> bits -> page
> +// 1FF 1FF 1FF 1FF 3FF 3F FFF
> +//
> +// L4 table has 1K * sizeof(UINT64) = 8K (2-page), which can track 256MB
> +// memory. Each table of L0-L3 will be allocated when its memory address
> +// range is to be tracked. Only 1-page will be allocated each time. This
> +// can save memories used to establish this map table.
> +//
> +// For a normal configuration of a system with 4G memory, two levels of
> +// tables can track the whole memory, because two levels (L3+L4) of map
> +// tables have already covered 37 bits of the memory address. And for a
> +// normal UEFI BIOS, less than 128M memory would be consumed during boot.
> +// That means we just need
> +//
> +// 1-page (L3) + 2-page (L4)
> +//
> +// memory (3 pages) to track the memory allocation works. In this case,
> +// there's no need to setup L0-L2 tables.
> +//
> +
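As a worked example of the layout described above (numbers are my own): for the physical address 0x12345678, using the shift/mask macros defined below,

    L3 index  = (0x12345678 >> 28) & 0x1FF = 1
    L4 index  = (0x12345678 >> 18) & 0x3FF = 0x8D
    bit index = (0x12345678 >> 12) & 0x3F  = 5

so the page at 0x12345678 is tracked by bit 5 of entry 0x8D in the L4 table hanging off entry 1 of the L3 table; L0-L2 tables are not needed until an address above 2^37 has to be tracked.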
> +//
> +// Each entry occupies 8B/64b. 1-page can hold 512 entries, which spans 9
> +// bits in address. (512 = 1 << 9)
> +//
> +#define BYTE_LENGTH_SHIFT 3 // (8 = 1 << 3)
> +
> +#define GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT \
> + (EFI_PAGE_SHIFT - BYTE_LENGTH_SHIFT)
> +
> +#define GUARDED_HEAP_MAP_TABLE_DEPTH 5
> +
> +// Use UINT64_index + bit_index_of_UINT64 to locate the bit in the map
> +#define GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT 6 // (64 = 1 << 6)
> +
> +#define GUARDED_HEAP_MAP_ENTRY_BITS \
> + (1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)
> +
> +#define GUARDED_HEAP_MAP_ENTRY_BYTES \
> + (GUARDED_HEAP_MAP_ENTRY_BITS / 8)
> +
> +// L4 table address width: 64 - 9 * 4 - 6 - 12 = 10b
> +#define GUARDED_HEAP_MAP_ENTRY_SHIFT \
> + (GUARDED_HEAP_MAP_ENTRY_BITS \
> + - GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 4 \
> + - GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> + - EFI_PAGE_SHIFT)
> +
> +// L4 table address mask: (1 << 10 - 1) = 0x3FF
> +#define GUARDED_HEAP_MAP_ENTRY_MASK \
> + ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1)
> +
> +// Size of each L4 table: (1 << 10) * 8 = 8KB = 2-page
> +#define GUARDED_HEAP_MAP_SIZE \
> +  ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) * GUARDED_HEAP_MAP_ENTRY_BYTES)
> +
> +// Memory size tracked by one L4 table: 8KB * 8 * 4KB = 256MB
> +#define GUARDED_HEAP_MAP_UNIT_SIZE \
> + (GUARDED_HEAP_MAP_SIZE * 8 * EFI_PAGE_SIZE)
> +
> +// L4 table entry number: 8KB / 8 = 1024
> +#define GUARDED_HEAP_MAP_ENTRIES_PER_UNIT \
> + (GUARDED_HEAP_MAP_SIZE / GUARDED_HEAP_MAP_ENTRY_BYTES)
> +
> +// L4 table entry indexing
> +#define GUARDED_HEAP_MAP_ENTRY_INDEX(Address) \
> + (RShiftU64 (Address, EFI_PAGE_SHIFT \
> + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) \
> + & GUARDED_HEAP_MAP_ENTRY_MASK)
> +
> +// L4 table entry bit indexing
> +#define GUARDED_HEAP_MAP_ENTRY_BIT_INDEX(Address) \
> + (RShiftU64 (Address, EFI_PAGE_SHIFT) \
> + & ((1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) - 1))
> +
> +//
> +// Total bits (pages) tracked by one L4 table (65536-bit)
> +//
> +#define GUARDED_HEAP_MAP_BITS \
> + (1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
> + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT))
> +
> +//
> +// Bit indexing inside the whole L4 table (0 - 65535)
> +//
> +#define GUARDED_HEAP_MAP_BIT_INDEX(Address) \
> + (RShiftU64 (Address, EFI_PAGE_SHIFT) \
> + & ((1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
> + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)) - 1))
> +
> +//
> +// Memory address bit width tracked by L4 table: 10 + 6 + 12 = 28
> +//
> +#define GUARDED_HEAP_MAP_TABLE_SHIFT \
> +  (GUARDED_HEAP_MAP_ENTRY_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> + + EFI_PAGE_SHIFT)
> +
> +//
> +// Macro used to initialize the local array variable for map table traversing
> +// {55, 46, 37, 28, 18}
> +//
> +#define GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS \
> + { \
> +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 3, \
> +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 2, \
> +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT, \
> + GUARDED_HEAP_MAP_TABLE_SHIFT, \
> + EFI_PAGE_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> + }
> +
> +//
> +// Masks used to extract address range of each level of table
> +// {0x1FF, 0x1FF, 0x1FF, 0x1FF, 0x3FF}
> +//
> +#define GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS \
> + { \
> + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> + (1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1 \
> + }
> +
> +//
> +// Memory type to guard (matching the related PCD definition)
> +//
> +#define GUARD_HEAP_TYPE_POOL BIT2
> +#define GUARD_HEAP_TYPE_PAGE BIT3
> +
> +typedef struct {
> + UINT32 TailMark;
> + UINT32 HeadMark;
> + EFI_PHYSICAL_ADDRESS Address;
> + LIST_ENTRY Link;
> +} HEAP_GUARD_NODE;
> +
> +/**
> + Set head Guard and tail Guard for the given memory range
> +
> + @param[in] Memory Base address of memory to set guard for
> + @param[in] NumberOfPages Memory size in pages
> +
> + @return VOID
> +**/
> +VOID
> +SetGuardForMemory (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages
> + );
> +
> +/**
> + Unset head Guard and tail Guard for the given memory range
> +
> + @param[in] Memory Base address of memory to unset guard for
> + @param[in] NumberOfPages Memory size in pages
> +
> + @return VOID
> +**/
> +VOID
> +UnsetGuardForMemory (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages
> + );
> +
> +/**
> + Adjust the base and number of pages to really allocate according to Guard
> +
> + @param[in/out] Memory Base address of free memory
> + @param[in/out] NumberOfPages Size of memory to allocate
> +
> + @return VOID
> +**/
> +VOID
> +AdjustMemoryA (
> + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN OUT UINTN *NumberOfPages
> + );
> +
> +/**
> + Adjust the start address and number of pages to free according to Guard
> +
> +  The purpose of this function is to keep the shared Guard page with adjacent
> +  memory block if it's still in guard, or free it if it's no longer shared.
> +  Another purpose is to reserve pages as Guard pages in a partial page free
> +  situation.
> +
> + @param[in/out] Memory Base address of memory to free
> + @param[in/out] NumberOfPages Size of memory to free
> +
> + @return VOID
> +**/
> +VOID
> +AdjustMemoryF (
> + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN OUT UINTN *NumberOfPages
> + );
> +
> +/**
> + Check to see if the pool at the given address should be guarded or not
> +
> + @param[in] MemoryType Pool type to check
> +
> +
> + @return TRUE The given type of pool should be guarded
> + @return FALSE The given type of pool should not be guarded
> +**/
> +BOOLEAN
> +IsPoolTypeToGuard (
> + IN EFI_MEMORY_TYPE MemoryType
> + );
> +
> +/**
> + Check to see if the page at the given address should be guarded or not
> +
> + @param[in] MemoryType Page type to check
> + @param[in] AllocateType Allocation type to check
> +
> + @return TRUE The given type of page should be guarded
> + @return FALSE The given type of page should not be guarded
> +**/
> +BOOLEAN
> +IsPageTypeToGuard (
> + IN EFI_MEMORY_TYPE MemoryType,
> + IN EFI_ALLOCATE_TYPE AllocateType
> + );
> +
> +/**
> + Check to see if the page at the given address is guarded or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is guarded
> + @return FALSE The page at Address is not guarded
> +**/
> +BOOLEAN
> +EFIAPI
> +IsMemoryGuarded (
> + IN EFI_PHYSICAL_ADDRESS Address
> + );
> +
> +/**
> + Check to see if the page at the given address is a Guard page or not
> +
> + @param[in] Address The address to check for
> +
> + @return TRUE The page at Address is a Guard page
> + @return FALSE The page at Address is not a Guard page
> +**/
> +BOOLEAN
> +EFIAPI
> +IsGuardPage (
> + IN EFI_PHYSICAL_ADDRESS Address
> + );
> +
> +/**
> + Dump the guarded memory bit map
> +
> + @return VOID
> +**/
> +VOID
> +EFIAPI
> +DumpGuardedMemoryBitmap (
> + VOID
> + );
> +
> +/**
> +  Adjust the pool head position to make sure the Guard page is adjacent to
> + pool tail or pool head.
> +
> + @param[in] Memory Base address of memory allocated
> + @param[in] NoPages Number of pages actually allocated
> + @param[in] Size Size of memory requested
> + (plus pool head/tail overhead)
> +
> + @return Address of pool head
> +**/
> +VOID *
> +AdjustPoolHeadA (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NoPages,
> + IN UINTN Size
> + );
> +
> +/**
> + Get the page base address according to pool head address
> +
> + @param[in] Memory Head address of pool to free
> +
> + @return Address of pool head
> +**/
> +VOID *
> +AdjustPoolHeadF (
> + IN EFI_PHYSICAL_ADDRESS Memory
> + );
> +
> +/**
> + Helper function of memory allocation with Guard pages
> +
> + @param FreePageList The free page node.
> + @param NumberOfPages Number of pages to be allocated.
> + @param MaxAddress Request to allocate memory below this
> address.
> + @param MemoryType Type of memory requested.
> +
> + @return Memory address of allocated pages.
> +**/
> +UINTN
> +InternalAllocMaxAddressWithGuard (
> + IN OUT LIST_ENTRY *FreePageList,
> + IN UINTN NumberOfPages,
> + IN UINTN MaxAddress,
> + IN EFI_MEMORY_TYPE MemoryType
> + );
> +
> +/**
> + Helper function of memory free with Guard pages
> +
> + @param[in] Memory Base address of memory being freed.
> + @param[in] NumberOfPages The number of pages to free.
> + @param[in] AddRegion If this memory is new added region.
> +
> + @retval EFI_NOT_FOUND Could not find the entry that covers the
> range.
> + @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or
> NumberOfPages is zero.
> + @return EFI_SUCCESS Pages successfully freed.
> +**/
> +EFI_STATUS
> +SmmInternalFreePagesExWithGuard (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages,
> + IN BOOLEAN AddRegion
> + );
> +
> +/**
> + Check to see if the heap guard is enabled for page and/or pool allocation
> +
> + @return TRUE/FALSE
> +**/
> +BOOLEAN
> +IsHeapGuardEnabled (
> + VOID
> + );
> +
> +/**
> + Debug function used to verify if the Guard page is well set or not
> +
> + @param[in] BaseAddress Address of memory to check
> + @param[in] NumberOfPages Size of memory in pages
> +
> + @return TRUE The head Guard and tail Guard are both well set
> + @return FALSE The head Guard and/or tail Guard are not well set
> +**/
> +BOOLEAN
> +VerifyMemoryGuard (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINTN NumberOfPages
> + );
> +
> +extern BOOLEAN mOnGuarding;
> +
> +#endif
> diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> new file mode 100644
> index 0000000000..d41b3e923f
> --- /dev/null
> +++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> @@ -0,0 +1,704 @@
> +/** @file
> +
> +Copyright (c) 2016 - 2017, Intel Corporation. All rights reserved.<BR>
> +This program and the accompanying materials
> +are licensed and made available under the terms and conditions of the BSD
> License
> +which accompanies this distribution. The full text of the license may be
> found at
> +http://opensource.org/licenses/bsd-license.php
> +
> +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> BASIS,
> +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> EXPRESS OR IMPLIED.
> +
> +**/
> +
> +#include "PiSmmCore.h"
> +#include "PageTable.h"
> +
> +#include <Library/CpuLib.h>
> +
> +UINT64 mAddressEncMask = 0;
> +UINT8 mPhysicalAddressBits = 32;
> +
> +PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
> + {PageNone, 0, 0},
> + {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64},
> + {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64},
> + {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
> +};
> +
> +/**
> + Calculate the maximum support address.
> +
> + @return the maximum support address.
> +**/
> +UINT8
> +CalculateMaximumSupportAddress (
> + VOID
> + )
> +{
> + UINT32 RegEax;
> + UINT8 PhysicalAddressBits;
> + VOID *Hob;
> +
> + //
> + // Get physical address bits supported.
> + //
> + Hob = GetFirstHob (EFI_HOB_TYPE_CPU);
> + if (Hob != NULL) {
> + PhysicalAddressBits = ((EFI_HOB_CPU *) Hob)->SizeOfMemorySpace;
> + } else {
> + AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
> + if (RegEax >= 0x80000008) {
> + AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
> + PhysicalAddressBits = (UINT8) RegEax;
> + } else {
> + PhysicalAddressBits = 36;
> + }
> + }
> +
> + //
> + // IA-32e paging translates 48-bit linear addresses to 52-bit physical
> addresses.
> + //
> + ASSERT (PhysicalAddressBits <= 52);
> + if (PhysicalAddressBits > 48) {
> + PhysicalAddressBits = 48;
> + }
> + return PhysicalAddressBits;
> +}
> +
> +/**
> + Return page table base.
> +
> + @return page table base.
> +**/
> +UINTN
> +GetPageTableBase (
> + VOID
> + )
> +{
> + return (AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64);
> +}
> +
> +/**
> + Return length according to page attributes.
> +
> + @param[in] PageAttributes The page attribute of the page entry.
> +
> + @return The length of page entry.
> +**/
> +UINTN
> +PageAttributeToLength (
> + IN PAGE_ATTRIBUTE PageAttribute
> + )
> +{
> + if (PageAttribute <= Page1G) {
> + return (UINTN)mPageAttributeTable[PageAttribute].Length;
> + }
> + return 0;
> +}
> +
> +/**
> + Return address mask according to page attributes.
> +
> + @param[in] PageAttributes The page attribute of the page entry.
> +
> + @return The address mask of page entry.
> +**/
> +UINTN
> +PageAttributeToMask (
> + IN PAGE_ATTRIBUTE PageAttribute
> + )
> +{
> + if (PageAttribute <= Page1G) {
> + return (UINTN)mPageAttributeTable[PageAttribute].AddressMask;
> + }
> + return 0;
> +}
> +
> +/**
> + Return page table entry to match the address.
> +
> + @param[in] Address The address to be checked.
> + @param[out] PageAttributes The page attribute of the page entry.
> +
> + @return The page entry.
> +**/
> +VOID *
> +GetPageTableEntry (
> + IN PHYSICAL_ADDRESS Address,
> + OUT PAGE_ATTRIBUTE *PageAttribute
> + )
> +{
> + UINTN Index1;
> + UINTN Index2;
> + UINTN Index3;
> + UINTN Index4;
> + UINT64 *L1PageTable;
> + UINT64 *L2PageTable;
> + UINT64 *L3PageTable;
> + UINT64 *L4PageTable;
> +
> + Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_PAE_INDEX_MASK;
> + Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK;
> + Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK;
> + Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK;
> +
> + if (sizeof(UINTN) == sizeof(UINT64)) {
> + L4PageTable = (UINT64 *)GetPageTableBase ();
> + if (L4PageTable[Index4] == 0) {
> + *PageAttribute = PageNone;
> + return NULL;
> + }
> +
> +    L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> + } else {
> + L3PageTable = (UINT64 *)GetPageTableBase ();
> + }
> + if (L3PageTable[Index3] == 0) {
> + *PageAttribute = PageNone;
> + return NULL;
> + }
> + if ((L3PageTable[Index3] & IA32_PG_PS) != 0) {
> + // 1G
> + *PageAttribute = Page1G;
> + return &L3PageTable[Index3];
> + }
> +
> +  L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> + if (L2PageTable[Index2] == 0) {
> + *PageAttribute = PageNone;
> + return NULL;
> + }
> + if ((L2PageTable[Index2] & IA32_PG_PS) != 0) {
> + // 2M
> + *PageAttribute = Page2M;
> + return &L2PageTable[Index2];
> + }
> +
> + // 4k
> +  L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> + if ((L1PageTable[Index1] == 0) && (Address != 0)) {
> + *PageAttribute = PageNone;
> + return NULL;
> + }
> + *PageAttribute = Page4K;
> + return &L1PageTable[Index1];
> +}
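For a quick check of the index math in GetPageTableEntry (address chosen arbitrarily): Address = 0x12345678 gives

    Index4 = (0x12345678 >> 39) & 0x1FF = 0
    Index3 = (0x12345678 >> 30) & 0x1FF = 0
    Index2 = (0x12345678 >> 21) & 0x1FF = 0x91
    Index1 = (0x12345678 >> 12) & 0x1FF = 0x145

so the walk is PML4[0] -> PDPT[0] -> PD[0x91]; if that PDE has IA32_PG_PS set the function stops there and reports Page2M, otherwise it descends to PT[0x145] and reports Page4K.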
> +
> +/**
> + Return memory attributes of page entry.
> +
> + @param[in] PageEntry The page entry.
> +
> + @return Memory attributes of page entry.
> +**/
> +UINT64
> +GetAttributesFromPageEntry (
> + IN UINT64 *PageEntry
> + )
> +{
> + UINT64 Attributes;
> + Attributes = 0;
> + if ((*PageEntry & IA32_PG_P) == 0) {
> + Attributes |= EFI_MEMORY_RP;
> + }
> + if ((*PageEntry & IA32_PG_RW) == 0) {
> + Attributes |= EFI_MEMORY_RO;
> + }
> + if ((*PageEntry & IA32_PG_NX) != 0) {
> + Attributes |= EFI_MEMORY_XP;
> + }
> + return Attributes;
> +}
> +
> +/**
> + Modify memory attributes of page entry.
> +
> + @param[in] PageEntry The page entry.
> + @param[in] Attributes The bit mask of attributes to modify for the
> memory region.
> + @param[in] IsSet TRUE means to set attributes. FALSE means to
> clear attributes.
> + @param[out] IsModified TRUE means page table modified. FALSE
> means page table not modified.
> +**/
> +VOID
> +ConvertPageEntryAttribute (
> + IN UINT64 *PageEntry,
> + IN UINT64 Attributes,
> + IN BOOLEAN IsSet,
> + OUT BOOLEAN *IsModified
> + )
> +{
> + UINT64 CurrentPageEntry;
> + UINT64 NewPageEntry;
> +
> + CurrentPageEntry = *PageEntry;
> + NewPageEntry = CurrentPageEntry;
> + if ((Attributes & EFI_MEMORY_RP) != 0) {
> + if (IsSet) {
> + NewPageEntry &= ~(UINT64)IA32_PG_P;
> + } else {
> + NewPageEntry |= IA32_PG_P;
> + }
> + }
> + if ((Attributes & EFI_MEMORY_RO) != 0) {
> + if (IsSet) {
> + NewPageEntry &= ~(UINT64)IA32_PG_RW;
> + } else {
> + NewPageEntry |= IA32_PG_RW;
> + }
> + }
> + if ((Attributes & EFI_MEMORY_XP) != 0) {
> + if (IsSet) {
> + NewPageEntry |= IA32_PG_NX;
> + } else {
> + NewPageEntry &= ~IA32_PG_NX;
> + }
> + }
> +
> + if (CurrentPageEntry != NewPageEntry) {
> + *PageEntry = NewPageEntry;
> + *IsModified = TRUE;
> + DEBUG ((DEBUG_INFO, "(SMM)ConvertPageEntryAttribute 0x%lx",
> CurrentPageEntry));
> + DEBUG ((DEBUG_INFO, "->0x%lx\n", NewPageEntry));
> + } else {
> + *IsModified = FALSE;
> + }
> +}
> +
> +/**
> + This function returns if there is need to split page entry.
> +
> + @param[in] BaseAddress The base address to be checked.
> + @param[in] Length The length to be checked.
> + @param[in] PageEntry The page entry to be checked.
> + @param[in] PageAttribute The page attribute of the page entry.
> +
> + @retval SplitAttributes on if there is need to split page entry.
> +**/
> +PAGE_ATTRIBUTE
> +NeedSplitPage (
> + IN PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 *PageEntry,
> + IN PAGE_ATTRIBUTE PageAttribute
> + )
> +{
> + UINT64 PageEntryLength;
> +
> + PageEntryLength = PageAttributeToLength (PageAttribute);
> +
> + if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >=
> PageEntryLength)) {
> + return PageNone;
> + }
> +
> + if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
> + return Page4K;
> + }
> +
> + return Page2M;
> +}
> +
> +/**
> + This function splits one page entry to small page entries.
> +
> + @param[in] PageEntry The page entry to be splitted.
> + @param[in] PageAttribute The page attribute of the page entry.
> + @param[in] SplitAttribute How to split the page entry.
> +
> + @retval RETURN_SUCCESS The page entry is splitted.
> + @retval RETURN_UNSUPPORTED The page entry does not support to
> be splitted.
> + @retval RETURN_OUT_OF_RESOURCES No resource to split page entry.
> +**/
> +RETURN_STATUS
> +SplitPage (
> + IN UINT64 *PageEntry,
> + IN PAGE_ATTRIBUTE PageAttribute,
> + IN PAGE_ATTRIBUTE SplitAttribute
> + )
> +{
> + UINT64 BaseAddress;
> + UINT64 *NewPageEntry;
> + UINTN Index;
> +
> + ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
> +
> + if (PageAttribute == Page2M) {
> + //
> + // Split 2M to 4K
> + //
> + ASSERT (SplitAttribute == Page4K);
> + if (SplitAttribute == Page4K) {
> + NewPageEntry = PageAlloc (1);
> + DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
> + if (NewPageEntry == NULL) {
> + return RETURN_OUT_OF_RESOURCES;
> + }
> + BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
> + for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> + NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) |
> mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
> + }
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> PAGE_ATTRIBUTE_BITS;
> + return RETURN_SUCCESS;
> + } else {
> + return RETURN_UNSUPPORTED;
> + }
> + } else if (PageAttribute == Page1G) {
> + //
> + // Split 1G to 2M
> + // No need support 1G->4K directly, we should use 1G->2M, then 2M->4K
> to get more compact page table.
> + //
> + ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
> + if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
> + NewPageEntry = PageAlloc (1);
> + DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
> + if (NewPageEntry == NULL) {
> + return RETURN_OUT_OF_RESOURCES;
> + }
> + BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
> + for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> + NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) |
> mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
> + }
> + (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask |
> PAGE_ATTRIBUTE_BITS;
> + return RETURN_SUCCESS;
> + } else {
> + return RETURN_UNSUPPORTED;
> + }
> + } else {
> + return RETURN_UNSUPPORTED;
> + }
> +}
> +
> +/**
> + This function modifies the page attributes for the memory region specified
> by BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + Caller should make sure BaseAddress and Length are page aligned.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to modify for the
> memory region.
> + @param[in] IsSet TRUE means to set attributes. FALSE means to
> clear attributes.
> + @param[out] IsSplitted TRUE means page table splitted. FALSE means
> page table not splitted.
> + @param[out] IsModified TRUE means page table modified. FALSE
> means page table not modified.
> +
> + @retval RETURN_SUCCESS The attributes were modified for the
> memory region.
> + @retval RETURN_ACCESS_DENIED The attributes for the memory
> resource range specified by
> + BaseAddress and Length cannot be modified.
> + @retval RETURN_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes
> that
> + cannot be set together.
> + @retval RETURN_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval RETURN_UNSUPPORTED The processor does not support one
> or more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +**/
> +RETURN_STATUS
> +EFIAPI
> +ConvertMemoryPageAttributes (
> + IN PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes,
> + IN BOOLEAN IsSet,
> + OUT BOOLEAN *IsSplitted, OPTIONAL
> + OUT BOOLEAN *IsModified OPTIONAL
> + )
> +{
> + UINT64 *PageEntry;
> + PAGE_ATTRIBUTE PageAttribute;
> + UINTN PageEntryLength;
> + PAGE_ATTRIBUTE SplitAttribute;
> + RETURN_STATUS Status;
> + BOOLEAN IsEntryModified;
> + EFI_PHYSICAL_ADDRESS MaximumSupportMemAddress;
> +
> + ASSERT (Attributes != 0);
> + ASSERT ((Attributes & ~(EFI_MEMORY_RP | EFI_MEMORY_RO |
> EFI_MEMORY_XP)) == 0);
> +
> + ASSERT ((BaseAddress & (SIZE_4KB - 1)) == 0);
> + ASSERT ((Length & (SIZE_4KB - 1)) == 0);
> +
> + if (Length == 0) {
> + return RETURN_INVALID_PARAMETER;
> + }
> +
> + MaximumSupportMemAddress =
> (EFI_PHYSICAL_ADDRESS)(UINTN)(LShiftU64 (1, mPhysicalAddressBits) - 1);
> + if (BaseAddress > MaximumSupportMemAddress) {
> + return RETURN_UNSUPPORTED;
> + }
> + if (Length > MaximumSupportMemAddress) {
> + return RETURN_UNSUPPORTED;
> + }
> + if ((Length != 0) && (BaseAddress > MaximumSupportMemAddress -
> (Length - 1))) {
> + return RETURN_UNSUPPORTED;
> + }
> +
> +// DEBUG ((DEBUG_ERROR, "ConvertMemoryPageAttributes(%x) -
> %016lx, %016lx, %02lx\n", IsSet, BaseAddress, Length, Attributes));
> +
> + if (IsSplitted != NULL) {
> + *IsSplitted = FALSE;
> + }
> + if (IsModified != NULL) {
> + *IsModified = FALSE;
> + }
> +
> + //
> + // Below logic is to check 2M/4K page to make sure we do not waste
> memory.
> + //
> + while (Length != 0) {
> + PageEntry = GetPageTableEntry (BaseAddress, &PageAttribute);
> + if (PageEntry == NULL) {
> + return RETURN_UNSUPPORTED;
> + }
> + PageEntryLength = PageAttributeToLength (PageAttribute);
> + SplitAttribute = NeedSplitPage (BaseAddress, Length, PageEntry,
> PageAttribute);
> + if (SplitAttribute == PageNone) {
> + ConvertPageEntryAttribute (PageEntry, Attributes, IsSet,
> &IsEntryModified);
> + if (IsEntryModified) {
> + if (IsModified != NULL) {
> + *IsModified = TRUE;
> + }
> + }
> + //
> + // Convert success, move to next
> + //
> + BaseAddress += PageEntryLength;
> + Length -= PageEntryLength;
> + } else {
> + Status = SplitPage (PageEntry, PageAttribute, SplitAttribute);
> + if (RETURN_ERROR (Status)) {
> + return RETURN_UNSUPPORTED;
> + }
> + if (IsSplitted != NULL) {
> + *IsSplitted = TRUE;
> + }
> + if (IsModified != NULL) {
> + *IsModified = TRUE;
> + }
> + //
> + // Just split current page
> + // Conversion will succeed in the next round
> + //
> + }
> + }
> +
> + return RETURN_SUCCESS;
> +}
> +
> +/**
> + FlushTlb on current processor.
> +
> + @param[in,out] Buffer Pointer to private data buffer.
> +**/
> +VOID
> +EFIAPI
> +FlushTlbOnCurrentProcessor (
> + IN OUT VOID *Buffer
> + )
> +{
> + CpuFlushTlb ();
> +}
> +
> +/**
> + FlushTlb for all processors.
> +**/
> +VOID
> +FlushTlbForAll (
> + VOID
> + )
> +{
> + UINTN Index;
> +
> + FlushTlbOnCurrentProcessor (NULL);
> +
> + if (gSmmCoreSmst.SmmStartupThisAp == NULL) {
> + DEBUG ((DEBUG_WARN, "Cannot flush TLB for APs\r\n"));
> + return;
> + }
> +
> + for (Index = 0; Index < gSmmCoreSmst.NumberOfCpus; Index++) {
> + if (Index != gSmmCoreSmst.CurrentlyExecutingCpu) {
> + // Force to start up AP in blocking mode,
> + gSmmCoreSmst.SmmStartupThisAp (FlushTlbOnCurrentProcessor, Index,
> NULL);
> + // Do not check return status, because AP might not be present in some
> corner cases.
> + }
> + }
> +}
> +
> +/**
> + This function sets the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to set for the
> memory region.
> + @param[out] IsSplitted TRUE means page table splitted. FALSE means
> page table not splitted.
> +
> + @retval EFI_SUCCESS The attributes were set for the memory region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmSetMemoryAttributesEx (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes,
> + OUT BOOLEAN *IsSplitted OPTIONAL
> + )
> +{
> + EFI_STATUS Status;
> + BOOLEAN IsModified;
> +
> + Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes,
> TRUE, IsSplitted, &IsModified);
> + if (!EFI_ERROR(Status)) {
> + if (IsModified) {
> + //
> + // Flush TLB as last step
> + //
> + FlushTlbForAll();
> + }
> + }
> +
> + return Status;
> +}
> +
> +/**
> + This function clears the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to clear for the
> memory region.
> + @param[out] IsSplitted TRUE means page table splitted. FALSE means
> page table not splitted.
> +
> + @retval EFI_SUCCESS The attributes were cleared for the memory
> region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmClearMemoryAttributesEx (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes,
> + OUT BOOLEAN *IsSplitted OPTIONAL
> + )
> +{
> + EFI_STATUS Status;
> + BOOLEAN IsModified;
> +
> + Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes,
> FALSE, IsSplitted, &IsModified);
> + if (!EFI_ERROR(Status)) {
> + if (IsModified) {
> + //
> + // Flush TLB as last step
> + //
> + FlushTlbForAll();
> + }
> + }
> +
> + return Status;
> +}
> +
> +/**
> + This function sets the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to set for the memory
> region.
> +
> + @retval EFI_SUCCESS The attributes were set for the memory region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmSetMemoryAttributes (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes
> + )
> +{
> + return SmmSetMemoryAttributesEx (BaseAddress, Length, Attributes,
> NULL);
> +}
> +
> +/**
> + This function clears the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to clear for the
> memory region.
> +
> + @retval EFI_SUCCESS The attributes were cleared for the memory
> region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmClearMemoryAttributes (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes
> + )
> +{
> + return SmmClearMemoryAttributesEx (BaseAddress, Length, Attributes,
> NULL);
> +}
> +
> +/**
> + Initialize the Page Table lib.
> +**/
> +VOID
> +InitializePageTableLib (
> + VOID
> + )
> +{
> + mAddressEncMask = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask)
> & PAGING_1G_ADDRESS_MASK_64;
> + mPhysicalAddressBits = CalculateMaximumSupportAddress ();
> + DEBUG ((DEBUG_INFO, "mAddressEncMask = 0x%lx\r\n",
> mAddressEncMask));
> + DEBUG ((DEBUG_INFO, "mPhysicalAddressBits = %d\r\n",
> mPhysicalAddressBits));
> + return ;
> +}
> +
> diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> new file mode 100644
> index 0000000000..61a64af370
> --- /dev/null
> +++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> @@ -0,0 +1,174 @@
> +/** @file
> + Page table management header file.
> +
> + Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials
> + are licensed and made available under the terms and conditions of the BSD
> License
> + which accompanies this distribution. The full text of the license may be
> found at
> + http://opensource.org/licenses/bsd-license.php
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> EXPRESS OR IMPLIED.
> +
> +**/
> +
> +#ifndef _PAGE_TABLE_LIB_H_
> +#define _PAGE_TABLE_LIB_H_
> +
> +///
> +/// Page Table Entry
> +///
> +#define IA32_PG_P BIT0
> +#define IA32_PG_RW BIT1
> +#define IA32_PG_U BIT2
> +#define IA32_PG_WT BIT3
> +#define IA32_PG_CD BIT4
> +#define IA32_PG_A BIT5
> +#define IA32_PG_D BIT6
> +#define IA32_PG_PS BIT7
> +#define IA32_PG_PAT_2M BIT12
> +#define IA32_PG_PAT_4K IA32_PG_PS
> +#define IA32_PG_PMNT BIT62
> +#define IA32_PG_NX BIT63
> +
> +#define PAGE_ATTRIBUTE_BITS (IA32_PG_D | IA32_PG_A |
> IA32_PG_U | IA32_PG_RW | IA32_PG_P)
> +//
> +// Bits 1, 2, 5, 6 are reserved in the IA32 PAE PDPTE
> +// X64 PAE PDPTE does not have such restriction
> +//
> +#define IA32_PAE_PDPTE_ATTRIBUTE_BITS (IA32_PG_P)
> +
> +#define PAGE_PROGATE_BITS (IA32_PG_NX | PAGE_ATTRIBUTE_BITS)
> +
> +#define PAGING_4K_MASK 0xFFF
> +#define PAGING_2M_MASK 0x1FFFFF
> +#define PAGING_1G_MASK 0x3FFFFFFF
> +
> +#define PAGING_PAE_INDEX_MASK 0x1FF
> +
> +#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
> +#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
> +#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
> +
> +#define SMRR_MAX_ADDRESS BASE_4GB
> +
> +typedef enum {
> + PageNone = 0,
> + Page4K,
> + Page2M,
> + Page1G,
> +} PAGE_ATTRIBUTE;
> +
> +typedef struct {
> + PAGE_ATTRIBUTE Attribute;
> + UINT64 Length;
> + UINT64 AddressMask;
> +} PAGE_ATTRIBUTE_TABLE;
> +
> +/**
> + Helper function to allocate pages without Guard for internal uses
> +
> + @param[in] Pages Page number
> +
> + @return Address of memory allocated
> +**/
> +VOID *
> +PageAlloc (
> + IN UINTN Pages
> + );
> +
> +/**
> + This function sets the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to set for the
> memory region.
> + @param[out] IsSplitted TRUE means page table splitted. FALSE means
> page table not splitted.
> +
> + @retval EFI_SUCCESS The attributes were set for the memory region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmSetMemoryAttributes (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes
> + );
> +
> +/**
> + This function clears the attributes for the memory region specified by
> BaseAddress and
> + Length from their current attributes to the attributes specified by
> Attributes.
> +
> + @param[in] BaseAddress The physical address that is the start address
> of a memory region.
> + @param[in] Length The size in bytes of the memory region.
> + @param[in] Attributes The bit mask of attributes to clear for the
> memory region.
> + @param[out] IsSplitted TRUE means page table splitted. FALSE means
> page table not splitted.
> +
> + @retval EFI_SUCCESS The attributes were cleared for the memory
> region.
> + @retval EFI_ACCESS_DENIED The attributes for the memory resource
> range specified by
> + BaseAddress and Length cannot be modified.
> + @retval EFI_INVALID_PARAMETER Length is zero.
> + Attributes specified an illegal combination of attributes that
> + cannot be set together.
> + @retval EFI_OUT_OF_RESOURCES There are not enough system
> resources to modify the attributes of
> + the memory resource range.
> + @retval EFI_UNSUPPORTED The processor does not support one or
> more bytes of the memory
> + resource range specified by BaseAddress and Length.
> + The bit mask of attributes is not support for the memory
> resource
> + range specified by BaseAddress and Length.
> +
> +**/
> +EFI_STATUS
> +EFIAPI
> +SmmClearMemoryAttributes (
> + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> + IN UINT64 Length,
> + IN UINT64 Attributes
> + );
> +
> +/**
> + Initialize the Page Table lib.
> +**/
> +VOID
> +InitializePageTableLib (
> + VOID
> + );
> +
> +/**
> + Return page table base.
> +
> + @return page table base.
> +**/
> +UINTN
> +GetPageTableBase (
> + VOID
> + );
> +
> +/**
> + Return page table entry to match the address.
> +
> + @param[in] Address The address to be checked.
> + @param[out] PageAttributes The page attribute of the page entry.
> +
> + @return The page entry.
> +**/
> +VOID *
> +GetPageTableEntry (
> + IN PHYSICAL_ADDRESS Address,
> + OUT PAGE_ATTRIBUTE *PageAttribute
> + );
> +
> +#endif
> diff --git a/MdeModulePkg/Core/PiSmmCore/Page.c
> b/MdeModulePkg/Core/PiSmmCore/Page.c
> index 4154c2e6a1..29d1311f5a 100644
> --- a/MdeModulePkg/Core/PiSmmCore/Page.c
> +++ b/MdeModulePkg/Core/PiSmmCore/Page.c
> @@ -64,6 +64,8 @@ LIST_ENTRY mFreeMemoryMapEntryList =
> INITIALIZE_LIST_HEAD_VARIABLE (mFreeMemor
> @param[out] Memory A pointer to receive the base allocated
> memory
> address.
> @param[in] AddRegion If this memory is new added region.
> + @param[in] NeedGuard Flag to indicate Guard page is needed
> + or not
>
> @retval EFI_INVALID_PARAMETER Parameters violate checking rules
> defined in spec.
> @retval EFI_NOT_FOUND Could not allocate pages match the
> requirement.
> @@ -77,7 +79,8 @@ SmmInternalAllocatePagesEx (
> IN EFI_MEMORY_TYPE MemoryType,
> IN UINTN NumberOfPages,
> OUT EFI_PHYSICAL_ADDRESS *Memory,
> - IN BOOLEAN AddRegion
> + IN BOOLEAN AddRegion,
> + IN BOOLEAN NeedGuard
> );
>
> /**
> @@ -112,7 +115,8 @@ AllocateMemoryMapEntry (
> EfiRuntimeServicesData,
> EFI_SIZE_TO_PAGES (RUNTIME_PAGE_ALLOCATION_GRANULARITY),
> &Mem,
> - TRUE
> + TRUE,
> + FALSE
> );
> ASSERT_EFI_ERROR (Status);
> if(!EFI_ERROR (Status)) {
> @@ -688,6 +692,8 @@ InternalAllocAddress (
> @param[out] Memory A pointer to receive the base allocated
> memory
> address.
> @param[in] AddRegion If this memory is new added region.
> + @param[in] NeedGuard Flag to indicate Guard page is needed
> + or not
>
> @retval EFI_INVALID_PARAMETER Parameters violate checking rules
> defined in spec.
> @retval EFI_NOT_FOUND Could not allocate pages match the
> requirement.
> @@ -701,7 +707,8 @@ SmmInternalAllocatePagesEx (
> IN EFI_MEMORY_TYPE MemoryType,
> IN UINTN NumberOfPages,
> OUT EFI_PHYSICAL_ADDRESS *Memory,
> - IN BOOLEAN AddRegion
> + IN BOOLEAN AddRegion,
> + IN BOOLEAN NeedGuard
> )
> {
> UINTN RequestedAddress;
> @@ -723,6 +730,21 @@ SmmInternalAllocatePagesEx (
> case AllocateAnyPages:
> RequestedAddress = (UINTN)(-1);
> case AllocateMaxAddress:
> + if (NeedGuard) {
> + *Memory = InternalAllocMaxAddressWithGuard (
> + &mSmmMemoryMap,
> + NumberOfPages,
> + RequestedAddress,
> + MemoryType
> + );
> + if (*Memory == (UINTN)-1) {
> + return EFI_OUT_OF_RESOURCES;
> + } else {
> + ASSERT (VerifyMemoryGuard(*Memory, NumberOfPages) == TRUE);
> + return EFI_SUCCESS;
> + }
> + }
> +
> *Memory = InternalAllocMaxAddress (
> &mSmmMemoryMap,
> NumberOfPages,
> @@ -766,6 +788,8 @@ SmmInternalAllocatePagesEx (
> @param[in] NumberOfPages The number of pages to allocate.
> @param[out] Memory A pointer to receive the base allocated
> memory
> address.
> + @param[in] NeedGuard Flag to indicate Guard page is needed
> + or not
>
> @retval EFI_INVALID_PARAMETER Parameters violate checking rules
> defined in spec.
> @retval EFI_NOT_FOUND Could not allocate pages match the
> requirement.
> @@ -779,10 +803,12 @@ SmmInternalAllocatePages (
> IN EFI_ALLOCATE_TYPE Type,
> IN EFI_MEMORY_TYPE MemoryType,
> IN UINTN NumberOfPages,
> - OUT EFI_PHYSICAL_ADDRESS *Memory
> + OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN BOOLEAN NeedGuard
> )
> {
> - return SmmInternalAllocatePagesEx (Type, MemoryType, NumberOfPages,
> Memory, FALSE);
> + return SmmInternalAllocatePagesEx (Type, MemoryType,
> NumberOfPages, Memory,
> + FALSE, NeedGuard);
> }
>
> /**
> @@ -811,8 +837,11 @@ SmmAllocatePages (
> )
> {
> EFI_STATUS Status;
> + BOOLEAN NeedGuard;
>
> - Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages,
> Memory);
> + NeedGuard = IsPageTypeToGuard (MemoryType, Type);
> + Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages,
> Memory,
> + NeedGuard);
> if (!EFI_ERROR (Status)) {
> SmmCoreUpdateProfile (
> (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> @@ -941,9 +970,13 @@ EFI_STATUS
> EFIAPI
> SmmInternalFreePages (
> IN EFI_PHYSICAL_ADDRESS Memory,
> - IN UINTN NumberOfPages
> + IN UINTN NumberOfPages,
> + IN BOOLEAN IsGuarded
> )
> {
> + if (IsGuarded) {
> + return SmmInternalFreePagesExWithGuard (Memory, NumberOfPages,
> FALSE);
> + }
> return SmmInternalFreePagesEx (Memory, NumberOfPages, FALSE);
> }
>
> @@ -966,8 +999,10 @@ SmmFreePages (
> )
> {
> EFI_STATUS Status;
> + BOOLEAN IsGuarded;
>
> - Status = SmmInternalFreePages (Memory, NumberOfPages);
> + IsGuarded = IsHeapGuardEnabled () && IsMemoryGuarded (Memory);
> + Status = SmmInternalFreePages (Memory, NumberOfPages, IsGuarded);
> if (!EFI_ERROR (Status)) {
> SmmCoreUpdateProfile (
> (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> index 9e4390e15a..b4609c2fed 100644
> --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> @@ -451,6 +451,11 @@ SmmEntryPoint (
> //
> PlatformHookBeforeSmmDispatch ();
>
> + //
> + // Call memory management hook function
> + //
> + SmmEntryPointMemoryManagementHook ();
> +
> //
> // If a legacy boot has occured, then make sure gSmmCorePrivate is not
> accessed
> //
> @@ -644,7 +649,12 @@ SmmMain (
> //
> gSmmCorePrivate->Smst = &gSmmCoreSmst;
> gSmmCorePrivate->SmmEntryPoint = SmmEntryPoint;
> -
> +
> + //
> + // Initialize page table operations
> + //
> + InitializePageTableLib();
> +
> //
> // No need to initialize memory service.
> // It is done in constructor of PiSmmCoreMemoryAllocationLib(),
> diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> index b6f815c68d..8c61fdcf0c 100644
> --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> @@ -59,6 +59,7 @@
> #include <Library/SmmMemLib.h>
>
> #include "PiSmmCorePrivateData.h"
> +#include "Misc/HeapGuard.h"
>
> //
> // Used to build a table of SMI Handlers that the SMM Core registers
> @@ -317,6 +318,7 @@ SmmAllocatePages (
> @param NumberOfPages The number of pages to allocate
> @param Memory A pointer to receive the base allocated memory
> address
> + @param NeedGuard Flag to indicate Guard page is needed or not
>
> @retval EFI_INVALID_PARAMETER Parameters violate checking rules
> defined in spec.
> @retval EFI_NOT_FOUND Could not allocate pages match the
> requirement.
> @@ -330,7 +332,8 @@ SmmInternalAllocatePages (
> IN EFI_ALLOCATE_TYPE Type,
> IN EFI_MEMORY_TYPE MemoryType,
> IN UINTN NumberOfPages,
> - OUT EFI_PHYSICAL_ADDRESS *Memory
> + OUT EFI_PHYSICAL_ADDRESS *Memory,
> + IN BOOLEAN NeedGuard
> );
>
> /**
> @@ -356,6 +359,8 @@ SmmFreePages (
>
> @param Memory Base address of memory being freed
> @param NumberOfPages The number of pages to free
> + @param IsGuarded Flag to indicate if the memory is guarded
> + or not
>
> @retval EFI_NOT_FOUND Could not find the entry that covers the
> range
> @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or
> NumberOfPages is zero.
> @@ -366,7 +371,8 @@ EFI_STATUS
> EFIAPI
> SmmInternalFreePages (
> IN EFI_PHYSICAL_ADDRESS Memory,
> - IN UINTN NumberOfPages
> + IN UINTN NumberOfPages,
> + IN BOOLEAN IsGuarded
> );
>
> /**
> @@ -1231,4 +1237,74 @@ typedef enum {
>
> extern LIST_ENTRY
> mSmmPoolLists[SmmPoolTypeMax][MAX_POOL_INDEX];
>
> +/**
> + Internal Function. Allocate n pages from given free page node.
> +
> + @param Pages The free page node.
> + @param NumberOfPages Number of pages to be allocated.
> + @param MaxAddress Request to allocate memory below this
> address.
> +
> + @return Memory address of allocated pages.
> +
> +**/
> +UINTN
> +InternalAllocPagesOnOneNode (
> + IN OUT FREE_PAGE_LIST *Pages,
> + IN UINTN NumberOfPages,
> + IN UINTN MaxAddress
> + );
> +
> +/**
> + Update SMM memory map entry.
> +
> + @param[in] Type The type of allocation to perform.
> + @param[in] Memory The base of memory address.
> + @param[in] NumberOfPages The number of pages to allocate.
> + @param[in] AddRegion If this memory is new added region.
> +**/
> +VOID
> +ConvertSmmMemoryMapEntry (
> + IN EFI_MEMORY_TYPE Type,
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages,
> + IN BOOLEAN AddRegion
> + );
> +
> +/**
> + Internal function. Moves any memory descriptors that are on the
> + temporary descriptor stack to heap.
> +
> +**/
> +VOID
> +CoreFreeMemoryMapStack (
> + VOID
> + );
> +
> +/**
> + Frees previous allocated pages.
> +
> + @param[in] Memory Base address of memory being freed.
> + @param[in] NumberOfPages The number of pages to free.
> + @param[in] AddRegion If this memory is new added region.
> +
> + @retval EFI_NOT_FOUND Could not find the entry that covers the
> range.
> + @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or
> NumberOfPages is zero.
> + @return EFI_SUCCESS Pages successfully freed.
> +
> +**/
> +EFI_STATUS
> +SmmInternalFreePagesEx (
> + IN EFI_PHYSICAL_ADDRESS Memory,
> + IN UINTN NumberOfPages,
> + IN BOOLEAN AddRegion
> + );
> +
> +/**
> + Hook function used to set all Guard pages after entering SMM mode
> +**/
> +VOID
> +SmmEntryPointMemoryManagementHook (
> + VOID
> + );
> +
> #endif
> diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> index 49ae6fbb57..e505b165bc 100644
> --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> @@ -40,6 +40,8 @@
> SmramProfileRecord.c
> MemoryAttributesTable.c
> SmiHandlerProfile.c
> + Misc/HeapGuard.c
> + Misc/PageTable.c
>
> [Packages]
> MdePkg/MdePkg.dec
> @@ -65,6 +67,7 @@
> HobLib
> SmmMemLib
> DxeServicesLib
> + CpuLib
>
> [Protocols]
> gEfiDxeSmmReadyToLockProtocolGuid ## UNDEFINED #
> SmiHandlerRegister
> @@ -88,6 +91,7 @@
> gEfiSmmGpiDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> gEfiSmmIoTrapDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> gEfiSmmUsbDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> + gEfiSmmCpuProtocolGuid ## SOMETIMES_CONSUMES
>
> [Pcd]
>
> gEfiMdeModulePkgTokenSpaceGuid.PcdLoadFixAddressSmmCodePageNum
> ber ## SOMETIMES_CONSUMES
> @@ -96,6 +100,10 @@
> gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfilePropertyMask
> ## CONSUMES
> gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfileDriverPath
> ## CONSUMES
> gEfiMdeModulePkgTokenSpaceGuid.PcdSmiHandlerProfilePropertyMask
> ## CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType ##
> CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType ##
> CONSUMES
> + gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> ## CONSUMES
> +
> gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM
> ask ## CONSUMES
>
> [Guids]
> gAprioriGuid ## SOMETIMES_CONSUMES ## File
> diff --git a/MdeModulePkg/Core/PiSmmCore/Pool.c
> b/MdeModulePkg/Core/PiSmmCore/Pool.c
> index 36317563c4..1f9213ea6e 100644
> --- a/MdeModulePkg/Core/PiSmmCore/Pool.c
> +++ b/MdeModulePkg/Core/PiSmmCore/Pool.c
> @@ -144,7 +144,9 @@ InternalAllocPoolByIndex (
> Status = EFI_SUCCESS;
> Hdr = NULL;
> if (PoolIndex == MAX_POOL_INDEX) {
> - Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType,
> EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1), &Address);
> + Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType,
> + EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1),
> + &Address, FALSE);
> if (EFI_ERROR (Status)) {
> return EFI_OUT_OF_RESOURCES;
> }
> @@ -243,6 +245,9 @@ SmmInternalAllocatePool (
> EFI_STATUS Status;
> EFI_PHYSICAL_ADDRESS Address;
> UINTN PoolIndex;
> + BOOLEAN HasPoolTail;
> + BOOLEAN NeedGuard;
> + UINTN NoPages;
>
> Address = 0;
>
> @@ -251,25 +256,45 @@ SmmInternalAllocatePool (
> return EFI_INVALID_PARAMETER;
> }
>
> + NeedGuard = IsPoolTypeToGuard (PoolType);
> + HasPoolTail = !(NeedGuard &&
> + ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +
> //
> // Adjust the size by the pool header & tail overhead
> //
> Size += POOL_OVERHEAD;
> - if (Size > MAX_POOL_SIZE) {
> - Size = EFI_SIZE_TO_PAGES (Size);
> - Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, Size,
> &Address);
> + if (Size > MAX_POOL_SIZE || NeedGuard) {
> + if (!HasPoolTail) {
> + Size -= sizeof (POOL_TAIL);
> + }
> +
> + NoPages = EFI_SIZE_TO_PAGES (Size);
> + Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType,
> NoPages,
> + &Address, NeedGuard);
> if (EFI_ERROR (Status)) {
> return Status;
> }
>
> + if (NeedGuard) {
> + ASSERT (VerifyMemoryGuard(Address, NoPages) == TRUE);
> + DEBUG ((DEBUG_INFO, "SmmInternalAllocatePool: %lx ->", Address));
> + Address = (EFI_PHYSICAL_ADDRESS)AdjustPoolHeadA (Address,
> NoPages, Size);
> + DEBUG ((DEBUG_INFO, " %lx %d %x\r\n", Address, NoPages, Size));
> + }
> +
> PoolHdr = (POOL_HEADER*)(UINTN)Address;
> PoolHdr->Signature = POOL_HEAD_SIGNATURE;
> - PoolHdr->Size = EFI_PAGES_TO_SIZE (Size);
> + PoolHdr->Size = Size; //EFI_PAGES_TO_SIZE (NoPages)
> PoolHdr->Available = FALSE;
> PoolHdr->Type = PoolType;
> - PoolTail = HEAD_TO_TAIL(PoolHdr);
> - PoolTail->Signature = POOL_TAIL_SIGNATURE;
> - PoolTail->Size = PoolHdr->Size;
> +
> + if (HasPoolTail) {
> + PoolTail = HEAD_TO_TAIL (PoolHdr);
> + PoolTail->Signature = POOL_TAIL_SIGNATURE;
> + PoolTail->Size = PoolHdr->Size;
> + }
> +
> *Buffer = PoolHdr + 1;
> return Status;
> }
> @@ -341,28 +366,45 @@ SmmInternalFreePool (
> {
> FREE_POOL_HEADER *FreePoolHdr;
> POOL_TAIL *PoolTail;
> + BOOLEAN HasPoolTail;
> + BOOLEAN MemoryGuarded;
>
> if (Buffer == NULL) {
> return EFI_INVALID_PARAMETER;
> }
>
> + MemoryGuarded = IsHeapGuardEnabled () &&
> + IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer);
> + HasPoolTail = !(MemoryGuarded &&
> + ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> +
> FreePoolHdr = (FREE_POOL_HEADER*)((POOL_HEADER*)Buffer - 1);
> ASSERT (FreePoolHdr->Header.Signature == POOL_HEAD_SIGNATURE);
> ASSERT (!FreePoolHdr->Header.Available);
> - PoolTail = HEAD_TO_TAIL(&FreePoolHdr->Header);
> - ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
> - ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
> -
> if (FreePoolHdr->Header.Signature != POOL_HEAD_SIGNATURE) {
> return EFI_INVALID_PARAMETER;
> }
>
> - if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
> - return EFI_INVALID_PARAMETER;
> + if (HasPoolTail) {
> + PoolTail = HEAD_TO_TAIL (&FreePoolHdr->Header);
> + ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
> + ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
> + if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
> + return EFI_INVALID_PARAMETER;
> + }
> +
> + if (FreePoolHdr->Header.Size != PoolTail->Size) {
> + return EFI_INVALID_PARAMETER;
> + }
> }
>
> - if (FreePoolHdr->Header.Size != PoolTail->Size) {
> - return EFI_INVALID_PARAMETER;
> + if (MemoryGuarded) {
> + Buffer = AdjustPoolHeadF
> ((EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr);
> + return SmmInternalFreePages (
> + (EFI_PHYSICAL_ADDRESS)(UINTN)Buffer,
> + EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
> + TRUE
> + );
> }
>
> if (FreePoolHdr->Header.Size > MAX_POOL_SIZE) {
> @@ -370,7 +412,8 @@ SmmInternalFreePool (
> ASSERT ((FreePoolHdr->Header.Size & EFI_PAGE_MASK) == 0);
> return SmmInternalFreePages (
> (EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr,
> - EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size)
> + EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
> + FALSE
> );
> }
> return InternalFreePoolByIndex (FreePoolHdr, PoolTail);
> --
> 2.14.1.windows.1
^ permalink raw reply [flat|nested] 10+ messages in thread
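[Editor's note] The SmmSetMemoryAttributes/SmmClearMemoryAttributes pair added by this patch is what the
HeapGuard.c code in the same series uses to arm and disarm Guard pages (see SetGuardPage/UnsetGuardPage in
the PATCH 2/5 quote further down). A minimal usage sketch, assuming GuardPageAddress already holds a
page-aligned address:

  EFI_PHYSICAL_ADDRESS  GuardPageAddress;  // page-aligned address of the Guard page (assumed set elsewhere)
  EFI_STATUS            Status;

  //
  // Arm the Guard: mark the page not-present so any access to it faults.
  //
  Status = SmmSetMemoryAttributes (GuardPageAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
  ASSERT_EFI_ERROR (Status);

  //
  // Disarm the Guard: make the page present (normal memory) again.
  //
  Status = SmmClearMemoryAttributes (GuardPageAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
  ASSERT_EFI_ERROR (Status);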
* Re: [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection
2017-10-13 1:24 ` Dong, Eric
@ 2017-10-13 6:14 ` Wang, Jian J
0 siblings, 0 replies; 10+ messages in thread
From: Wang, Jian J @ 2017-10-13 6:14 UTC (permalink / raw)
To: Dong, Eric, edk2-devel@lists.01.org
Cc: Yao, Jiewen, Kinney, Michael D, Wolman, Ayellet
You're right. "BIT3 | BIT2" should be enclosed by parentheses. Thanks for catching this issue.
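[Editor's note] A minimal sketch of the parenthesized check being agreed on here. Under C precedence the
unparenthesized form parses as ((PcdGet8 (...) & BIT3) | BIT2) != 0, which is always true because BIT2 is
OR'd in unconditionally; the intended test needs the mask grouped:

  if (!mCpuSmmStaticPageTable ||
      (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT3 | BIT2)) != 0) {
    //
    // Skip making the SMM page table read-only when either SMM page guard
    // (BIT2) or SMM pool guard (BIT3) is enabled, since heap guard must keep
    // updating page attributes at run time.
    //
    return ;
  }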
> -----Original Message-----
> From: Dong, Eric
> Sent: Friday, October 13, 2017 9:24 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Yao, Jiewen <jiewen.yao@intel.com>; Kinney, Michael D
> <michael.d.kinney@intel.com>; Wolman, Ayellet <ayellet.wolman@intel.com>
> Subject: RE: [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table
> protection
>
> Hi Jian,
>
> > + if (!mCpuSmmStaticPageTable || (PcdGet8 (PcdHeapGuardPropertyMask)
> > &
> > + BIT3 | BIT2) != 0) {
>
> I think the above code logic is not correct: the "&" will be evaluated before the "|",
> which is not the expected order, right?
>
> Thanks,
> Eric
>
> > -----Original Message-----
> > From: Wang, Jian J
> > Sent: Wednesday, October 11, 2017 11:18 AM
> > To: edk2-devel@lists.01.org
> > Cc: Dong, Eric <eric.dong@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>;
> > Kinney, Michael D <michael.d.kinney@intel.com>; Wolman, Ayellet
> > <ayellet.wolman@intel.com>
> > Subject: [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table
> > protection
> >
> > The heap guard feature updates page attributes frequently. The page table
> > should not be set to read-only if the heap guard feature is enabled for SMM
> > mode; otherwise this feature cannot work.
> >
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Michael Kinney <michael.d.kinney@intel.com>
> > Cc: Ayellet Wolman <ayellet.wolman@intel.com>
> > Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> > UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf | 1 +
> > UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c | 2 +-
> > 2 files changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> > index 099792e6ce..644709650c 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> > @@ -159,6 +159,7 @@
> > gUefiCpuPkgTokenSpaceGuid.PcdCpuSmmStaticPageTable ##
> > CONSUMES
> > gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable ##
> > CONSUMES
> >
> > gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrM
> > ask ## CONSUMES
> > + gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> > ## CONSUMES
> >
> > [Depex]
> > gEfiMpServiceProtocolGuid
> > diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > index 3dde80f9ba..4debce3a0f 100644
> > --- a/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/X64/PageTbl.c
> > @@ -902,7 +902,7 @@ SetPageTableAttributes (
> > BOOLEAN IsSplitted;
> > BOOLEAN PageTableSplitted;
> >
> > - if (!mCpuSmmStaticPageTable) {
> > + if (!mCpuSmmStaticPageTable || (PcdGet8 (PcdHeapGuardPropertyMask)
> > &
> > + BIT3 | BIT2) != 0) {
> > return ;
> > }
> >
> > --
> > 2.14.1.windows.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode
2017-10-13 1:27 ` Dong, Eric
@ 2017-10-13 6:15 ` Wang, Jian J
0 siblings, 0 replies; 10+ messages in thread
From: Wang, Jian J @ 2017-10-13 6:15 UTC (permalink / raw)
To: Dong, Eric, edk2-devel@lists.01.org
Cc: Zeng, Star, Yao, Jiewen, Kinney, Michael D, Wolman, Ayellet
OK. I'll change it to follow the required coding style. Thanks for catching it.
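[Editor's note] A minimal sketch of one way to meet that rule while keeping the brace initializers, by
hoisting the tables to module scope (names follow the patch; the final committed form may differ):

  //
  // Module-scope definitions may keep their initializers, so the lookup
  // tables no longer combine definition and assignment inside
  // FindGuardedMemoryMap().
  //
  GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                        = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
  GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
                                        = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;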
> -----Original Message-----
> From: Dong, Eric
> Sent: Friday, October 13, 2017 9:27 AM
> To: Wang, Jian J <jian.j.wang@intel.com>; edk2-devel@lists.01.org
> Cc: Zeng, Star <star.zeng@intel.com>; Yao, Jiewen <jiewen.yao@intel.com>;
> Kinney, Michael D <michael.d.kinney@intel.com>; Wolman, Ayellet
> <ayellet.wolman@intel.com>
> Subject: RE: [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard
> feature for SMM mode
>
> Hi Jian,
>
> I think the code below does not follow the EDKII coding style; EDKII requires that
> definition and assignment be done in separate statements.
>
> + UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> + UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
>
> Thanks,
> Eric
> > -----Original Message-----
> > From: Wang, Jian J
> > Sent: Wednesday, October 11, 2017 11:18 AM
> > To: edk2-devel@lists.01.org
> > Cc: Zeng, Star <star.zeng@intel.com>; Dong, Eric <eric.dong@intel.com>; Yao,
> > Jiewen <jiewen.yao@intel.com>; Kinney, Michael D
> > <michael.d.kinney@intel.com>; Wolman, Ayellet
> > <ayellet.wolman@intel.com>
> > Subject: [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard
> > feature for SMM mode
> >
> > This feature makes use of paging mechanism to add a hidden (not present)
> > page just before and after the allocated memory block. If the code tries
> > to access memory outside of the allocated part, page fault exception will
> > be triggered.
> >
> > This feature is controlled by three PCDs:
> >
> > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask
> > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType
> > gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType
> >
> > BIT2 and BIT3 of PcdHeapGuardPropertyMask can be used to enable or disable
> > the memory guard for SMM pages and pools respectively. PcdHeapGuardPoolType
> > and/or PcdHeapGuardPageType are used to enable or disable the guard for
> > specific types of memory. For example, we can turn on the guard only for
> > EfiBootServicesData and EfiRuntimeServicesData by setting the PCD to 0x50.
> >
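[Editor's note] A sketch of how the 0x50 value above maps to memory types. EFI_MEMORY_TYPE assigns
EfiBootServicesData the value 4 and EfiRuntimeServicesData the value 6; the variable names below are
illustrative and MemoryType is assumed to hold the type being allocated (the patch's own check lives in
IsMemoryTypeToGuard):

  UINT64   ConfigBit;
  BOOLEAN  Guarded;

  //
  // Each bit position in PcdHeapGuardPageType/PcdHeapGuardPoolType selects
  // one EFI_MEMORY_TYPE: BIT4 | BIT6 == 0x50 selects EfiBootServicesData
  // and EfiRuntimeServicesData.
  //
  ConfigBit = PcdGet64 (PcdHeapGuardPageType);
  Guarded   = (ConfigBit & LShiftU64 (1, (UINTN)MemoryType)) != 0;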
> > Pool memory is not usually an integer multiple of one page, and is more
> > likely less than a page. There is no way to monitor overflow at both the
> > top and the bottom of pool memory. BIT7 of PcdHeapGuardPropertyMask is used
> > to control how to position the head of pool memory, so that it is easier to
> > catch memory overflow in either the growing direction or the decreasing
> > direction.
> >
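[Editor's note] A hypothetical sketch of that head-positioning idea. The helper name below is illustrative
(the patch's real helpers are AdjustPoolHeadA/AdjustPoolHeadF), and the direction convention is inferred
from the HasPoolTail logic in the Pool.c hunk of this patch:

  //
  // With BIT7 clear, the returned pool buffer is pushed up against the tail
  // Guard page, so an overflow in the growing (upward) direction faults
  // immediately; with BIT7 set, the buffer stays at the head of the pages,
  // which favors catching accesses in the decreasing direction.
  //
  STATIC
  EFI_PHYSICAL_ADDRESS
  PlacePoolHead (
    IN EFI_PHYSICAL_ADDRESS  PageBase,   // first data page after the head Guard
    IN UINTN                 Pages,      // data pages backing this pool chunk
    IN UINTN                 UsedSize    // pool header plus requested size
    )
  {
    return PageBase + EFI_PAGES_TO_SIZE (Pages) - UsedSize;
  }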
> > Cc: Star Zeng <star.zeng@intel.com>
> > Cc: Eric Dong <eric.dong@intel.com>
> > Cc: Jiewen Yao <jiewen.yao@intel.com>
> > Cc: Michael Kinney <michael.d.kinney@intel.com>
> > Cc: Ayellet Wolman <ayellet.wolman@intel.com>
> > Suggested-by: Ayellet Wolman <ayellet.wolman@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.wang@intel.com>
> > ---
> > MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c | 1438
> > ++++++++++++++++++++++++++
> > MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h | 395 +++++++
> > MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c | 704
> > +++++++++++++
> > MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h | 174 ++++
> > MdeModulePkg/Core/PiSmmCore/Page.c | 51 +-
> > MdeModulePkg/Core/PiSmmCore/PiSmmCore.c | 12 +-
> > MdeModulePkg/Core/PiSmmCore/PiSmmCore.h | 80 +-
> > MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf | 8 +
> > MdeModulePkg/Core/PiSmmCore/Pool.c | 77 +-
> > 9 files changed, 2911 insertions(+), 28 deletions(-)
> > create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> > create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> > create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> > create mode 100644 MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> >
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> > b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> > new file mode 100644
> > index 0000000000..c64eaea5d1
> > --- /dev/null
> > +++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.c
> > @@ -0,0 +1,1438 @@
> > +/** @file
> > + UEFI Heap Guard functions.
> > +
> > +Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +This program and the accompanying materials
> > +are licensed and made available under the terms and conditions of the BSD
> > License
> > +which accompanies this distribution. The full text of the license may be
> > found at
> > +http://opensource.org/licenses/bsd-license.php
> > +
> > +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "HeapGuard.h"
> > +
> > +//
> > +// Pointer to table tracking the Guarded memory with bitmap, in which '1'
> > +// is used to indicate memory guarded. '0' might be free memory or Guard
> > +// page itself, depending on status of memory adjacent to it.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED UINT64 *mGuardedMemoryMap =
> > NULL;
> > +
> > +//
> > +// Current depth level of map table pointed by mGuardedMemoryMap.
> > +// mMapLevel must be initialized at least by 1. It will be automatically
> > +// updated according to the address of memory just tracked.
> > +//
> > +GLOBAL_REMOVE_IF_UNREFERENCED UINTN mMapLevel = 1;
> > +
> > +//
> > +// SMM status flag
> > +//
> > +BOOLEAN mIsSmmCpuMode = FALSE;
> > +
> > +/**
> > + Set corresponding bits in bitmap table to 1 according to the address
> > +
> > + @param[in] Address Start address to set for
> > + @param[in] BitNumber Number of bits to set
> > + @param[in] BitMap Pointer to bitmap which covers the Address
> > +
> > + @return VOID
> > +**/
> > +STATIC
> > +VOID
> > +SetBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN BitNumber,
> > + IN UINT64 *BitMap
> > + )
> > +{
> > + UINTN Lsbs;
> > + UINTN Qwords;
> > + UINTN Msbs;
> > + UINTN StartBit;
> > + UINTN EndBit;
> > +
> > + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> > + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > +
> > + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > + Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> > + GUARDED_HEAP_MAP_ENTRY_BITS;
> > + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > + Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
> > + } else {
> > + Msbs = BitNumber;
> > + Lsbs = 0;
> > + Qwords = 0;
> > + }
> > +
> > + if (Msbs > 0) {
> > + *BitMap |= LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
> > + BitMap += 1;
> > + }
> > +
> > + if (Qwords > 0) {
> > + SetMem64 ((VOID *)BitMap, Qwords *
> > GUARDED_HEAP_MAP_ENTRY_BYTES,
> > + (UINT64)-1);
> > + BitMap += Qwords;
> > + }
> > +
> > + if (Lsbs > 0) {
> > + *BitMap |= (LShiftU64 (1, Lsbs) - 1);
> > + }
> > +}
> > +
> > +/**
> > + Set corresponding bits in bitmap table to 0 according to the address
> > +
> > + @param[in] Address Start address to set for
> > + @param[in] BitNumber Number of bits to set
> > + @param[in] BitMap Pointer to bitmap which covers the Address
> > +
> > + @return VOID
> > +**/
> > +STATIC
> > +VOID
> > +ClearBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN BitNumber,
> > + IN UINT64 *BitMap
> > + )
> > +{
> > + UINTN Lsbs;
> > + UINTN Qwords;
> > + UINTN Msbs;
> > + UINTN StartBit;
> > + UINTN EndBit;
> > +
> > + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> > + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > +
> > + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > + Msbs = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> > + GUARDED_HEAP_MAP_ENTRY_BITS;
> > + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > + Qwords = (BitNumber - Msbs) / GUARDED_HEAP_MAP_ENTRY_BITS;
> > + } else {
> > + Msbs = BitNumber;
> > + Lsbs = 0;
> > + Qwords = 0;
> > + }
> > +
> > + if (Msbs > 0) {
> > + *BitMap &= ~LShiftU64 (LShiftU64 (1, Msbs) - 1, StartBit);
> > + BitMap += 1;
> > + }
> > +
> > + if (Qwords > 0) {
> > + SetMem64 ((VOID *)BitMap, Qwords *
> > GUARDED_HEAP_MAP_ENTRY_BYTES, 0);
> > + BitMap += Qwords;
> > + }
> > +
> > + if (Lsbs > 0) {
> > + *BitMap &= ~(LShiftU64 (1, Lsbs) - 1);
> > + }
> > +}
> > +
> > +/**
> > + Get corresponding bits in bitmap table according to the address
> > +
> > + The value of bit 0 corresponds to the status of memory at given Address.
> > + No more than 64 bits can be retrieved in one call.
> > +
> > + @param[in] Address Start address to retrieve bits for
> > + @param[in] BitNumber Number of bits to get
> > + @param[in] BitMap Pointer to bitmap which covers the Address
> > +
> > + @return An integer containing the bits information
> > +**/
> > +STATIC
> > +UINT64
> > +GetBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN BitNumber,
> > + IN UINT64 *BitMap
> > + )
> > +{
> > + UINTN StartBit;
> > + UINTN EndBit;
> > + UINTN Lsbs;
> > + UINTN Msbs;
> > + UINT64 Result;
> > +
> > + ASSERT (BitNumber <= GUARDED_HEAP_MAP_ENTRY_BITS);
> > +
> > + StartBit = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> > + EndBit = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > +
> > + if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > + Msbs = GUARDED_HEAP_MAP_ENTRY_BITS - StartBit;
> > + Lsbs = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > + } else {
> > + Msbs = BitNumber;
> > + Lsbs = 0;
> > + }
> > +
> > + Result = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> > + if (Lsbs > 0) {
> > + BitMap += 1;
> > + Result |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > + }
> > +
> > + return Result;
> > +}
> > +
> > +/**
> > + Helper function to allocate pages without Guard for internal uses
> > +
> > + @param[in] Pages Page number
> > +
> > + @return Address of memory allocated
> > +**/
> > +VOID *
> > +PageAlloc (
> > + IN UINTN Pages
> > + )
> > +{
> > + EFI_STATUS Status;
> > + EFI_PHYSICAL_ADDRESS Memory;
> > +
> > + Status = SmmInternalAllocatePages (AllocateAnyPages,
> > EfiRuntimeServicesData,
> > + Pages, &Memory, FALSE);
> > + if (EFI_ERROR (Status)) {
> > + Memory = 0;
> > + }
> > +
> > + return (VOID *)(UINTN)Memory;
> > +}
> > +
> > +/**
> > + Locate the pointer of bitmap from the guarded memory bitmap tables,
> > which
> > + covers the given Address.
> > +
> > + @param[in] Address Start address to search the bitmap for
> > + @param[in] AllocMapUnit Flag to indicate memory allocation for the table
> > + @param[out] BitMap Pointer to bitmap which covers the Address
> > +
> > + @return The bit number from given Address to the end of current map
> > table
> > +**/
> > +UINTN
> > +FindGuardedMemoryMap (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN BOOLEAN AllocMapUnit,
> > + OUT UINT64 **BitMap
> > + )
> > +{
> > + UINTN Level;
> > + UINTN LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> > + UINTN LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> > + UINT64 **GuardMap;
> > + UINT64 *MapMemory;
> > + UINTN Index;
> > + UINTN Size;
> > + UINTN BitsToUnitEnd;
> > +
> > + //
> > + // Adjust current map table depth according to the address to access
> > + //
> > + while (mMapLevel < GUARDED_HEAP_MAP_TABLE_DEPTH
> > + &&
> > + RShiftU64 (
> > + Address,
> > + LevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel - 1]
> > + ) != 0) {
> > +
> > + if (mGuardedMemoryMap != NULL) {
> > + Size = (LevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel -
> > 1] + 1)
> > + * GUARDED_HEAP_MAP_ENTRY_BYTES;
> > + MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
> > + ASSERT (MapMemory != NULL);
> > +
> > + SetMem ((VOID *)MapMemory, Size, 0);
> > +
> > + *(UINT64 **)MapMemory = mGuardedMemoryMap;
> > + mGuardedMemoryMap = MapMemory;
> > + }
> > +
> > + mMapLevel++;
> > +
> > + }
> > +
> > + GuardMap = &mGuardedMemoryMap;
> > + for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > + Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> > + ++Level) {
> > +
> > + if (*GuardMap == NULL) {
> > + if (!AllocMapUnit) {
> > + GuardMap = NULL;
> > + break;
> > + }
> > +
> > + Size = (LevelMask[Level] + 1) * GUARDED_HEAP_MAP_ENTRY_BYTES;
> > + MapMemory = PageAlloc (EFI_SIZE_TO_PAGES (Size));
> > + ASSERT (MapMemory != NULL);
> > +
> > + SetMem ((VOID *)MapMemory, Size, 0);
> > + *GuardMap = (UINT64 *)MapMemory;
> > + }
> > +
> > + Index = (UINTN)RShiftU64 (Address, LevelShift[Level]);
> > + Index &= LevelMask[Level];
> > + GuardMap = (UINT64 **)((*GuardMap) + Index);
> > +
> > + }
> > +
> > + BitsToUnitEnd = GUARDED_HEAP_MAP_BITS -
> > GUARDED_HEAP_MAP_BIT_INDEX (Address);
> > + *BitMap = (UINT64 *)GuardMap;
> > +
> > + return BitsToUnitEnd;
> > +}
> > +
> > +/**
> > + Set corresponding bits in bitmap table to 1 according to given memory
> > range
> > +
> > + @param[in] Address Memory address to guard from
> > + @param[in] NumberOfPages Number of pages to guard
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +SetGuardedMemoryBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + UINT64 *BitMap;
> > + UINTN Bits;
> > + UINTN BitsToUnitEnd;
> > +
> > + while (NumberOfPages > 0) {
> > + BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
> > + ASSERT (BitMap != NULL);
> > +
> > + if (NumberOfPages > BitsToUnitEnd) {
> > + // Cross map unit
> > + Bits = BitsToUnitEnd;
> > + } else {
> > + Bits = NumberOfPages;
> > + }
> > +
> > + SetBits (Address, Bits, BitMap);
> > +
> > + NumberOfPages -= Bits;
> > + Address += EFI_PAGES_TO_SIZE (Bits);
> > + }
> > +}
> > +
> > +/**
> > + Clear corresponding bits in bitmap table according to given memory range
> > +
> > + @param[in] Address Memory address to unset from
> > + @param[in] NumberOfPages Number of pages to unset guard
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +ClearGuardedMemoryBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + UINT64 *BitMap;
> > + UINTN Bits;
> > + UINTN BitsToUnitEnd;
> > +
> > + while (NumberOfPages > 0) {
> > + BitsToUnitEnd = FindGuardedMemoryMap (Address, TRUE, &BitMap);
> > + ASSERT (BitMap != NULL);
> > +
> > + if (NumberOfPages > BitsToUnitEnd) {
> > + // Cross map unit
> > + Bits = BitsToUnitEnd;
> > + } else {
> > + Bits = NumberOfPages;
> > + }
> > +
> > + ClearBits (Address, Bits, BitMap);
> > +
> > + NumberOfPages -= Bits;
> > + Address += EFI_PAGES_TO_SIZE (Bits);
> > + }
> > +}
> > +
> > +/**
> > + Retrieve corresponding bits in bitmap table according to given memory
> > range
> > +
> > + @param[in] Address Memory address to retrieve from
> > + @param[in] NumberOfPages Number of pages to retrieve
> > +
> > + @return VOID
> > +**/
> > +UINTN
> > +GetGuardedMemoryBits (
> > + IN EFI_PHYSICAL_ADDRESS Address,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + UINT64 *BitMap;
> > + UINTN Bits;
> > + UINTN Result;
> > + UINTN Shift;
> > + UINTN BitsToUnitEnd;
> > +
> > + ASSERT (NumberOfPages <= GUARDED_HEAP_MAP_ENTRY_BITS);
> > +
> > + Result = 0;
> > + Shift = 0;
> > + while (NumberOfPages > 0) {
> > + BitsToUnitEnd = FindGuardedMemoryMap (Address, FALSE, &BitMap);
> > +
> > + if (NumberOfPages > BitsToUnitEnd) {
> > + // Cross map unit
> > + Bits = BitsToUnitEnd;
> > + } else {
> > + Bits = NumberOfPages;
> > + }
> > +
> > + if (BitMap != NULL) {
> > + Result |= LShiftU64 (GetBits (Address, Bits, BitMap), Shift);
> > + }
> > +
> > + Shift += Bits;
> > + NumberOfPages -= Bits;
> > + Address += EFI_PAGES_TO_SIZE (Bits);
> > + }
> > +
> > + return Result;
> > +}
> > +
> > +/**
> > + Get bit value in bitmap table for the given address
> > +
> > + @param[in] Address The address to retrieve for
> > +
> > + @return 1 or 0
> > +**/
> > +UINTN
> > +EFIAPI
> > +GetGuardMapBit (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > + UINT64 *GuardMap;
> > +
> > + FindGuardedMemoryMap (Address, FALSE, &GuardMap);
> > + if (GuardMap != NULL) {
> > + if (RShiftU64 (*GuardMap,
> > + GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address)) & 1) {
> > + return 1;
> > + }
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + Set the bit in bitmap table for the given address
> > +
> > + @param[in] Address The address to set for
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +SetGuardMapBit (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > + UINT64 *GuardMap;
> > + UINT64 BitMask;
> > +
> > + FindGuardedMemoryMap (Address, TRUE, &GuardMap);
> > + if (GuardMap != NULL) {
> > + BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX
> > (Address));
> > + *GuardMap |= BitMask;
> > + }
> > +}
> > +
> > +/**
> > + Clear the bit in bitmap table for the given address
> > +
> > + @param[in] Address The address to clear for
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +ClearGuardMapBit (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > + UINT64 *GuardMap;
> > + UINTN BitMask;
> > +
> > + FindGuardedMemoryMap (Address, TRUE, &GuardMap);
> > + if (GuardMap != NULL) {
> > + BitMask = LShiftU64 (1, GUARDED_HEAP_MAP_ENTRY_BIT_INDEX
> > (Address));
> > + *GuardMap &= ~BitMask;
> > + }
> > +}
> > +
> > +/**
> > + Check to see if the page at the given address is a Guard page or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is a Guard page
> > + @return FALSE The page at Address is not a Guard page
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsGuardPage (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > + UINTN BitMap;
> > +
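> > +  //
> > +  // Read the bits for the page before Address, the page at Address and the
> > +  // page after Address (the lowest bit corresponds to the lowest address).
> > +  // A Guard page is never marked as guarded itself, so its own bit must be
> > +  // 0 while at least one of its neighbors' bits is 1.
> > +  //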
> > + BitMap = GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 3);
> > +  return ((BitMap == BIT0) || (BitMap == BIT2) || (BitMap == (BIT2 | BIT0)));
> > +}
> > +
> > +/**
> > + Check to see if the page at the given address is a head Guard page or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is a head Guard page
> > + @return FALSE The page at Address is not a head Guard page
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsHeadGuard (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > +  return (GetGuardedMemoryBits (Address, 2) == BIT1);
> > +}
> > +
> > +/**
> > + Check to see if the page at the given address is a tail Guard page or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is a tail Guard page
> > + @return FALSE The page at Address is not a tail Guard page
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsTailGuard (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > +  return (GetGuardedMemoryBits (Address - EFI_PAGE_SIZE, 2) == BIT0);
> > +}
> > +
> > +/**
> > + Check to see if the page at the given address is guarded or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is guarded
> > + @return FALSE The page at Address is not guarded
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsMemoryGuarded (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + )
> > +{
> > + return (GetGuardMapBit (Address) == 1);
> > +}
> > +
> > +/**
> > + Set the page at the given address to be a Guard page.
> > +
> > +  This is done by changing the page table attribute to be NOT PRESENT.
> > +
> > +  @param[in] BaseAddress Page address to Guard at
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +SetGuardPage (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress
> > + )
> > +{
> > + if (mIsSmmCpuMode) {
> > +    SmmSetMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
> > + }
> > +}
> > +
> > +/**
> > +  Unset the Guard page at the given address back to normal memory.
> > +
> > +  This is done by changing the page table attribute to be PRESENT.
> > +
> > +  @param[in] BaseAddress Page address to unset Guard at
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +UnsetGuardPage (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress
> > + )
> > +{
> > + if (mIsSmmCpuMode) {
> > +    SmmClearMemoryAttributes (BaseAddress, EFI_PAGE_SIZE, EFI_MEMORY_RP);
> > + }
> > +}
> > +
> > +/**
> > + Check to see if the memory at the given address should be guarded or not
> > +
> > + @param[in] MemoryType Memory type to check
> > + @param[in] AllocateType Allocation type to check
> > + @param[in] PageOrPool Indicate a page allocation or pool allocation
> > +
> > +
> > + @return TRUE The given type of memory should be guarded
> > + @return FALSE The given type of memory should not be guarded
> > +**/
> > +BOOLEAN
> > +IsMemoryTypeToGuard (
> > + IN EFI_MEMORY_TYPE MemoryType,
> > + IN EFI_ALLOCATE_TYPE AllocateType,
> > + IN UINT8 PageOrPool
> > + )
> > +{
> > + UINT64 TestBit;
> > + UINT64 ConfigBit;
> > +
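> > +  //
> > +  // PageOrPool carries GUARD_HEAP_TYPE_PAGE/GUARD_HEAP_TYPE_POOL bits, which
> > +  // are defined to match the corresponding enable bits in
> > +  // PcdHeapGuardPropertyMask. Allocations at a fixed address
> > +  // (AllocateAddress) are never guarded, since such an allocation cannot be
> > +  // moved or extended to make room for Guard pages.
> > +  //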
> > + if ((PcdGet8 (PcdHeapGuardPropertyMask) & PageOrPool) == 0 ||
> > + AllocateType == AllocateAddress) {
> > + return FALSE;
> > + }
> > +
> > + ConfigBit = 0;
> > + if (PageOrPool & GUARD_HEAP_TYPE_POOL) {
> > + ConfigBit |= PcdGet64 (PcdHeapGuardPoolType);
> > + }
> > +
> > + if (PageOrPool & GUARD_HEAP_TYPE_PAGE) {
> > + ConfigBit |= PcdGet64 (PcdHeapGuardPageType);
> > + }
> > +
> > + if (MemoryType == EfiRuntimeServicesData ||
> > + MemoryType == EfiRuntimeServicesCode) {
> > + TestBit = LShiftU64 (1, MemoryType);
> > + } else if (MemoryType == EfiMaxMemoryType) {
> > + TestBit = (UINT64)-1;
> > + } else {
> > + TestBit = 0;
> > + }
> > +
> > + return ((ConfigBit & TestBit) != 0);
> > +}
> > +
> > +/**
> > + Check to see if the pool at the given address should be guarded or not
> > +
> > + @param[in] MemoryType Pool type to check
> > +
> > +
> > + @return TRUE The given type of pool should be guarded
> > + @return FALSE The given type of pool should not be guarded
> > +**/
> > +BOOLEAN
> > +IsPoolTypeToGuard (
> > + IN EFI_MEMORY_TYPE MemoryType
> > + )
> > +{
> > + return IsMemoryTypeToGuard (MemoryType, AllocateAnyPages,
> > + GUARD_HEAP_TYPE_POOL);
> > +}
> > +
> > +/**
> > + Check to see if the page at the given address should be guarded or not
> > +
> > + @param[in] MemoryType Page type to check
> > + @param[in] AllocateType Allocation type to check
> > +
> > + @return TRUE The given type of page should be guarded
> > + @return FALSE The given type of page should not be guarded
> > +**/
> > +BOOLEAN
> > +IsPageTypeToGuard (
> > + IN EFI_MEMORY_TYPE MemoryType,
> > + IN EFI_ALLOCATE_TYPE AllocateType
> > + )
> > +{
> > + return IsMemoryTypeToGuard (MemoryType, AllocateType,
> > GUARD_HEAP_TYPE_PAGE);
> > +}
> > +
> > +/**
> > + Check to see if the heap guard is enabled for page and/or pool allocation
> > +
> > + @return TRUE/FALSE
> > +**/
> > +BOOLEAN
> > +IsHeapGuardEnabled (
> > + VOID
> > + )
> > +{
> > + return IsMemoryTypeToGuard (EfiMaxMemoryType, AllocateAnyPages,
> > + GUARD_HEAP_TYPE_POOL|GUARD_HEAP_TYPE_PAGE);
> > +}
> > +
> > +/**
> > + Set head Guard and tail Guard for the given memory range
> > +
> > + @param[in] Memory Base address of memory to set guard for
> > + @param[in] NumberOfPages Memory size in pages
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +SetGuardForMemory (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS GuardPage;
> > +
> > + //
> > + // Set tail Guard
> > + //
> > + GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
> > + if (!IsGuardPage (GuardPage)) {
> > + SetGuardPage (GuardPage);
> > + }
> > +
> > + // Set head Guard
> > + GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
> > + if (!IsGuardPage (GuardPage)) {
> > + SetGuardPage (GuardPage);
> > + }
> > +
> > + //
> > + // Mark the memory range as Guarded
> > + //
> > + SetGuardedMemoryBits (Memory, NumberOfPages);
> > +}
> > +
> > +/**
> > + Unset head Guard and tail Guard for the given memory range
> > +
> > + @param[in] Memory Base address of memory to unset guard for
> > + @param[in] NumberOfPages Memory size in pages
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +UnsetGuardForMemory (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS GuardPage;
> > +
> > + if (NumberOfPages == 0) {
> > + return;
> > + }
> > +
> > + //
> > + // Head Guard must be one page before, if any.
> > + //
> > + GuardPage = Memory - EFI_PAGES_TO_SIZE (1);
> > + if (IsHeadGuard (GuardPage)) {
> > + if (!IsMemoryGuarded (GuardPage - EFI_PAGES_TO_SIZE (1))) {
> > + //
> > + // If the head Guard is not a tail Guard of adjacent memory block,
> > + // unset it.
> > + //
> > + UnsetGuardPage (GuardPage);
> > + }
> > + } else if (IsMemoryGuarded (GuardPage)) {
> > + //
> > + // Pages before memory to free are still in Guard. It's a partial free
> > + // case. Turn first page of memory block to free into a new Guard.
> > + //
> > + SetGuardPage (Memory);
> > + }
> > +
> > + //
> > + // Tail Guard must be the page after this memory block to free, if any.
> > + //
> > + GuardPage = Memory + EFI_PAGES_TO_SIZE (NumberOfPages);
> > + if (IsTailGuard (GuardPage)) {
> > + if (!IsMemoryGuarded (GuardPage + EFI_PAGES_TO_SIZE (1))) {
> > + //
> > + // If the tail Guard is not a head Guard of adjacent memory block,
> > + // free it; otherwise, keep it.
> > + //
> > + UnsetGuardPage (GuardPage);
> > + }
> > + } else if (IsMemoryGuarded (GuardPage)) {
> > + //
> > + // Pages after memory to free are still in Guard. It's a partial free
> > + // case. We need to keep one page to be a head Guard.
> > + //
> > + SetGuardPage (GuardPage - EFI_PAGES_TO_SIZE (1));
> > + }
> > +
> > + //
> > + // No matter what, we just clear the mark of the Guarded memory.
> > + //
> > + ClearGuardedMemoryBits(Memory, NumberOfPages);
> > +}
> > +
> > +/**
> > +  Adjust address of free memory according to existing and/or required Guard
> > +
> > +  This function will check if there are existing Guard pages of adjacent
> > +  memory blocks, and try to use them as the Guard pages of the memory to be
> > +  allocated.
> > +
> > + @param[in] Start Start address of free memory block
> > + @param[in] Size Size of free memory block
> > + @param[in] SizeRequested Size of memory to allocate
> > +
> > + @return The end address of memory block found
> > +  @return 0 if there is not enough space for the required size of memory and its Guard
> > +**/
> > +UINT64
> > +AdjustMemoryS (
> > + IN UINT64 Start,
> > + IN UINT64 Size,
> > + IN UINT64 SizeRequested
> > + )
> > +{
> > + UINT64 Target;
> > +
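> > +  //
> > +  // Memory is allocated from the top of the free block, so start with the
> > +  // highest possible base address for the requested size.
> > +  //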
> > + Target = Start + Size - SizeRequested;
> > +
> > + //
> > + // At least one more page needed for Guard page.
> > + //
> > + if (Size < (SizeRequested + EFI_PAGES_TO_SIZE (1))) {
> > + return 0;
> > + }
> > +
> > + if (!IsGuardPage (Start + Size)) {
> > + // No Guard at tail to share. One more page is needed.
> > + Target -= EFI_PAGES_TO_SIZE (1);
> > + }
> > +
> > + // Out of range?
> > + if (Target < Start) {
> > + return 0;
> > + }
> > +
> > + // At the edge?
> > + if (Target == Start) {
> > + if (!IsGuardPage (Target - EFI_PAGES_TO_SIZE (1))) {
> > +      // Not enough space for a new head Guard if there is no Guard at head to share.
> > + return 0;
> > + }
> > + }
> > +
> > +  // OK, we have enough pages for memory and its Guards. Return the end of the
> > +  // free space.
> > + return Target + SizeRequested - 1;
> > +}
> > +
> > +/**
> > + Adjust the start address and number of pages to free according to Guard
> > +
> > +  The purpose of this function is to keep the shared Guard page with the
> > +  adjacent memory block if it's still in guard, or free it if it is no longer
> > +  shared. Another purpose is to reserve pages as Guard pages in a partial
> > +  page free situation.
> > +
> > + @param[in/out] Memory Base address of memory to free
> > + @param[in/out] NumberOfPages Size of memory to free
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +AdjustMemoryF (
> > + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN OUT UINTN *NumberOfPages
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS Start;
> > + EFI_PHYSICAL_ADDRESS MemoryToTest;
> > + UINTN PagesToFree;
> > +
> > +  if (Memory == NULL || NumberOfPages == NULL || *NumberOfPages == 0) {
> > + return;
> > + }
> > +
> > + Start = *Memory;
> > + PagesToFree = *NumberOfPages;
> > +
> > + //
> > + // Head Guard must be one page before, if any.
> > + //
> > + MemoryToTest = Start - EFI_PAGES_TO_SIZE (1);
> > + if (IsHeadGuard (MemoryToTest)) {
> > + if (!IsMemoryGuarded (MemoryToTest - EFI_PAGES_TO_SIZE (1))) {
> > + //
> > + // If the head Guard is not a tail Guard of adjacent memory block,
> > + // free it; otherwise, keep it.
> > + //
> > + Start -= EFI_PAGES_TO_SIZE (1);
> > + PagesToFree += 1;
> > + }
> > + } else if (IsMemoryGuarded (MemoryToTest)) {
> > + //
> > + // Pages before memory to free are still in Guard. It's a partial free
> > + // case. We need to keep one page to be a tail Guard.
> > + //
> > + Start += EFI_PAGES_TO_SIZE (1);
> > + PagesToFree -= 1;
> > + }
> > +
> > + //
> > + // Tail Guard must be the page after this memory block to free, if any.
> > + //
> > + MemoryToTest = Start + EFI_PAGES_TO_SIZE (PagesToFree);
> > + if (IsTailGuard (MemoryToTest)) {
> > + if (!IsMemoryGuarded (MemoryToTest + EFI_PAGES_TO_SIZE (1))) {
> > + //
> > + // If the tail Guard is not a head Guard of adjacent memory block,
> > + // free it; otherwise, keep it.
> > + //
> > + PagesToFree += 1;
> > + }
> > + } else if (IsMemoryGuarded (MemoryToTest)) {
> > + //
> > + // Pages after memory to free are still in Guard. It's a partial free
> > + // case. We need to keep one page to be a head Guard.
> > + //
> > + PagesToFree -= 1;
> > + }
> > +
> > + *Memory = Start;
> > + *NumberOfPages = PagesToFree;
> > +}
> > +
> > +/**
> > + Adjust the base and number of pages to really allocate according to Guard
> > +
> > + @param[in/out] Memory Base address of free memory
> > + @param[in/out] NumberOfPages Size of memory to allocate
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +AdjustMemoryA (
> > + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN OUT UINTN *NumberOfPages
> > + )
> > +{
> > + //
> > + // FindFreePages() has already taken the Guard into account. It's safe to
> > +  // adjust the start address and/or number of pages here, to make sure that
> > +  // the Guards are also "allocated".
> > + //
> > + if (!IsGuardPage (*Memory + EFI_PAGES_TO_SIZE (*NumberOfPages))) {
> > + // No tail Guard, add one.
> > + *NumberOfPages += 1;
> > + }
> > +
> > + if (!IsGuardPage (*Memory - EFI_PAGE_SIZE)) {
> > + // No head Guard, add one.
> > + *Memory -= EFI_PAGE_SIZE;
> > + *NumberOfPages += 1;
> > + }
> > +}
> > +
> > +/**
> > +  Adjust the pool head position to make sure the Guard page is adjacent to
> > + pool tail or pool head.
> > +
> > + @param[in] Memory Base address of memory allocated
> > + @param[in] NoPages Number of pages actually allocated
> > + @param[in] Size Size of memory requested
> > + (plus pool head/tail overhead)
> > +
> > + @return Address of pool head
> > +**/
> > +VOID *
> > +AdjustPoolHeadA (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NoPages,
> > + IN UINTN Size
> > + )
> > +{
> > + if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
> > + //
> > + // Pool head is put near the head Guard
> > + //
> > + return (VOID *)(UINTN)Memory;
> > + }
> > +
> > + //
> > + // Pool head is put near the tail Guard
> > + //
> > + return (VOID *)(UINTN)(Memory + EFI_PAGES_TO_SIZE (NoPages) - Size);
> > +}
> > +
> > +/**
> > + Get the page base address according to pool head address
> > +
> > + @param[in] Memory Head address of pool to free
> > +
> > + @return Address of pool head
> > +**/
> > +VOID *
> > +AdjustPoolHeadF (
> > + IN EFI_PHYSICAL_ADDRESS Memory
> > + )
> > +{
> > + if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
> > + //
> > + // Pool head is put near the head Guard
> > + //
> > + return (VOID *)(UINTN)Memory;
> > + }
> > +
> > + //
> > + // Pool head is put near the tail Guard
> > + //
> > + return (VOID *)(UINTN)(Memory & ~EFI_PAGE_MASK);
> > +}
> > +
> > +/**
> > + Helper function of memory allocation with Guard pages
> > +
> > + @param FreePageList The free page node.
> > + @param NumberOfPages Number of pages to be allocated.
> > +  @param MaxAddress Request to allocate memory below this address.
> > + @param MemoryType Type of memory requested.
> > +
> > + @return Memory address of allocated pages.
> > +**/
> > +UINTN
> > +InternalAllocMaxAddressWithGuard (
> > + IN OUT LIST_ENTRY *FreePageList,
> > + IN UINTN NumberOfPages,
> > + IN UINTN MaxAddress,
> > +  IN EFI_MEMORY_TYPE MemoryType
> > +  )
> > +{
> > + LIST_ENTRY *Node;
> > + FREE_PAGE_LIST *Pages;
> > + UINTN PagesToAlloc;
> > + UINTN HeadGuard;
> > + UINTN TailGuard;
> > + UINTN Address;
> > +
> > + for (Node = FreePageList->BackLink; Node != FreePageList;
> > + Node = Node->BackLink) {
> > + Pages = BASE_CR (Node, FREE_PAGE_LIST, Link);
> > + if (Pages->NumberOfPages >= NumberOfPages &&
> > +        (UINTN)Pages + EFI_PAGES_TO_SIZE (NumberOfPages) - 1 <= MaxAddress) {
> > +
> > + //
> > + // We may need 1 or 2 more pages for Guard. Check it out.
> > + //
> > + PagesToAlloc = NumberOfPages;
> > +      TailGuard = (UINTN)Pages + EFI_PAGES_TO_SIZE (Pages->NumberOfPages);
> > + if (!IsGuardPage (TailGuard)) {
> > + //
> > + // Add one if no Guard at the end of current free memory block.
> > + //
> > + PagesToAlloc += 1;
> > + TailGuard = 0;
> > + }
> > +
> > + HeadGuard = (UINTN)Pages +
> > + EFI_PAGES_TO_SIZE (Pages->NumberOfPages - PagesToAlloc) -
> > + EFI_PAGE_SIZE;
> > + if (!IsGuardPage (HeadGuard)) {
> > + //
> > + // Add one if no Guard at the page before the address to allocate
> > + //
> > + PagesToAlloc += 1;
> > + HeadGuard = 0;
> > + }
> > +
> > + if (Pages->NumberOfPages < PagesToAlloc) {
> > + // Not enough space to allocate memory with Guards? Try next block.
> > + continue;
> > + }
> > +
> > +      Address = InternalAllocPagesOnOneNode (Pages, PagesToAlloc, MaxAddress);
> > +      ConvertSmmMemoryMapEntry (MemoryType, Address, PagesToAlloc, FALSE);
> > +      CoreFreeMemoryMapStack ();
> > + if (!HeadGuard) {
> > + // Don't pass the Guard page to user.
> > + Address += EFI_PAGE_SIZE;
> > + }
> > + SetGuardForMemory (Address, NumberOfPages);
> > + return Address;
> > + }
> > + }
> > +
> > + return (UINTN)(-1);
> > +}
> > +
> > +/**
> > + Helper function of memory free with Guard pages
> > +
> > + @param[in] Memory Base address of memory being freed.
> > + @param[in] NumberOfPages The number of pages to free.
> > +  @param[in] AddRegion If this memory is a newly added region.
> > +
> > +  @retval EFI_NOT_FOUND Could not find the entry that covers the range.
> > +  @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
> > +  @retval EFI_SUCCESS Pages successfully freed.
> > +**/
> > +EFI_STATUS
> > +SmmInternalFreePagesExWithGuard (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN AddRegion
> > + )
> > +{
> > + EFI_PHYSICAL_ADDRESS MemoryToFree;
> > + UINTN PagesToFree;
> > +
> > + MemoryToFree = Memory;
> > + PagesToFree = NumberOfPages;
> > +
> > + AdjustMemoryF (&MemoryToFree, &PagesToFree);
> > + UnsetGuardForMemory (Memory, NumberOfPages);
> > +
> > +  return SmmInternalFreePagesEx (MemoryToFree, PagesToFree, AddRegion);
> > +}
> > +
> > +/**
> > +  Set all Guard pages which could not be set before entering SMM mode
> > +**/
> > +VOID
> > +SetAllGuardPages (
> > + VOID
> > + )
> > +{
> > + UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> > + UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> > + UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 TableEntry;
> > + UINT64 Address;
> > + UINT64 GuardPage;
> > + INTN Level;
> > + UINTN Index;
> > + BOOLEAN OnGuarding;
> > +
> > + SetMem64 (Tables, sizeof(Tables), 0);
> > + SetMem64 (Addresses, sizeof(Addresses), 0);
> > + SetMem64 (Indices, sizeof(Indices), 0);
> > +
> > + Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > + Tables[Level] = mGuardedMemoryMap;
> > + Address = 0;
> > + OnGuarding = FALSE;
> > +
> > + DEBUG_CODE (
> > + DumpGuardedMemoryBitmap ();
> > + );
> > +
> > + while (TRUE) {
> > + if (Indices[Level] > Entries[Level]) {
> > + Tables[Level] = 0;
> > + Level -= 1;
> > + } else {
> > +
> > + TableEntry = Tables[Level][Indices[Level]];
> > + Address = Addresses[Level];
> > +
> > + if (TableEntry == 0) {
> > +
> > + OnGuarding = FALSE;
> > +
> > + } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +
> > + Level += 1;
> > + Tables[Level] = (UINT64 *)TableEntry;
> > + Addresses[Level] = Address;
> > + Indices[Level] = 0;
> > +
> > + continue;
> > +
> > + } else {
> > +
> > + Index = 0;
> > + while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
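> > +          //
> > +          // Walk the bits of this map entry. A 0->1 transition means the
> > +          // page just before the guarded range is its head Guard; a 1->0
> > +          // transition means the page right after the guarded range is its
> > +          // tail Guard.
> > +          //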
> > + if ((TableEntry & 1) == 1) {
> > + if (OnGuarding) {
> > + GuardPage = 0;
> > + } else {
> > + GuardPage = Address - EFI_PAGE_SIZE;
> > + }
> > + OnGuarding = TRUE;
> > + } else {
> > + if (OnGuarding) {
> > + GuardPage = Address;
> > + } else {
> > + GuardPage = 0;
> > + }
> > + OnGuarding = FALSE;
> > + }
> > +
> > + if (GuardPage != 0) {
> > + SetGuardPage (GuardPage);
> > + }
> > +
> > + if (TableEntry == 0) {
> > + break;
> > + }
> > +
> > + TableEntry = RShiftU64 (TableEntry, 1);
> > + Address += EFI_PAGE_SIZE;
> > + Index += 1;
> > + }
> > + }
> > + }
> > +
> > + if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > + break;
> > + }
> > +
> > + Indices[Level] += 1;
> > + Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > + Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > + }
> > +}
> > +
> > +/**
> > + Hook function used to set all Guard pages after entering SMM mode
> > +**/
> > +VOID
> > +SmmEntryPointMemoryManagementHook (
> > + VOID
> > + )
> > +{
> > + EFI_STATUS Status;
> > + VOID *SmmCpu;
> > +
> > + if (!mIsSmmCpuMode) {
> > + Status = SmmLocateProtocol (&gEfiSmmCpuProtocolGuid, NULL,
> > &SmmCpu);
> > + if (!EFI_ERROR(Status)) {
> > + mIsSmmCpuMode = TRUE;
> > + SetAllGuardPages ();
> > + }
> > + }
> > +}
> > +
> > +/**
> > +  Helper function to convert a UINT64 value into its binary string representation
> > +
> > + @param[in] Value Value of a UINT64 integer
> > +  @param[out] BinString String buffer (at least 65 bytes) to contain the conversion result
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +Uint64ToBinString (
> > + IN UINT64 Value,
> > + OUT CHAR8 *BinString
> > + )
> > +{
> > + UINTN Index;
> > +
> > + if (BinString == NULL) {
> > + return;
> > + }
> > +
> > + for (Index = 64; Index > 0; --Index) {
> > + BinString[Index - 1] = '0' + (Value & 1);
> > + Value = RShiftU64 (Value, 1);
> > + }
> > + BinString[64] = '\0';
> > +}
> > +
> > +/**
> > + Dump the guarded memory bit map
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +DumpGuardedMemoryBitmap (
> > + VOID
> > + )
> > +{
> > + UINT64 Entries[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> > + UINT64 Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH]
> > + = GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS;
> > + UINT64 *Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > + UINT64 TableEntry;
> > + UINT64 Address;
> > + INTN Level;
> > + UINTN RepeatZero;
> > + CHAR8 String[GUARDED_HEAP_MAP_ENTRY_BITS + 1];
> > + CHAR8 *Ruler1 = " 3 2"
> > + " 1 0";
> > + CHAR8 *Ruler2 = "FEDCBA9876543210FEDCBA9876543210"
> > + "FEDCBA9876543210FEDCBA9876543210";
> > +
> > + if (mGuardedMemoryMap == NULL) {
> > + return;
> > + }
> > +
> > + DEBUG ((DEBUG_INFO, "============================="
> > + " Guarded Memory Bitmap "
> > + "==============================\r\n"));
> > + DEBUG ((DEBUG_INFO, " %a\r\n", Ruler1));
> > + DEBUG ((DEBUG_INFO, " %a\r\n", Ruler2));
> > +
> > +
> > + SetMem64 (Tables, sizeof(Tables), 0);
> > + SetMem64 (Addresses, sizeof(Addresses), 0);
> > + SetMem64 (Indices, sizeof(Indices), 0);
> > +
> > + Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > + Tables[Level] = mGuardedMemoryMap;
> > + Address = 0;
> > + RepeatZero = 0;
> > +
> > + while (TRUE) {
> > + if (Indices[Level] > Entries[Level]) {
> > +
> > + Tables[Level] = 0;
> > + Level -= 1;
> > + RepeatZero = 0;
> > +
> > + DEBUG ((
> > + DEBUG_INFO,
> > + "========================================="
> > + "=========================================\r\n"
> > + ));
> > +
> > + } else {
> > +
> > + TableEntry = Tables[Level][Indices[Level]];
> > + Address = Addresses[Level];
> > +
> > + if (TableEntry == 0) {
> > +
> > + if (Level == GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > + if (RepeatZero == 0) {
> > + Uint64ToBinString(TableEntry, String);
> > + DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
> > + } else if (RepeatZero == 1) {
> > + DEBUG ((DEBUG_INFO, "... : ...\r\n"));
> > + }
> > + RepeatZero += 1;
> > + }
> > +
> > + } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +
> > + Level += 1;
> > + Tables[Level] = (UINT64 *)TableEntry;
> > + Addresses[Level] = Address;
> > + Indices[Level] = 0;
> > + RepeatZero = 0;
> > +
> > + continue;
> > +
> > + } else {
> > +
> > + RepeatZero = 0;
> > + Uint64ToBinString(TableEntry, String);
> > + DEBUG ((DEBUG_INFO, "%016lx: %a\r\n", Address, String));
> > +
> > + }
> > + }
> > +
> > + if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > + break;
> > + }
> > +
> > + Indices[Level] += 1;
> > + Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > + Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > + }
> > +}
> > +
> > +/**
> > + Debug function used to verify if the Guard page is well set or not
> > +
> > + @param[in] BaseAddress Address of memory to check
> > + @param[in] NumberOfPages Size of memory in pages
> > +
> > + @return TRUE The head Guard and tail Guard are both well set
> > + @return FALSE The head Guard and/or tail Guard are not well set
> > +**/
> > +BOOLEAN
> > +VerifyMemoryGuard (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN NumberOfPages
> > + )
> > +{
> > + UINT64 *PageEntry;
> > + PAGE_ATTRIBUTE Attribute;
> > + EFI_PHYSICAL_ADDRESS Address;
> > +
> > + if (!mIsSmmCpuMode) {
> > + return TRUE;
> > + }
> > +
> > + Address = BaseAddress - EFI_PAGE_SIZE;
> > + PageEntry = GetPageTableEntry (Address, &Attribute);
> > + if (PageEntry == NULL || Attribute != Page4K) {
> > +    DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx!!!\r\n", Address));
> > + DumpGuardedMemoryBitmap ();
> > + return FALSE;
> > + }
> > +
> > + if ((*PageEntry & IA32_PG_P) != 0) {
> > +    DEBUG ((DEBUG_ERROR, "Head Guard is not set at: %016lx (%016lX)!!!\r\n",
> > + Address, *PageEntry));
> > + *(UINT8 *) Address = 0;
> > + DumpGuardedMemoryBitmap ();
> > + return FALSE;
> > + }
> > +
> > + Address = BaseAddress + EFI_PAGES_TO_SIZE (NumberOfPages);
> > + PageEntry = GetPageTableEntry (Address, &Attribute);
> > + if (PageEntry == NULL || Attribute != Page4K) {
> > +    DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx!!!\r\n", Address));
> > + DumpGuardedMemoryBitmap ();
> > + return FALSE;
> > + }
> > +
> > + if ((*PageEntry & IA32_PG_P) != 0) {
> > + DEBUG ((DEBUG_ERROR, "Tail Guard is not set at: %016lx (%016lX)!!!\r\n",
> > + Address, *PageEntry));
> > + *(UINT8 *) Address = 0;
> > + DumpGuardedMemoryBitmap ();
> > + return FALSE;
> > + }
> > +
> > + return TRUE;
> > +}
> > +
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> > new file mode 100644
> > index 0000000000..ecc10e83a7
> > --- /dev/null
> > +++ b/MdeModulePkg/Core/PiSmmCore/Misc/HeapGuard.h
> > @@ -0,0 +1,395 @@
> > +/** @file
> > + Data structure and functions to allocate and free memory space.
> > +
> > +Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > +This program and the accompanying materials
> > +are licensed and made available under the terms and conditions of the BSD License
> > +which accompanies this distribution. The full text of the license may be found at
> > +http://opensource.org/licenses/bsd-license.php
> > +
> > +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> > +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _HEAPGUARD_H_
> > +#define _HEAPGUARD_H_
> > +
> > +#include "PiSmmCore.h"
> > +#include "PageTable.h"
> > +
> > +//
> > +// Following macros are used to define and access the guarded memory bitmap
> > +// table.
> > +//
> > +// To simplify the access and reduce the memory used for this table, the
> > +// table is constructed in the similar way as page table structure but in
> > +// reverse direction, i.e. from bottom growing up to top.
> > +//
> > +// - 1-bit tracks 1 page (4KB)
> > +// - 1-UINT64 map entry tracks 256KB memory
> > +// - 1K-UINT64 map table tracks 256MB memory
> > +// - Five levels of tables can track any address of memory of 64-bit
> > +// system, like below.
> > +//
> > +// 512 * 512 * 512 * 512 * 1K * 64b * 4K
> > +//  111111111 111111111 111111111 111111111 1111111111 111111 111111111111
> > +// 63 54 45 36 27 17 11 0
> > +// 9b 9b 9b 9b 10b 6b 12b
> > +// L0 -> L1 -> L2 -> L3 -> L4 -> bits -> page
> > +// 1FF 1FF 1FF 1FF 3FF 3F FFF
> > +//
> > +// L4 table has 1K * sizeof(UINT64) = 8K (2-page), which can track 256MB
> > +// memory. Each table of L0-L3 will be allocated when its memory address
> > +// range is to be tracked. Only 1-page will be allocated each time. This
> > +// can save memory used to establish this map table.
> > +//
> > +// For a normal configuration of a system with 4G memory, two levels of tables
> > +// can track the whole memory, because two levels (L3+L4) of map tables have
> > +// already covered 37 bits of memory address. And for a normal UEFI BIOS,
> > +// less than 128M memory would be consumed during boot. That means we just
> > +// need
> > +//
> > +// 1-page (L3) + 2-page (L4)
> > +//
> > +// memory (3 pages) to track the memory allocations. In this case,
> > +// there's no need to setup L0-L2 tables.
> > +//
> > +
> > +//
> > +// Each entry occupies 8B/64b. 1-page can hold 512 entries, which spans 9
> > +// bits in address. (512 = 1 << 9)
> > +//
> > +#define BYTE_LENGTH_SHIFT 3 // (8 = 1 << 3)
> > +
> > +#define GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT \
> > + (EFI_PAGE_SHIFT - BYTE_LENGTH_SHIFT)
> > +
> > +#define GUARDED_HEAP_MAP_TABLE_DEPTH 5
> > +
> > +// Use UINT64_index + bit_index_of_UINT64 to locate the bit in map
> > +#define GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT 6 // (64 = 1 << 6)
> > +
> > +#define GUARDED_HEAP_MAP_ENTRY_BITS \
> > + (1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)
> > +
> > +#define GUARDED_HEAP_MAP_ENTRY_BYTES \
> > + (GUARDED_HEAP_MAP_ENTRY_BITS / 8)
> > +
> > +// L4 table address width: 64 - 9 * 4 - 6 - 12 = 10b
> > +#define GUARDED_HEAP_MAP_ENTRY_SHIFT \
> > + (GUARDED_HEAP_MAP_ENTRY_BITS \
> > + - GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 4 \
> > + - GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> > + - EFI_PAGE_SHIFT)
> > +
> > +// L4 table address mask: ((1 << 10) - 1) = 0x3FF
> > +#define GUARDED_HEAP_MAP_ENTRY_MASK \
> > + ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1)
> > +
> > +// Size of each L4 table: (1 << 10) * 8 = 8KB = 2-page
> > +#define GUARDED_HEAP_MAP_SIZE \
> > +  ((1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) * GUARDED_HEAP_MAP_ENTRY_BYTES)
> > +
> > +// Memory size tracked by one L4 table: 8KB * 8 * 4KB = 256MB
> > +#define GUARDED_HEAP_MAP_UNIT_SIZE \
> > + (GUARDED_HEAP_MAP_SIZE * 8 * EFI_PAGE_SIZE)
> > +
> > +// L4 table entry number: 8KB / 8 = 1024
> > +#define GUARDED_HEAP_MAP_ENTRIES_PER_UNIT \
> > + (GUARDED_HEAP_MAP_SIZE / GUARDED_HEAP_MAP_ENTRY_BYTES)
> > +
> > +// L4 table entry indexing
> > +#define GUARDED_HEAP_MAP_ENTRY_INDEX(Address) \
> > + (RShiftU64 (Address, EFI_PAGE_SHIFT \
> > + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) \
> > + & GUARDED_HEAP_MAP_ENTRY_MASK)
> > +
> > +// L4 table entry bit indexing
> > +#define GUARDED_HEAP_MAP_ENTRY_BIT_INDEX(Address) \
> > + (RShiftU64 (Address, EFI_PAGE_SHIFT) \
> > + & ((1 << GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT) - 1))
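> > +
> > +// For illustration: with the shifts above, a page-aligned address such as
> > +// 0x12345000 (page number 0x12345) falls into UINT64 entry
> > +// (0x12345 >> 6) & 0x3FF = 0x8D of its L4 table, at bit 0x12345 & 0x3F = 0x05
> > +// within that entry.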
> > +
> > +//
> > +// Total bits (pages) tracked by one L4 table (65536-bit)
> > +//
> > +#define GUARDED_HEAP_MAP_BITS \
> > + (1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
> > + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT))
> > +
> > +//
> > +// Bit indexing inside the whole L4 table (0 - 65535)
> > +//
> > +#define GUARDED_HEAP_MAP_BIT_INDEX(Address) \
> > + (RShiftU64 (Address, EFI_PAGE_SHIFT) \
> > + & ((1 << (GUARDED_HEAP_MAP_ENTRY_SHIFT \
> > + + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT)) - 1))
> > +
> > +//
> > +// Memory address bit width tracked by L4 table: 10 + 6 + 12 = 28
> > +//
> > +#define GUARDED_HEAP_MAP_TABLE_SHIFT \
> > +  (GUARDED_HEAP_MAP_ENTRY_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> > + + EFI_PAGE_SHIFT)
> > +
> > +//
> > +// Macro used to initialize the local array variable for map table traversing
> > +// {55, 46, 37, 28, 18}
> > +//
> > +#define GUARDED_HEAP_MAP_TABLE_DEPTH_SHIFTS \
> > + { \
> > +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 3, \
> > +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT * 2, \
> > +    GUARDED_HEAP_MAP_TABLE_SHIFT + GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT,     \
> > + GUARDED_HEAP_MAP_TABLE_SHIFT, \
> > + EFI_PAGE_SHIFT + GUARDED_HEAP_MAP_ENTRY_BIT_SHIFT \
> > + }
> > +
> > +//
> > +// Masks used to extract address range of each level of table
> > +// {0x1FF, 0x1FF, 0x1FF, 0x1FF, 0x3FF}
> > +//
> > +#define GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS \
> > + { \
> > + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> > + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> > + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> > + (1 << GUARDED_HEAP_MAP_TABLE_ENTRY_SHIFT) - 1, \
> > + (1 << GUARDED_HEAP_MAP_ENTRY_SHIFT) - 1 \
> > + }
> > +
> > +//
> > +// Memory type to guard (matching the related PCD definition)
> > +//
> > +#define GUARD_HEAP_TYPE_POOL BIT2
> > +#define GUARD_HEAP_TYPE_PAGE BIT3
> > +
> > +typedef struct {
> > + UINT32 TailMark;
> > + UINT32 HeadMark;
> > + EFI_PHYSICAL_ADDRESS Address;
> > + LIST_ENTRY Link;
> > +} HEAP_GUARD_NODE;
> > +
> > +/**
> > + Set head Guard and tail Guard for the given memory range
> > +
> > + @param[in] Memory Base address of memory to set guard for
> > + @param[in] NumberOfPages Memory size in pages
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +SetGuardForMemory (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages
> > + );
> > +
> > +/**
> > + Unset head Guard and tail Guard for the given memory range
> > +
> > + @param[in] Memory Base address of memory to unset guard for
> > + @param[in] NumberOfPages Memory size in pages
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +UnsetGuardForMemory (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages
> > + );
> > +
> > +/**
> > + Adjust the base and number of pages to really allocate according to Guard
> > +
> > + @param[in/out] Memory Base address of free memory
> > + @param[in/out] NumberOfPages Size of memory to allocate
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +AdjustMemoryA (
> > + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN OUT UINTN *NumberOfPages
> > + );
> > +
> > +/**
> > + Adjust the start address and number of pages to free according to Guard
> > +
> > +  The purpose of this function is to keep the shared Guard page with the
> > +  adjacent memory block if it's still in guard, or free it if it is no longer
> > +  shared. Another purpose is to reserve pages as Guard pages in a partial
> > +  page free situation.
> > +
> > + @param[in/out] Memory Base address of memory to free
> > + @param[in/out] NumberOfPages Size of memory to free
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +AdjustMemoryF (
> > + IN OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN OUT UINTN *NumberOfPages
> > + );
> > +
> > +/**
> > + Check to see if the pool at the given address should be guarded or not
> > +
> > + @param[in] MemoryType Pool type to check
> > +
> > +
> > + @return TRUE The given type of pool should be guarded
> > + @return FALSE The given type of pool should not be guarded
> > +**/
> > +BOOLEAN
> > +IsPoolTypeToGuard (
> > + IN EFI_MEMORY_TYPE MemoryType
> > + );
> > +
> > +/**
> > + Check to see if the page at the given address should be guarded or not
> > +
> > + @param[in] MemoryType Page type to check
> > + @param[in] AllocateType Allocation type to check
> > +
> > + @return TRUE The given type of page should be guarded
> > + @return FALSE The given type of page should not be guarded
> > +**/
> > +BOOLEAN
> > +IsPageTypeToGuard (
> > + IN EFI_MEMORY_TYPE MemoryType,
> > + IN EFI_ALLOCATE_TYPE AllocateType
> > + );
> > +
> > +/**
> > + Check to see if the page at the given address is guarded or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is guarded
> > + @return FALSE The page at Address is not guarded
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsMemoryGuarded (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + );
> > +
> > +/**
> > + Check to see if the page at the given address is a Guard page or not
> > +
> > + @param[in] Address The address to check for
> > +
> > + @return TRUE The page at Address is a Guard page
> > + @return FALSE The page at Address is not a Guard page
> > +**/
> > +BOOLEAN
> > +EFIAPI
> > +IsGuardPage (
> > + IN EFI_PHYSICAL_ADDRESS Address
> > + );
> > +
> > +/**
> > + Dump the guarded memory bit map
> > +
> > + @return VOID
> > +**/
> > +VOID
> > +EFIAPI
> > +DumpGuardedMemoryBitmap (
> > + VOID
> > + );
> > +
> > +/**
> > +  Adjust the pool head position to make sure the Guard page is adjacent to
> > + pool tail or pool head.
> > +
> > + @param[in] Memory Base address of memory allocated
> > + @param[in] NoPages Number of pages actually allocated
> > + @param[in] Size Size of memory requested
> > + (plus pool head/tail overhead)
> > +
> > + @return Address of pool head
> > +**/
> > +VOID *
> > +AdjustPoolHeadA (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NoPages,
> > + IN UINTN Size
> > + );
> > +
> > +/**
> > + Get the page base address according to pool head address
> > +
> > + @param[in] Memory Head address of pool to free
> > +
> > + @return Address of pool head
> > +**/
> > +VOID *
> > +AdjustPoolHeadF (
> > + IN EFI_PHYSICAL_ADDRESS Memory
> > + );
> > +
> > +/**
> > + Helper function of memory allocation with Guard pages
> > +
> > + @param FreePageList The free page node.
> > + @param NumberOfPages Number of pages to be allocated.
> > +  @param MaxAddress Request to allocate memory below this address.
> > + @param MemoryType Type of memory requested.
> > +
> > + @return Memory address of allocated pages.
> > +**/
> > +UINTN
> > +InternalAllocMaxAddressWithGuard (
> > + IN OUT LIST_ENTRY *FreePageList,
> > + IN UINTN NumberOfPages,
> > + IN UINTN MaxAddress,
> > + IN EFI_MEMORY_TYPE MemoryType
> > + );
> > +
> > +/**
> > + Helper function of memory free with Guard pages
> > +
> > + @param[in] Memory Base address of memory being freed.
> > + @param[in] NumberOfPages The number of pages to free.
> > +  @param[in] AddRegion If this memory is a newly added region.
> > +
> > +  @retval EFI_NOT_FOUND Could not find the entry that covers the range.
> > +  @retval EFI_INVALID_PARAMETER Address not aligned, Address is zero or NumberOfPages is zero.
> > +  @retval EFI_SUCCESS Pages successfully freed.
> > +**/
> > +EFI_STATUS
> > +SmmInternalFreePagesExWithGuard (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN AddRegion
> > + );
> > +
> > +/**
> > + Check to see if the heap guard is enabled for page and/or pool allocation
> > +
> > + @return TRUE/FALSE
> > +**/
> > +BOOLEAN
> > +IsHeapGuardEnabled (
> > + VOID
> > + );
> > +
> > +/**
> > + Debug function used to verify if the Guard page is well set or not
> > +
> > + @param[in] BaseAddress Address of memory to check
> > + @param[in] NumberOfPages Size of memory in pages
> > +
> > + @return TRUE The head Guard and tail Guard are both well set
> > + @return FALSE The head Guard and/or tail Guard are not well set
> > +**/
> > +BOOLEAN
> > +VerifyMemoryGuard (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINTN NumberOfPages
> > + );
> > +
> > +extern BOOLEAN mOnGuarding;
> > +
> > +#endif
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> > new file mode 100644
> > index 0000000000..d41b3e923f
> > --- /dev/null
> > +++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.c
> > @@ -0,0 +1,704 @@
> > +/** @file
> > +
> > +Copyright (c) 2016 - 2017, Intel Corporation. All rights reserved.<BR>
> > +This program and the accompanying materials
> > +are licensed and made available under the terms and conditions of the BSD License
> > +which accompanies this distribution. The full text of the license may be found at
> > +http://opensource.org/licenses/bsd-license.php
> > +
> > +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> > +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#include "PiSmmCore.h"
> > +#include "PageTable.h"
> > +
> > +#include <Library/CpuLib.h>
> > +
> > +UINT64 mAddressEncMask = 0;
> > +UINT8 mPhysicalAddressBits = 32;
> > +
> > +PAGE_ATTRIBUTE_TABLE mPageAttributeTable[] = {
> > + {PageNone, 0, 0},
> > + {Page4K, SIZE_4KB, PAGING_4K_ADDRESS_MASK_64},
> > + {Page2M, SIZE_2MB, PAGING_2M_ADDRESS_MASK_64},
> > + {Page1G, SIZE_1GB, PAGING_1G_ADDRESS_MASK_64},
> > +};
> > +
> > +/**
> > +  Calculate the maximum number of supported physical address bits.
> > +
> > +  @return The number of supported physical address bits.
> > +**/
> > +UINT8
> > +CalculateMaximumSupportAddress (
> > + VOID
> > + )
> > +{
> > + UINT32 RegEax;
> > + UINT8 PhysicalAddressBits;
> > + VOID *Hob;
> > +
> > + //
> > + // Get physical address bits supported.
> > + //
> > + Hob = GetFirstHob (EFI_HOB_TYPE_CPU);
> > + if (Hob != NULL) {
> > + PhysicalAddressBits = ((EFI_HOB_CPU *) Hob)->SizeOfMemorySpace;
> > + } else {
> > + AsmCpuid (0x80000000, &RegEax, NULL, NULL, NULL);
> > + if (RegEax >= 0x80000008) {
> > + AsmCpuid (0x80000008, &RegEax, NULL, NULL, NULL);
> > + PhysicalAddressBits = (UINT8) RegEax;
> > + } else {
> > + PhysicalAddressBits = 36;
> > + }
> > + }
> > +
> > + //
> > +  // IA-32e paging translates 48-bit linear addresses to 52-bit physical addresses.
> > + //
> > + ASSERT (PhysicalAddressBits <= 52);
> > + if (PhysicalAddressBits > 48) {
> > + PhysicalAddressBits = 48;
> > + }
> > + return PhysicalAddressBits;
> > +}
> > +
> > +/**
> > + Return page table base.
> > +
> > + @return page table base.
> > +**/
> > +UINTN
> > +GetPageTableBase (
> > + VOID
> > + )
> > +{
> > + return (AsmReadCr3 () & PAGING_4K_ADDRESS_MASK_64);
> > +}
> > +
> > +/**
> > + Return length according to page attributes.
> > +
> > +  @param[in] PageAttribute The page attribute of the page entry.
> > +
> > + @return The length of page entry.
> > +**/
> > +UINTN
> > +PageAttributeToLength (
> > + IN PAGE_ATTRIBUTE PageAttribute
> > + )
> > +{
> > + if (PageAttribute <= Page1G) {
> > + return (UINTN)mPageAttributeTable[PageAttribute].Length;
> > + }
> > + return 0;
> > +}
> > +
> > +/**
> > + Return address mask according to page attributes.
> > +
> > +  @param[in] PageAttribute The page attribute of the page entry.
> > +
> > + @return The address mask of page entry.
> > +**/
> > +UINTN
> > +PageAttributeToMask (
> > + IN PAGE_ATTRIBUTE PageAttribute
> > + )
> > +{
> > + if (PageAttribute <= Page1G) {
> > + return (UINTN)mPageAttributeTable[PageAttribute].AddressMask;
> > + }
> > + return 0;
> > +}
> > +
> > +/**
> > + Return page table entry to match the address.
> > +
> > + @param[in] Address The address to be checked.
> > +  @param[out] PageAttribute The page attribute of the page entry.
> > +
> > + @return The page entry.
> > +**/
> > +VOID *
> > +GetPageTableEntry (
> > + IN PHYSICAL_ADDRESS Address,
> > + OUT PAGE_ATTRIBUTE *PageAttribute
> > + )
> > +{
> > + UINTN Index1;
> > + UINTN Index2;
> > + UINTN Index3;
> > + UINTN Index4;
> > + UINT64 *L1PageTable;
> > + UINT64 *L2PageTable;
> > + UINT64 *L3PageTable;
> > + UINT64 *L4PageTable;
> > +
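> > +  //
> > +  // Split the linear address into the four paging-structure indexes:
> > +  // PML4 (bits 47:39), PDPT (bits 38:30), PD (bits 29:21) and PT (bits 20:12).
> > +  //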
> > + Index4 = ((UINTN)RShiftU64 (Address, 39)) & PAGING_PAE_INDEX_MASK;
> > + Index3 = ((UINTN)Address >> 30) & PAGING_PAE_INDEX_MASK;
> > + Index2 = ((UINTN)Address >> 21) & PAGING_PAE_INDEX_MASK;
> > + Index1 = ((UINTN)Address >> 12) & PAGING_PAE_INDEX_MASK;
> > +
> > + if (sizeof(UINTN) == sizeof(UINT64)) {
> > + L4PageTable = (UINT64 *)GetPageTableBase ();
> > + if (L4PageTable[Index4] == 0) {
> > + *PageAttribute = PageNone;
> > + return NULL;
> > + }
> > +
> > +    L3PageTable = (UINT64 *)(UINTN)(L4PageTable[Index4] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> > + } else {
> > + L3PageTable = (UINT64 *)GetPageTableBase ();
> > + }
> > + if (L3PageTable[Index3] == 0) {
> > + *PageAttribute = PageNone;
> > + return NULL;
> > + }
> > + if ((L3PageTable[Index3] & IA32_PG_PS) != 0) {
> > + // 1G
> > + *PageAttribute = Page1G;
> > + return &L3PageTable[Index3];
> > + }
> > +
> > +  L2PageTable = (UINT64 *)(UINTN)(L3PageTable[Index3] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> > + if (L2PageTable[Index2] == 0) {
> > + *PageAttribute = PageNone;
> > + return NULL;
> > + }
> > + if ((L2PageTable[Index2] & IA32_PG_PS) != 0) {
> > + // 2M
> > + *PageAttribute = Page2M;
> > + return &L2PageTable[Index2];
> > + }
> > +
> > + // 4k
> > +  L1PageTable = (UINT64 *)(UINTN)(L2PageTable[Index2] & ~mAddressEncMask & PAGING_4K_ADDRESS_MASK_64);
> > + if ((L1PageTable[Index1] == 0) && (Address != 0)) {
> > + *PageAttribute = PageNone;
> > + return NULL;
> > + }
> > + *PageAttribute = Page4K;
> > + return &L1PageTable[Index1];
> > +}
> > +
> > +/**
> > + Return memory attributes of page entry.
> > +
> > + @param[in] PageEntry The page entry.
> > +
> > + @return Memory attributes of page entry.
> > +**/
> > +UINT64
> > +GetAttributesFromPageEntry (
> > + IN UINT64 *PageEntry
> > + )
> > +{
> > + UINT64 Attributes;
> > + Attributes = 0;
> > + if ((*PageEntry & IA32_PG_P) == 0) {
> > + Attributes |= EFI_MEMORY_RP;
> > + }
> > + if ((*PageEntry & IA32_PG_RW) == 0) {
> > + Attributes |= EFI_MEMORY_RO;
> > + }
> > + if ((*PageEntry & IA32_PG_NX) != 0) {
> > + Attributes |= EFI_MEMORY_XP;
> > + }
> > + return Attributes;
> > +}
> > +
> > +/**
> > + Modify memory attributes of page entry.
> > +
> > + @param[in] PageEntry The page entry.
> > +  @param[in] Attributes The bit mask of attributes to modify for the memory region.
> > +  @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
> > +  @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
> > +**/
> > +VOID
> > +ConvertPageEntryAttribute (
> > + IN UINT64 *PageEntry,
> > + IN UINT64 Attributes,
> > + IN BOOLEAN IsSet,
> > + OUT BOOLEAN *IsModified
> > + )
> > +{
> > + UINT64 CurrentPageEntry;
> > + UINT64 NewPageEntry;
> > +
> > + CurrentPageEntry = *PageEntry;
> > + NewPageEntry = CurrentPageEntry;
> > + if ((Attributes & EFI_MEMORY_RP) != 0) {
> > + if (IsSet) {
> > + NewPageEntry &= ~(UINT64)IA32_PG_P;
> > + } else {
> > + NewPageEntry |= IA32_PG_P;
> > + }
> > + }
> > + if ((Attributes & EFI_MEMORY_RO) != 0) {
> > + if (IsSet) {
> > + NewPageEntry &= ~(UINT64)IA32_PG_RW;
> > + } else {
> > + NewPageEntry |= IA32_PG_RW;
> > + }
> > + }
> > + if ((Attributes & EFI_MEMORY_XP) != 0) {
> > + if (IsSet) {
> > + NewPageEntry |= IA32_PG_NX;
> > + } else {
> > + NewPageEntry &= ~IA32_PG_NX;
> > + }
> > + }
> > +
> > + if (CurrentPageEntry != NewPageEntry) {
> > + *PageEntry = NewPageEntry;
> > + *IsModified = TRUE;
> > +    DEBUG ((DEBUG_INFO, "(SMM)ConvertPageEntryAttribute 0x%lx", CurrentPageEntry));
> > + DEBUG ((DEBUG_INFO, "->0x%lx\n", NewPageEntry));
> > + } else {
> > + *IsModified = FALSE;
> > + }
> > +}
> > +
> > +/**
> > +  This function returns whether there is a need to split the page entry.
> > +
> > + @param[in] BaseAddress The base address to be checked.
> > + @param[in] Length The length to be checked.
> > + @param[in] PageEntry The page entry to be checked.
> > + @param[in] PageAttribute The page attribute of the page entry.
> > +
> > +  @return The page attribute to split to if the page entry needs to be split; PageNone otherwise.
> > +**/
> > +PAGE_ATTRIBUTE
> > +NeedSplitPage (
> > + IN PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 *PageEntry,
> > + IN PAGE_ATTRIBUTE PageAttribute
> > + )
> > +{
> > + UINT64 PageEntryLength;
> > +
> > + PageEntryLength = PageAttributeToLength (PageAttribute);
> > +
> > +  if (((BaseAddress & (PageEntryLength - 1)) == 0) && (Length >= PageEntryLength)) {
> > + return PageNone;
> > + }
> > +
> > + if (((BaseAddress & PAGING_2M_MASK) != 0) || (Length < SIZE_2MB)) {
> > + return Page4K;
> > + }
> > +
> > + return Page2M;
> > +}
> > +
> > +/**
> > + This function splits one page entry to small page entries.
> > +
> > +  @param[in] PageEntry The page entry to be split.
> > +  @param[in] PageAttribute The page attribute of the page entry.
> > +  @param[in] SplitAttribute How to split the page entry.
> > +
> > +  @retval RETURN_SUCCESS The page entry is split.
> > +  @retval RETURN_UNSUPPORTED The page entry does not support being split.
> > +  @retval RETURN_OUT_OF_RESOURCES No resources to split the page entry.
> > +**/
> > +RETURN_STATUS
> > +SplitPage (
> > + IN UINT64 *PageEntry,
> > + IN PAGE_ATTRIBUTE PageAttribute,
> > + IN PAGE_ATTRIBUTE SplitAttribute
> > + )
> > +{
> > + UINT64 BaseAddress;
> > + UINT64 *NewPageEntry;
> > + UINTN Index;
> > +
> > + ASSERT (PageAttribute == Page2M || PageAttribute == Page1G);
> > +
> > + if (PageAttribute == Page2M) {
> > + //
> > + // Split 2M to 4K
> > + //
> > + ASSERT (SplitAttribute == Page4K);
> > + if (SplitAttribute == Page4K) {
> > + NewPageEntry = PageAlloc (1);
> > + DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
> > + if (NewPageEntry == NULL) {
> > + return RETURN_OUT_OF_RESOURCES;
> > + }
> > + BaseAddress = *PageEntry & PAGING_2M_ADDRESS_MASK_64;
> > + for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> > +        NewPageEntry[Index] = (BaseAddress + SIZE_4KB * Index) | mAddressEncMask | ((*PageEntry) & PAGE_PROGATE_BITS);
> > + }
> > +      (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > + return RETURN_SUCCESS;
> > + } else {
> > + return RETURN_UNSUPPORTED;
> > + }
> > + } else if (PageAttribute == Page1G) {
> > + //
> > + // Split 1G to 2M
> > +    // There is no need to support 1G->4K directly; we should use 1G->2M, then
> > +    // 2M->4K, to get a more compact page table.
> > + //
> > + ASSERT (SplitAttribute == Page2M || SplitAttribute == Page4K);
> > + if ((SplitAttribute == Page2M || SplitAttribute == Page4K)) {
> > + NewPageEntry = PageAlloc (1);
> > + DEBUG ((DEBUG_VERBOSE, "Split - 0x%x\n", NewPageEntry));
> > + if (NewPageEntry == NULL) {
> > + return RETURN_OUT_OF_RESOURCES;
> > + }
> > + BaseAddress = *PageEntry & PAGING_1G_ADDRESS_MASK_64;
> > + for (Index = 0; Index < SIZE_4KB / sizeof(UINT64); Index++) {
> > +        NewPageEntry[Index] = (BaseAddress + SIZE_2MB * Index) | mAddressEncMask | IA32_PG_PS | ((*PageEntry) & PAGE_PROGATE_BITS);
> > + }
> > +      (*PageEntry) = (UINT64)(UINTN)NewPageEntry | mAddressEncMask | PAGE_ATTRIBUTE_BITS;
> > + return RETURN_SUCCESS;
> > + } else {
> > + return RETURN_UNSUPPORTED;
> > + }
> > + } else {
> > + return RETURN_UNSUPPORTED;
> > + }
> > +}
> > +
> > +/**
> > +  This function modifies the page attributes for the memory region specified by BaseAddress and
> > +  Length from their current attributes to the attributes specified by Attributes.
> > +
> > +  Caller should make sure BaseAddress and Length are at page boundary.
> > +
> > +  @param[in] BaseAddress The physical address that is the start address of a memory region.
> > +  @param[in] Length The size in bytes of the memory region.
> > +  @param[in] Attributes The bit mask of attributes to modify for the memory region.
> > +  @param[in] IsSet TRUE means to set attributes. FALSE means to clear attributes.
> > +  @param[out] IsSplitted TRUE means page table is split. FALSE means page table is not split.
> > +  @param[out] IsModified TRUE means page table modified. FALSE means page table not modified.
> > +
> > +  @retval RETURN_SUCCESS The attributes were modified for the memory region.
> > +  @retval RETURN_ACCESS_DENIED The attributes for the memory resource range specified by
> > +                               BaseAddress and Length cannot be modified.
> > +  @retval RETURN_INVALID_PARAMETER Length is zero.
> > +                                   Attributes specified an illegal combination of attributes that
> > +                                   cannot be set together.
> > +  @retval RETURN_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
> > +                                  the memory resource range.
> > +  @retval RETURN_UNSUPPORTED The processor does not support one or more bytes of the memory
> > +                             resource range specified by BaseAddress and Length.
> > +                             The bit mask of attributes is not supported for the memory resource
> > +                             range specified by BaseAddress and Length.
> > +**/
> > +RETURN_STATUS
> > +EFIAPI
> > +ConvertMemoryPageAttributes (
> > + IN PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes,
> > + IN BOOLEAN IsSet,
> > + OUT BOOLEAN *IsSplitted, OPTIONAL
> > + OUT BOOLEAN *IsModified OPTIONAL
> > + )
> > +{
> > + UINT64 *PageEntry;
> > + PAGE_ATTRIBUTE PageAttribute;
> > + UINTN PageEntryLength;
> > + PAGE_ATTRIBUTE SplitAttribute;
> > + RETURN_STATUS Status;
> > + BOOLEAN IsEntryModified;
> > + EFI_PHYSICAL_ADDRESS MaximumSupportMemAddress;
> > +
> > + ASSERT (Attributes != 0);
> > +  ASSERT ((Attributes & ~(EFI_MEMORY_RP | EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0);
> > +
> > + ASSERT ((BaseAddress & (SIZE_4KB - 1)) == 0);
> > + ASSERT ((Length & (SIZE_4KB - 1)) == 0);
> > +
> > + if (Length == 0) {
> > + return RETURN_INVALID_PARAMETER;
> > + }
> > +
> > +  MaximumSupportMemAddress = (EFI_PHYSICAL_ADDRESS)(UINTN)(LShiftU64 (1, mPhysicalAddressBits) - 1);
> > + if (BaseAddress > MaximumSupportMemAddress) {
> > + return RETURN_UNSUPPORTED;
> > + }
> > + if (Length > MaximumSupportMemAddress) {
> > + return RETURN_UNSUPPORTED;
> > + }
> > +  if ((Length != 0) && (BaseAddress > MaximumSupportMemAddress - (Length - 1))) {
> > + return RETURN_UNSUPPORTED;
> > + }
> > +
> > +// DEBUG ((DEBUG_ERROR, "ConvertMemoryPageAttributes(%x) - %016lx, %016lx, %02lx\n", IsSet, BaseAddress, Length, Attributes));
> > +
> > + if (IsSplitted != NULL) {
> > + *IsSplitted = FALSE;
> > + }
> > + if (IsModified != NULL) {
> > + *IsModified = FALSE;
> > + }
> > +
> > + //
> > +  // Below logic is to check 2M/4K page to make sure we do not waste memory.
> > + //
> > + while (Length != 0) {
> > + PageEntry = GetPageTableEntry (BaseAddress, &PageAttribute);
> > + if (PageEntry == NULL) {
> > + return RETURN_UNSUPPORTED;
> > + }
> > + PageEntryLength = PageAttributeToLength (PageAttribute);
> > +    SplitAttribute = NeedSplitPage (BaseAddress, Length, PageEntry, PageAttribute);
> > + if (SplitAttribute == PageNone) {
> > +      ConvertPageEntryAttribute (PageEntry, Attributes, IsSet, &IsEntryModified);
> > + if (IsEntryModified) {
> > + if (IsModified != NULL) {
> > + *IsModified = TRUE;
> > + }
> > + }
> > + //
> > + // Convert success, move to next
> > + //
> > + BaseAddress += PageEntryLength;
> > + Length -= PageEntryLength;
> > + } else {
> > + Status = SplitPage (PageEntry, PageAttribute, SplitAttribute);
> > + if (RETURN_ERROR (Status)) {
> > + return RETURN_UNSUPPORTED;
> > + }
> > + if (IsSplitted != NULL) {
> > + *IsSplitted = TRUE;
> > + }
> > + if (IsModified != NULL) {
> > + *IsModified = TRUE;
> > + }
> > + //
> > + // Just split current page
> > +      // Convert success in next round
> > + //
> > + }
> > + }
> > +
> > + return RETURN_SUCCESS;
> > +}
> > +
> > +/**
> > + FlushTlb on current processor.
> > +
> > + @param[in,out] Buffer Pointer to private data buffer.
> > +**/
> > +VOID
> > +EFIAPI
> > +FlushTlbOnCurrentProcessor (
> > + IN OUT VOID *Buffer
> > + )
> > +{
> > + CpuFlushTlb ();
> > +}
> > +
> > +/**
> > + FlushTlb for all processors.
> > +**/
> > +VOID
> > +FlushTlbForAll (
> > + VOID
> > + )
> > +{
> > + UINTN Index;
> > +
> > + FlushTlbOnCurrentProcessor (NULL);
> > +
> > + if (gSmmCoreSmst.SmmStartupThisAp == NULL) {
> > + DEBUG ((DEBUG_WARN, "Cannot flush TLB for APs\r\n"));
> > + return;
> > + }
> > +
> > + for (Index = 0; Index < gSmmCoreSmst.NumberOfCpus; Index++) {
> > + if (Index != gSmmCoreSmst.CurrentlyExecutingCpu) {
> > +      // Force to start up AP in blocking mode.
> > +      gSmmCoreSmst.SmmStartupThisAp (FlushTlbOnCurrentProcessor, Index, NULL);
> > +      // Do not check return status, because AP might not be present in some corner cases.
> > + }
> > + }
> > +}
> > +
> > +/**
> > +  This function sets the attributes for the memory region specified by BaseAddress and
> > +  Length from their current attributes to the attributes specified by Attributes.
> > +
> > +  @param[in] BaseAddress The physical address that is the start address of a memory region.
> > +  @param[in] Length The size in bytes of the memory region.
> > +  @param[in] Attributes The bit mask of attributes to set for the memory region.
> > +  @param[out] IsSplitted TRUE means page table is split. FALSE means page table is not split.
> > +
> > +  @retval EFI_SUCCESS The attributes were set for the memory region.
> > +  @retval EFI_ACCESS_DENIED The attributes for the memory resource range specified by
> > +                            BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be set together.
> > +  @retval EFI_OUT_OF_RESOURCES There are not enough system resources to modify the attributes of
> > +                               the memory resource range.
> > +  @retval EFI_UNSUPPORTED The processor does not support one or more bytes of the memory
> > +                          resource range specified by BaseAddress and Length.
> > +                          The bit mask of attributes is not supported for the memory resource
> > +                          range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmSetMemoryAttributesEx (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes,
> > + OUT BOOLEAN *IsSplitted OPTIONAL
> > + )
> > +{
> > + EFI_STATUS Status;
> > + BOOLEAN IsModified;
> > +
> > +  Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes, TRUE, IsSplitted, &IsModified);
> > + if (!EFI_ERROR(Status)) {
> > + if (IsModified) {
> > + //
> > + // Flush TLB as last step
> > + //
> > + FlushTlbForAll();
> > + }
> > + }
> > +
> > + return Status;
> > +}
> > +
> > +/**
> > +  This function clears the given Attributes from the current attributes of the memory
> > +  region specified by BaseAddress and Length.
> > +
> > +  @param[in]  BaseAddress  The physical address that is the start address of a memory region.
> > +  @param[in]  Length       The size in bytes of the memory region.
> > +  @param[in]  Attributes   The bit mask of attributes to clear for the memory region.
> > +  @param[out] IsSplitted   TRUE means the page table has been split; FALSE means it has not.
> > +
> > +  @retval EFI_SUCCESS           The attributes were cleared for the memory region.
> > +  @retval EFI_ACCESS_DENIED     The attributes for the memory resource range specified by
> > +                                BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be cleared together.
> > +  @retval EFI_OUT_OF_RESOURCES  There are not enough system resources to modify the attributes of
> > +                                the memory resource range.
> > +  @retval EFI_UNSUPPORTED       The processor does not support one or more bytes of the memory
> > +                                resource range specified by BaseAddress and Length.
> > +                                The bit mask of attributes is not supported for the memory resource
> > +                                range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmClearMemoryAttributesEx (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes,
> > + OUT BOOLEAN *IsSplitted OPTIONAL
> > + )
> > +{
> > + EFI_STATUS Status;
> > + BOOLEAN IsModified;
> > +
> > +  Status = ConvertMemoryPageAttributes (BaseAddress, Length, Attributes, FALSE, IsSplitted, &IsModified);
> > + if (!EFI_ERROR(Status)) {
> > + if (IsModified) {
> > + //
> > + // Flush TLB as last step
> > + //
> > + FlushTlbForAll();
> > + }
> > + }
> > +
> > + return Status;
> > +}
> > +
> > +/**
> > +  This function sets the attributes for the memory region specified by BaseAddress and
> > +  Length from their current attributes to the attributes specified by Attributes.
> > +
> > +  @param[in]  BaseAddress  The physical address that is the start address of a memory region.
> > +  @param[in]  Length       The size in bytes of the memory region.
> > +  @param[in]  Attributes   The bit mask of attributes to set for the memory region.
> > +
> > +  @retval EFI_SUCCESS           The attributes were set for the memory region.
> > +  @retval EFI_ACCESS_DENIED     The attributes for the memory resource range specified by
> > +                                BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be set together.
> > +  @retval EFI_OUT_OF_RESOURCES  There are not enough system resources to modify the attributes of
> > +                                the memory resource range.
> > +  @retval EFI_UNSUPPORTED       The processor does not support one or more bytes of the memory
> > +                                resource range specified by BaseAddress and Length.
> > +                                The bit mask of attributes is not supported for the memory resource
> > +                                range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmSetMemoryAttributes (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes
> > + )
> > +{
> > +  return SmmSetMemoryAttributesEx (BaseAddress, Length, Attributes, NULL);
> > +}
> > +
> > +/**
> > +  This function clears the given Attributes from the current attributes of the memory
> > +  region specified by BaseAddress and Length.
> > +
> > +  @param[in]  BaseAddress  The physical address that is the start address of a memory region.
> > +  @param[in]  Length       The size in bytes of the memory region.
> > +  @param[in]  Attributes   The bit mask of attributes to clear for the memory region.
> > +
> > +  @retval EFI_SUCCESS           The attributes were cleared for the memory region.
> > +  @retval EFI_ACCESS_DENIED     The attributes for the memory resource range specified by
> > +                                BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be cleared together.
> > +  @retval EFI_OUT_OF_RESOURCES  There are not enough system resources to modify the attributes of
> > +                                the memory resource range.
> > +  @retval EFI_UNSUPPORTED       The processor does not support one or more bytes of the memory
> > +                                resource range specified by BaseAddress and Length.
> > +                                The bit mask of attributes is not supported for the memory resource
> > +                                range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmClearMemoryAttributes (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes
> > + )
> > +{
> > +  return SmmClearMemoryAttributesEx (BaseAddress, Length, Attributes, NULL);
> > +}
> > +
> > +/**
> > + Initialize the Page Table lib.
> > +**/
> > +VOID
> > +InitializePageTableLib (
> > + VOID
> > + )
> > +{
> > +  mAddressEncMask      = PcdGet64 (PcdPteMemoryEncryptionAddressOrMask) & PAGING_1G_ADDRESS_MASK_64;
> > +  mPhysicalAddressBits = CalculateMaximumSupportAddress ();
> > +  DEBUG ((DEBUG_INFO, "mAddressEncMask      = 0x%lx\r\n", mAddressEncMask));
> > +  DEBUG ((DEBUG_INFO, "mPhysicalAddressBits = %d\r\n", mPhysicalAddressBits));
> > +  return;
> > +}
> > +
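
(A short aside for readers of this file: the sketch below shows how I expect a caller to drive SmmSetMemoryAttributesEx() when arming a single guard page. The function name and the use of EFI_MEMORY_RP as the attribute value are illustrative assumptions on my part; the real call sites are in Misc/HeapGuard.c, which is not quoted in this reply.)

  //
  // Illustrative only: hide one 4KB page and note whether the page table had
  // to be split, which consumes pages reserved earlier through PageAlloc().
  //
  STATIC
  EFI_STATUS
  SetGuardPageSketch (
    IN EFI_PHYSICAL_ADDRESS  GuardPageAddress   // page-aligned page to hide
    )
  {
    EFI_STATUS  Status;
    BOOLEAN     Splitted;

    Splitted = FALSE;
    Status   = SmmSetMemoryAttributesEx (GuardPageAddress, EFI_PAGE_SIZE,
                                         EFI_MEMORY_RP, &Splitted);
    if (!EFI_ERROR (Status) && Splitted) {
      //
      // A large page entry was broken into smaller ones; the reserved
      // page-table page pool may need to be replenished before the next
      // conversion.
      //
    }
    return Status;
  }

The matching disarm path would call SmmClearMemoryAttributesEx() with the same attribute to make the page accessible again.
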
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> > new file mode 100644
> > index 0000000000..61a64af370
> > --- /dev/null
> > +++ b/MdeModulePkg/Core/PiSmmCore/Misc/PageTable.h
> > @@ -0,0 +1,174 @@
> > +/** @file
> > + Page table management header file.
> > +
> > + Copyright (c) 2017, Intel Corporation. All rights reserved.<BR>
> > + This program and the accompanying materials
> > + are licensed and made available under the terms and conditions of the BSD
> > License
> > + which accompanies this distribution. The full text of the license may be
> > found at
> > + http://opensource.org/licenses/bsd-license.php
> > +
> > + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS"
> > BASIS,
> > + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER
> > EXPRESS OR IMPLIED.
> > +
> > +**/
> > +
> > +#ifndef _PAGE_TABLE_LIB_H_
> > +#define _PAGE_TABLE_LIB_H_
> > +
> > +///
> > +/// Page Table Entry
> > +///
> > +#define IA32_PG_P BIT0
> > +#define IA32_PG_RW BIT1
> > +#define IA32_PG_U BIT2
> > +#define IA32_PG_WT BIT3
> > +#define IA32_PG_CD BIT4
> > +#define IA32_PG_A BIT5
> > +#define IA32_PG_D BIT6
> > +#define IA32_PG_PS BIT7
> > +#define IA32_PG_PAT_2M BIT12
> > +#define IA32_PG_PAT_4K IA32_PG_PS
> > +#define IA32_PG_PMNT BIT62
> > +#define IA32_PG_NX BIT63
> > +
> > +#define PAGE_ATTRIBUTE_BITS            (IA32_PG_D | IA32_PG_A | IA32_PG_U | IA32_PG_RW | IA32_PG_P)
> > +//
> > +// Bits 1, 2, 5, 6 are reserved in the IA32 PAE PDPTE
> > +// X64 PAE PDPTE does not have such restriction
> > +//
> > +#define IA32_PAE_PDPTE_ATTRIBUTE_BITS (IA32_PG_P)
> > +
> > +#define PAGE_PROGATE_BITS (IA32_PG_NX | PAGE_ATTRIBUTE_BITS)
> > +
> > +#define PAGING_4K_MASK 0xFFF
> > +#define PAGING_2M_MASK 0x1FFFFF
> > +#define PAGING_1G_MASK 0x3FFFFFFF
> > +
> > +#define PAGING_PAE_INDEX_MASK 0x1FF
> > +
> > +#define PAGING_4K_ADDRESS_MASK_64 0x000FFFFFFFFFF000ull
> > +#define PAGING_2M_ADDRESS_MASK_64 0x000FFFFFFFE00000ull
> > +#define PAGING_1G_ADDRESS_MASK_64 0x000FFFFFC0000000ull
> > +
> > +#define SMRR_MAX_ADDRESS BASE_4GB
> > +
> > +typedef enum {
> > + PageNone = 0,
> > + Page4K,
> > + Page2M,
> > + Page1G,
> > +} PAGE_ATTRIBUTE;
> > +
> > +typedef struct {
> > + PAGE_ATTRIBUTE Attribute;
> > + UINT64 Length;
> > + UINT64 AddressMask;
> > +} PAGE_ATTRIBUTE_TABLE;
> > +
> > +/**
> > +  Helper function to allocate pages without Guard pages, for internal use.
> > +
> > +  @param[in]  Pages  The number of pages to allocate.
> > +
> > +  @return Address of the allocated memory.
> > +**/
> > +VOID *
> > +PageAlloc (
> > + IN UINTN Pages
> > + );
> > +
> > +/**
> > +  This function sets the attributes for the memory region specified by BaseAddress and
> > +  Length from their current attributes to the attributes specified by Attributes.
> > +
> > +  @param[in]  BaseAddress  The physical address that is the start address of a memory region.
> > +  @param[in]  Length       The size in bytes of the memory region.
> > +  @param[in]  Attributes   The bit mask of attributes to set for the memory region.
> > +
> > +  @retval EFI_SUCCESS           The attributes were set for the memory region.
> > +  @retval EFI_ACCESS_DENIED     The attributes for the memory resource range specified by
> > +                                BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be set together.
> > +  @retval EFI_OUT_OF_RESOURCES  There are not enough system resources to modify the attributes of
> > +                                the memory resource range.
> > +  @retval EFI_UNSUPPORTED       The processor does not support one or more bytes of the memory
> > +                                resource range specified by BaseAddress and Length.
> > +                                The bit mask of attributes is not supported for the memory resource
> > +                                range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmSetMemoryAttributes (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes
> > + );
> > +
> > +/**
> > +  This function clears the given Attributes from the current attributes of the memory
> > +  region specified by BaseAddress and Length.
> > +
> > +  @param[in]  BaseAddress  The physical address that is the start address of a memory region.
> > +  @param[in]  Length       The size in bytes of the memory region.
> > +  @param[in]  Attributes   The bit mask of attributes to clear for the memory region.
> > +
> > +  @retval EFI_SUCCESS           The attributes were cleared for the memory region.
> > +  @retval EFI_ACCESS_DENIED     The attributes for the memory resource range specified by
> > +                                BaseAddress and Length cannot be modified.
> > +  @retval EFI_INVALID_PARAMETER Length is zero.
> > +                                Attributes specified an illegal combination of attributes that
> > +                                cannot be cleared together.
> > +  @retval EFI_OUT_OF_RESOURCES  There are not enough system resources to modify the attributes of
> > +                                the memory resource range.
> > +  @retval EFI_UNSUPPORTED       The processor does not support one or more bytes of the memory
> > +                                resource range specified by BaseAddress and Length.
> > +                                The bit mask of attributes is not supported for the memory resource
> > +                                range specified by BaseAddress and Length.
> > +
> > +**/
> > +EFI_STATUS
> > +EFIAPI
> > +SmmClearMemoryAttributes (
> > + IN EFI_PHYSICAL_ADDRESS BaseAddress,
> > + IN UINT64 Length,
> > + IN UINT64 Attributes
> > + );
> > +
> > +/**
> > + Initialize the Page Table lib.
> > +**/
> > +VOID
> > +InitializePageTableLib (
> > + VOID
> > + );
> > +
> > +/**
> > + Return page table base.
> > +
> > + @return page table base.
> > +**/
> > +UINTN
> > +GetPageTableBase (
> > + VOID
> > + );
> > +
> > +/**
> > +  Return the page table entry that maps the given address.
> > +
> > +  @param[in]  Address        The address to be checked.
> > +  @param[out] PageAttribute  The page attributes of the matching page entry.
> > +
> > + @return The page entry.
> > +**/
> > +VOID *
> > +GetPageTableEntry (
> > + IN PHYSICAL_ADDRESS Address,
> > + OUT PAGE_ATTRIBUTE *PageAttribute
> > + );
> > +
> > +#endif
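
(Also for reference, a minimal check built only on the declarations above, showing how a VerifyMemoryGuard()-style helper could confirm that the page just below an allocation really is not present. The helper name and the reliance on IA32_PG_P are my reading of this header; the actual implementation lives in Misc/HeapGuard.c.)

  //
  // Illustrative only: TRUE if the 4KB page immediately below BaseAddress is
  // mapped by a 4K entry whose present bit is clear, i.e. the head guard page
  // is armed.
  //
  STATIC
  BOOLEAN
  IsHeadGuardArmedSketch (
    IN EFI_PHYSICAL_ADDRESS  BaseAddress   // start of the guarded allocation
    )
  {
    UINT64          *Entry;
    PAGE_ATTRIBUTE  Attribute;

    Entry = (UINT64 *)GetPageTableEntry (BaseAddress - EFI_PAGE_SIZE, &Attribute);
    if (Entry == NULL || Attribute != Page4K) {
      return FALSE;   // a guard page is expected to be covered by a 4K entry
    }
    return (BOOLEAN)((*Entry & IA32_PG_P) == 0);
  }
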
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Page.c b/MdeModulePkg/Core/PiSmmCore/Page.c
> > index 4154c2e6a1..29d1311f5a 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/Page.c
> > +++ b/MdeModulePkg/Core/PiSmmCore/Page.c
> > @@ -64,6 +64,8 @@ LIST_ENTRY mFreeMemoryMapEntryList = INITIALIZE_LIST_HEAD_VARIABLE (mFreeMemor
> >    @param[out]  Memory                 A pointer to receive the base allocated memory address.
> > @param[in] AddRegion If this memory is new added region.
> > + @param[in] NeedGuard Flag to indicate Guard page is needed
> > + or not
> >
> >    @retval EFI_INVALID_PARAMETER  Parameters violate checking rules defined in spec.
> >    @retval EFI_NOT_FOUND          Could not allocate pages match the requirement.
> > @@ -77,7 +79,8 @@ SmmInternalAllocatePagesEx (
> > IN EFI_MEMORY_TYPE MemoryType,
> > IN UINTN NumberOfPages,
> > OUT EFI_PHYSICAL_ADDRESS *Memory,
> > - IN BOOLEAN AddRegion
> > + IN BOOLEAN AddRegion,
> > + IN BOOLEAN NeedGuard
> > );
> >
> > /**
> > @@ -112,7 +115,8 @@ AllocateMemoryMapEntry (
> > EfiRuntimeServicesData,
> > EFI_SIZE_TO_PAGES (RUNTIME_PAGE_ALLOCATION_GRANULARITY),
> > &Mem,
> > - TRUE
> > + TRUE,
> > + FALSE
> > );
> > ASSERT_EFI_ERROR (Status);
> > if(!EFI_ERROR (Status)) {
> > @@ -688,6 +692,8 @@ InternalAllocAddress (
> >    @param[out]  Memory                 A pointer to receive the base allocated memory address.
> > @param[in] AddRegion If this memory is new added region.
> > + @param[in] NeedGuard Flag to indicate Guard page is needed
> > + or not
> >
> >    @retval EFI_INVALID_PARAMETER  Parameters violate checking rules defined in spec.
> >    @retval EFI_NOT_FOUND          Could not allocate pages match the requirement.
> > @@ -701,7 +707,8 @@ SmmInternalAllocatePagesEx (
> > IN EFI_MEMORY_TYPE MemoryType,
> > IN UINTN NumberOfPages,
> > OUT EFI_PHYSICAL_ADDRESS *Memory,
> > - IN BOOLEAN AddRegion
> > + IN BOOLEAN AddRegion,
> > + IN BOOLEAN NeedGuard
> > )
> > {
> > UINTN RequestedAddress;
> > @@ -723,6 +730,21 @@ SmmInternalAllocatePagesEx (
> > case AllocateAnyPages:
> > RequestedAddress = (UINTN)(-1);
> > case AllocateMaxAddress:
> > + if (NeedGuard) {
> > + *Memory = InternalAllocMaxAddressWithGuard (
> > + &mSmmMemoryMap,
> > + NumberOfPages,
> > + RequestedAddress,
> > + MemoryType
> > + );
> > + if (*Memory == (UINTN)-1) {
> > + return EFI_OUT_OF_RESOURCES;
> > + } else {
> > + ASSERT (VerifyMemoryGuard(*Memory, NumberOfPages) == TRUE);
> > + return EFI_SUCCESS;
> > + }
> > + }
> > +
> > *Memory = InternalAllocMaxAddress (
> > &mSmmMemoryMap,
> > NumberOfPages,
> > @@ -766,6 +788,8 @@ SmmInternalAllocatePagesEx (
> > @param[in] NumberOfPages The number of pages to allocate.
> >    @param[out] Memory                A pointer to receive the base allocated memory address.
> > + @param[in] NeedGuard Flag to indicate Guard page is needed
> > + or not
> >
> >    @retval EFI_INVALID_PARAMETER  Parameters violate checking rules defined in spec.
> >    @retval EFI_NOT_FOUND          Could not allocate pages match the requirement.
> > @@ -779,10 +803,12 @@ SmmInternalAllocatePages (
> > IN EFI_ALLOCATE_TYPE Type,
> > IN EFI_MEMORY_TYPE MemoryType,
> > IN UINTN NumberOfPages,
> > - OUT EFI_PHYSICAL_ADDRESS *Memory
> > + OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN BOOLEAN NeedGuard
> > )
> > {
> > -  return SmmInternalAllocatePagesEx (Type, MemoryType, NumberOfPages, Memory, FALSE);
> > +  return SmmInternalAllocatePagesEx (Type, MemoryType, NumberOfPages, Memory,
> > +                                     FALSE, NeedGuard);
> > }
> >
> > /**
> > @@ -811,8 +837,11 @@ SmmAllocatePages (
> > )
> > {
> > EFI_STATUS Status;
> > + BOOLEAN NeedGuard;
> >
> > -  Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory);
> > +  NeedGuard = IsPageTypeToGuard (MemoryType, Type);
> > +  Status = SmmInternalAllocatePages (Type, MemoryType, NumberOfPages, Memory,
> > +                                     NeedGuard);
> > if (!EFI_ERROR (Status)) {
> > SmmCoreUpdateProfile (
> > (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> > @@ -941,9 +970,13 @@ EFI_STATUS
> > EFIAPI
> > SmmInternalFreePages (
> > IN EFI_PHYSICAL_ADDRESS Memory,
> > - IN UINTN NumberOfPages
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN IsGuarded
> > )
> > {
> > + if (IsGuarded) {
> > +    return SmmInternalFreePagesExWithGuard (Memory, NumberOfPages, FALSE);
> > + }
> > return SmmInternalFreePagesEx (Memory, NumberOfPages, FALSE);
> > }
> >
> > @@ -966,8 +999,10 @@ SmmFreePages (
> > )
> > {
> > EFI_STATUS Status;
> > + BOOLEAN IsGuarded;
> >
> > - Status = SmmInternalFreePages (Memory, NumberOfPages);
> > + IsGuarded = IsHeapGuardEnabled () && IsMemoryGuarded (Memory);
> > + Status = SmmInternalFreePages (Memory, NumberOfPages, IsGuarded);
> > if (!EFI_ERROR (Status)) {
> > SmmCoreUpdateProfile (
> > (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> > diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> > index 9e4390e15a..b4609c2fed 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> > +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.c
> > @@ -451,6 +451,11 @@ SmmEntryPoint (
> > //
> > PlatformHookBeforeSmmDispatch ();
> >
> > + //
> > + // Call memory management hook function
> > + //
> > + SmmEntryPointMemoryManagementHook ();
> > +
> > //
> >    // If a legacy boot has occured, then make sure gSmmCorePrivate is not accessed
> > //
> > @@ -644,7 +649,12 @@ SmmMain (
> > //
> > gSmmCorePrivate->Smst = &gSmmCoreSmst;
> > gSmmCorePrivate->SmmEntryPoint = SmmEntryPoint;
> > -
> > +
> > + //
> > + // Initialize page table operations
> > + //
> > + InitializePageTableLib();
> > +
> > //
> > // No need to initialize memory service.
> > // It is done in constructor of PiSmmCoreMemoryAllocationLib(),
> > diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> > index b6f815c68d..8c61fdcf0c 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> > +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.h
> > @@ -59,6 +59,7 @@
> > #include <Library/SmmMemLib.h>
> >
> > #include "PiSmmCorePrivateData.h"
> > +#include "Misc/HeapGuard.h"
> >
> > //
> > // Used to build a table of SMI Handlers that the SMM Core registers
> > @@ -317,6 +318,7 @@ SmmAllocatePages (
> > @param NumberOfPages The number of pages to allocate
> > @param Memory A pointer to receive the base allocated memory
> >    @param  Memory                 A pointer to receive the base allocated memory address
> >
> >    @retval EFI_INVALID_PARAMETER  Parameters violate checking rules defined in spec.
> >    @retval EFI_NOT_FOUND          Could not allocate pages match the requirement.
> > @@ -330,7 +332,8 @@ SmmInternalAllocatePages (
> > IN EFI_ALLOCATE_TYPE Type,
> > IN EFI_MEMORY_TYPE MemoryType,
> > IN UINTN NumberOfPages,
> > - OUT EFI_PHYSICAL_ADDRESS *Memory
> > + OUT EFI_PHYSICAL_ADDRESS *Memory,
> > + IN BOOLEAN NeedGuard
> > );
> >
> > /**
> > @@ -356,6 +359,8 @@ SmmFreePages (
> >
> > @param Memory Base address of memory being freed
> > @param NumberOfPages The number of pages to free
> > + @param IsGuarded Flag to indicate if the memory is guarded
> > + or not
> >
> >    @retval EFI_NOT_FOUND          Could not find the entry that covers the range
> >    @retval EFI_INVALID_PARAMETER  Address not aligned, Address is zero or NumberOfPages is zero.
> > @@ -366,7 +371,8 @@ EFI_STATUS
> > EFIAPI
> > SmmInternalFreePages (
> > IN EFI_PHYSICAL_ADDRESS Memory,
> > - IN UINTN NumberOfPages
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN IsGuarded
> > );
> >
> > /**
> > @@ -1231,4 +1237,74 @@ typedef enum {
> >
> >  extern LIST_ENTRY  mSmmPoolLists[SmmPoolTypeMax][MAX_POOL_INDEX];
> >
> > +/**
> > + Internal Function. Allocate n pages from given free page node.
> > +
> > + @param Pages The free page node.
> > + @param NumberOfPages Number of pages to be allocated.
> > +  @param  MaxAddress             Request to allocate memory below this address.
> > +
> > + @return Memory address of allocated pages.
> > +
> > +**/
> > +UINTN
> > +InternalAllocPagesOnOneNode (
> > + IN OUT FREE_PAGE_LIST *Pages,
> > + IN UINTN NumberOfPages,
> > + IN UINTN MaxAddress
> > + );
> > +
> > +/**
> > + Update SMM memory map entry.
> > +
> > + @param[in] Type The type of allocation to perform.
> > + @param[in] Memory The base of memory address.
> > + @param[in] NumberOfPages The number of pages to allocate.
> > + @param[in] AddRegion If this memory is new added region.
> > +**/
> > +VOID
> > +ConvertSmmMemoryMapEntry (
> > + IN EFI_MEMORY_TYPE Type,
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN AddRegion
> > + );
> > +
> > +/**
> > + Internal function. Moves any memory descriptors that are on the
> > + temporary descriptor stack to heap.
> > +
> > +**/
> > +VOID
> > +CoreFreeMemoryMapStack (
> > + VOID
> > + );
> > +
> > +/**
> > + Frees previous allocated pages.
> > +
> > + @param[in] Memory Base address of memory being freed.
> > + @param[in] NumberOfPages The number of pages to free.
> > + @param[in] AddRegion If this memory is new added region.
> > +
> > +  @retval EFI_NOT_FOUND          Could not find the entry that covers the range.
> > +  @retval EFI_INVALID_PARAMETER  Address not aligned, Address is zero or NumberOfPages is zero.
> > + @return EFI_SUCCESS Pages successfully freed.
> > +
> > +**/
> > +EFI_STATUS
> > +SmmInternalFreePagesEx (
> > + IN EFI_PHYSICAL_ADDRESS Memory,
> > + IN UINTN NumberOfPages,
> > + IN BOOLEAN AddRegion
> > + );
> > +
> > +/**
> > + Hook function used to set all Guard pages after entering SMM mode
> > +**/
> > +VOID
> > +SmmEntryPointMemoryManagementHook (
> > + VOID
> > + );
> > +
> > #endif
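
(On SmmEntryPointMemoryManagementHook(): my understanding is that guard pages for allocations made before the first SMI cannot be hidden until the SMM page table is actually in use, so the hook applies them once on entry. The sketch below is only the shape I would expect; the one-shot flag and the commented helper are illustrative, and the real logic is in Misc/HeapGuard.c.)

  //
  // Illustrative only: apply all recorded guard pages on the first SMM entry
  // and do nothing on later entries.
  //
  VOID
  SmmEntryPointMemoryManagementHookSketch (
    VOID
    )
  {
    STATIC BOOLEAN  mGuardPagesApplied = FALSE;   // hypothetical one-shot flag

    if (!mGuardPagesApplied) {
      mGuardPagesApplied = TRUE;
      //
      // Walk whatever bookkeeping HeapGuard.c keeps (for example a bitmap of
      // guarded pages) and call SmmSetMemoryAttributes() on each guard page:
      //
      // SetAllGuardPagesSketch ();
      //
    }
  }
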
> > diff --git a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> > index 49ae6fbb57..e505b165bc 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> > +++ b/MdeModulePkg/Core/PiSmmCore/PiSmmCore.inf
> > @@ -40,6 +40,8 @@
> > SmramProfileRecord.c
> > MemoryAttributesTable.c
> > SmiHandlerProfile.c
> > + Misc/HeapGuard.c
> > + Misc/PageTable.c
> >
> > [Packages]
> > MdePkg/MdePkg.dec
> > @@ -65,6 +67,7 @@
> > HobLib
> > SmmMemLib
> > DxeServicesLib
> > + CpuLib
> >
> > [Protocols]
> >    gEfiDxeSmmReadyToLockProtocolGuid             ## UNDEFINED # SmiHandlerRegister
> > @@ -88,6 +91,7 @@
> > gEfiSmmGpiDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> > gEfiSmmIoTrapDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> > gEfiSmmUsbDispatch2ProtocolGuid ## SOMETIMES_CONSUMES
> > + gEfiSmmCpuProtocolGuid ## SOMETIMES_CONSUMES
> >
> > [Pcd]
> >
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdLoadFixAddressSmmCodePageNumber       ## SOMETIMES_CONSUMES
> > @@ -96,6 +100,10 @@
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfilePropertyMask              ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdMemoryProfileDriverPath                ## CONSUMES
> >    gEfiMdeModulePkgTokenSpaceGuid.PcdSmiHandlerProfilePropertyMask          ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPageType                      ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                      ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask                  ## CONSUMES
> > +  gEfiMdeModulePkgTokenSpaceGuid.PcdPteMemoryEncryptionAddressOrMask       ## CONSUMES
> >
> > [Guids]
> > gAprioriGuid ## SOMETIMES_CONSUMES ## File
> > diff --git a/MdeModulePkg/Core/PiSmmCore/Pool.c b/MdeModulePkg/Core/PiSmmCore/Pool.c
> > index 36317563c4..1f9213ea6e 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/Pool.c
> > +++ b/MdeModulePkg/Core/PiSmmCore/Pool.c
> > @@ -144,7 +144,9 @@ InternalAllocPoolByIndex (
> > Status = EFI_SUCCESS;
> > Hdr = NULL;
> > if (PoolIndex == MAX_POOL_INDEX) {
> > -    Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1), &Address);
> > +    Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType,
> > +                                       EFI_SIZE_TO_PAGES (MAX_POOL_SIZE << 1),
> > +                                       &Address, FALSE);
> > if (EFI_ERROR (Status)) {
> > return EFI_OUT_OF_RESOURCES;
> > }
> > @@ -243,6 +245,9 @@ SmmInternalAllocatePool (
> > EFI_STATUS Status;
> > EFI_PHYSICAL_ADDRESS Address;
> > UINTN PoolIndex;
> > + BOOLEAN HasPoolTail;
> > + BOOLEAN NeedGuard;
> > + UINTN NoPages;
> >
> > Address = 0;
> >
> > @@ -251,25 +256,45 @@ SmmInternalAllocatePool (
> > return EFI_INVALID_PARAMETER;
> > }
> >
> > + NeedGuard = IsPoolTypeToGuard (PoolType);
> > + HasPoolTail = !(NeedGuard &&
> > + ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +
> > //
> > // Adjust the size by the pool header & tail overhead
> > //
> > Size += POOL_OVERHEAD;
> > - if (Size > MAX_POOL_SIZE) {
> > - Size = EFI_SIZE_TO_PAGES (Size);
> > -    Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, Size, &Address);
> > + if (Size > MAX_POOL_SIZE || NeedGuard) {
> > + if (!HasPoolTail) {
> > + Size -= sizeof (POOL_TAIL);
> > + }
> > +
> > + NoPages = EFI_SIZE_TO_PAGES (Size);
> > +    Status = SmmInternalAllocatePages (AllocateAnyPages, PoolType, NoPages,
> > +                                       &Address, NeedGuard);
> > if (EFI_ERROR (Status)) {
> > return Status;
> > }
> >
> > + if (NeedGuard) {
> > + ASSERT (VerifyMemoryGuard(Address, NoPages) == TRUE);
> > + DEBUG ((DEBUG_INFO, "SmmInternalAllocatePool: %lx ->", Address));
> > +      Address = (EFI_PHYSICAL_ADDRESS)AdjustPoolHeadA (Address, NoPages, Size);
> > + DEBUG ((DEBUG_INFO, " %lx %d %x\r\n", Address, NoPages, Size));
> > + }
> > +
> > PoolHdr = (POOL_HEADER*)(UINTN)Address;
> > PoolHdr->Signature = POOL_HEAD_SIGNATURE;
> > - PoolHdr->Size = EFI_PAGES_TO_SIZE (Size);
> > + PoolHdr->Size = Size; //EFI_PAGES_TO_SIZE (NoPages)
> > PoolHdr->Available = FALSE;
> > PoolHdr->Type = PoolType;
> > - PoolTail = HEAD_TO_TAIL(PoolHdr);
> > - PoolTail->Signature = POOL_TAIL_SIGNATURE;
> > - PoolTail->Size = PoolHdr->Size;
> > +
> > + if (HasPoolTail) {
> > + PoolTail = HEAD_TO_TAIL (PoolHdr);
> > + PoolTail->Signature = POOL_TAIL_SIGNATURE;
> > + PoolTail->Size = PoolHdr->Size;
> > + }
> > +
> > *Buffer = PoolHdr + 1;
> > return Status;
> > }
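
(To make the BIT7 handling in this hunk easier to follow: when BIT7 of PcdHeapGuardPropertyMask is clear, the returned pool is placed flush against the tail guard page, which is why POOL_TAIL is dropped and the pool head is shifted toward the end of the allocated pages. Below is a sketch of what AdjustPoolHeadA() presumably does in that case; the real helper is in Misc/HeapGuard.c and is not quoted here, so treat the body as illustrative.)

  //
  // Illustrative only: shift the pool so that its last requested byte lands on
  // the last byte of the allocated pages, right before the tail guard page.
  //
  EFI_PHYSICAL_ADDRESS
  AdjustPoolHeadASketch (
    IN EFI_PHYSICAL_ADDRESS  Memory,    // page-aligned start of the allocation
    IN UINTN                 NoPages,   // number of pages actually allocated
    IN UINTN                 Size       // bytes requested, including the pool header
    )
  {
    if ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) != 0) {
      return Memory;   // pool stays at the head of the allocation
    }
    return Memory + EFI_PAGES_TO_SIZE (NoPages) - Size;
  }
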
> > @@ -341,28 +366,45 @@ SmmInternalFreePool (
> > {
> > FREE_POOL_HEADER *FreePoolHdr;
> > POOL_TAIL *PoolTail;
> > + BOOLEAN HasPoolTail;
> > + BOOLEAN MemoryGuarded;
> >
> > if (Buffer == NULL) {
> > return EFI_INVALID_PARAMETER;
> > }
> >
> > + MemoryGuarded = IsHeapGuardEnabled () &&
> > + IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Buffer);
> > + HasPoolTail = !(MemoryGuarded &&
> > + ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> > +
> > FreePoolHdr = (FREE_POOL_HEADER*)((POOL_HEADER*)Buffer - 1);
> > ASSERT (FreePoolHdr->Header.Signature == POOL_HEAD_SIGNATURE);
> > ASSERT (!FreePoolHdr->Header.Available);
> > - PoolTail = HEAD_TO_TAIL(&FreePoolHdr->Header);
> > - ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
> > - ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
> > -
> > if (FreePoolHdr->Header.Signature != POOL_HEAD_SIGNATURE) {
> > return EFI_INVALID_PARAMETER;
> > }
> >
> > - if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
> > - return EFI_INVALID_PARAMETER;
> > + if (HasPoolTail) {
> > + PoolTail = HEAD_TO_TAIL (&FreePoolHdr->Header);
> > + ASSERT (PoolTail->Signature == POOL_TAIL_SIGNATURE);
> > + ASSERT (FreePoolHdr->Header.Size == PoolTail->Size);
> > + if (PoolTail->Signature != POOL_TAIL_SIGNATURE) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > +
> > + if (FreePoolHdr->Header.Size != PoolTail->Size) {
> > + return EFI_INVALID_PARAMETER;
> > + }
> > }
> >
> > - if (FreePoolHdr->Header.Size != PoolTail->Size) {
> > - return EFI_INVALID_PARAMETER;
> > + if (MemoryGuarded) {
> > +    Buffer = AdjustPoolHeadF ((EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr);
> > + return SmmInternalFreePages (
> > + (EFI_PHYSICAL_ADDRESS)(UINTN)Buffer,
> > + EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
> > + TRUE
> > + );
> > }
> >
> > if (FreePoolHdr->Header.Size > MAX_POOL_SIZE) {
> > @@ -370,7 +412,8 @@ SmmInternalFreePool (
> > ASSERT ((FreePoolHdr->Header.Size & EFI_PAGE_MASK) == 0);
> > return SmmInternalFreePages (
> > (EFI_PHYSICAL_ADDRESS)(UINTN)FreePoolHdr,
> > - EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size)
> > + EFI_SIZE_TO_PAGES (FreePoolHdr->Header.Size),
> > + FALSE
> > );
> > }
> > return InternalFreePoolByIndex (FreePoolHdr, PoolTail);
> > --
> > 2.14.1.windows.1
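
(One closing note on the Page.c flow above: my reading is that InternalAllocMaxAddressWithGuard() reserves two extra pages and returns the address of the middle portion, which VerifyMemoryGuard() then checks. Both helpers live in Misc/HeapGuard.c and are not quoted in this reply, so the figure and the helper below are illustrative only.)

  //
  // Illustrative layout of a guarded allocation of NumberOfPages:
  //
  //   +--------------+--------------------------------+--------------+
  //   |  guard page  |  NumberOfPages usable pages    |  guard page  |
  //   |  not present |  (address returned to caller)  |  not present |
  //   +--------------+--------------------------------+--------------+
  //
  UINTN
  GuardedAllocationPagesSketch (
    IN UINTN  NumberOfPages
    )
  {
    return NumberOfPages + 2;   // assumption: one hidden page at each end
  }
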
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2017-10-13 6:11 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-11 3:18 [PATCH 0/5] Implement heap guard feature Jian J Wang
2017-10-11 3:18 ` [PATCH 1/5] MdeModulePkg/DxeCore: Implement heap guard feature for UEFI Jian J Wang
2017-10-11 3:18 ` [PATCH 2/5] MdeModulePkg/PiSmmCore: Implement heap guard feature for SMM mode Jian J Wang
2017-10-13 1:27 ` Dong, Eric
2017-10-13 6:15 ` Wang, Jian J
2017-10-11 3:18 ` [PATCH 3/5] MdeModulePkg/MdeModulePkg.dec, .uni: Add heap guard related PCDs and string tokens Jian J Wang
2017-10-11 3:18 ` [PATCH 4/5] UefiCpuPkg/CpuDxe: Reduce debug message Jian J Wang
2017-10-11 3:18 ` [PATCH 5/5] UefiCpuPkg/PiSmmCpuDxeSmm: Disable page table protection Jian J Wang
2017-10-13 1:24 ` Dong, Eric
2017-10-13 6:14 ` Wang, Jian J